Re: CoreAudio driver's buffer management
- Subject: Re: CoreAudio driver's buffer management
- From: Jeff Moore <email@hidden>
- Date: Wed, 22 Dec 2004 16:04:04 -0800
On Dec 22, 2004, at 3:20 PM, Tommy Schell wrote:
I mean to say that I'd like to do a kernel space CoreAudio driver.
My dilemma is that I already have a user space firewire driver which
pulls audio (and video) off of firewire.
I need to bring the audio over to CoreAudio, but I don't want to write
a user space CoreAudio driver, as there
is little documentation for it.
So I want to do a kernel space CoreAudio driver, with the data
provided not directly by the hardware, but by the
user space firewire driver, which interacts with the hardware (via the
firewire device interface).
Is this feasible?
Yes. You'd need to reflect the data back into your kernel driver using
shared memory. You'd most likely need to put your user-land pieces into
a daemon to properly handle the resource management.
It may be easier to invert your design and put the FireWire code in the
kernel with the audio driver, pushing the video data out to apps via
shared memory.
Also, I can't seem to get the PhantomAudioDriver to provide dummy data
to the app. I write data into the input buffer
used in setSampleBuffer, but the app or the HAL never asks for it.
How to actually get audio data flowing up to the app?
I saw a reference in the documentation to startAudioEngine. The doc
says this must be subclassed; is that true?
PhantomAudioDriver doesn't subclass this function.
The driver doesn't push data into an application. It's the other way
around. The HAL runs a thread on behalf of the app that calls into the
driver to read/write data. The HAL also is the one that tells the
driver when to start and stop by calling the IOAudioEngine methods for
starting and stopping.
So, the short answer to your question is that the app will read the
data when it wants to. You'd have to be more specific about the
circumstances you are testing for me to give you a more precise answer
about why the input data may not be getting read.
Thanks a lot,
Tommy Schell
On Dec 21, 2004, at 1:04 PM, email@hidden wrote:
Message: 1
Date: Mon, 20 Dec 2004 13:03:51 -0800
From: Jeff Moore <email@hidden>
Subject: Re: CoreAudio driver's buffer management
To: CoreAudio API <email@hidden>
You'll need to be a bit more specific. If you are writing a user-space
driver, you don't have the IOAudio family's services available to you.
You are completely on your own with respect to buffer management. The
only constraints are the semantics that the HAL's API imposes.
On Dec 20, 2004, at 7:38 AM, Tommy Schell wrote:
Hi,
Suppose I have a CoreAudio driver which, instead of pulling data
directly from the hardware, receives (and gives) audio data via a user
space firewire driver. That is, incoming data comes from user space
down to the CoreAudio driver, and outgoing data goes from the CoreAudio
driver up to the user space firewire driver. Would that negatively
impact buffer creation or management?
And then how exactly is a CoreAudio driver's ring buffer managed? How
does the data get wrapped around? Does it all happen automatically
behind the scenes? If the data came from user space, would the ring
buffer be affected?
How would I trigger an interrupt when the ring buffer wraps around (to
take a time stamp), if the CoreAudio driver isn't dealing directly with
hardware?
Thanks a lot,
Tommy Schell
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden
--
Jeff Moore
Core Audio
Apple