Re: Trying to port my app to Core Audio. Audio Sync
- Subject: Re: Trying to port my app to Core Audio. Audio Sync
- From: James McCartney <email@hidden>
- Date: Mon, 2 Feb 2004 15:30:47 -0800
On Feb 2, 2004, at 1:30 PM, Ralph Hill wrote:
> I am working on porting an existing app from Linux to OS X. In order
> to port this app to OS X, I need to be able to implement this method:
>
> AudioOutput::Play(void const*const audioBuffer, const int
> numAudioFrames, uint64_t* playTime);
>
> Where
>
> - audioBuffer is a buffer of audio data in PCM format (details of the
>   format are set in the open call)
> - numAudioFrames is a count of the number of audio frames in audioBuffer
> - playTime is a pointer to an unsigned 64-bit int that receives an
>   estimate of the value the processor clock will have when the first
>   audio frame in the buffer hits the output jacks.
>
> I am trying to figure out the best way to implement this method on top
> of Core Audio. Note that the model of operation here is quite
> different from Core Audio. The two key differences are:
> 1. my app decides when audio is to be put into the output buffer (push
>    model)
> 2. my app decides how much audio data will be put into the output buffer
>
> The only way I can see to solve this problem is to add an extra layer
> of processing, and to set the buffer size in Core Audio to be
> significantly smaller than the typical amount of data my app will
> write (which is around 40msec).
>
> Here is what I propose:
>
> 1. Set the audio buffer size to be appropriate for about 10msec of
>    audio data.
>
> 2. Implement a ring buffer shared by the AudioOutput::Play method and
>    the AudioDeviceIOProc callback. The play method puts data into this
>    buffer and estimates the time at which it will be played by working
>    from the inOutputTime value from AudioDeviceIOProc, the elapsed time
>    since the last call to AudioDeviceIOProc, and the amount of audio
>    data in the shared ring buffer. The AudioDeviceIOProc callback
>    consumes data from the buffer and saves the value from the
>    inOutputTime parameter and the current processor clock value.
>
> Some questions/concerns:
>
> 1. My application is processor intensive and time critical.
If this is strictly true, then insisting on staying outside the callback is
not the best approach. CoreAudio goes to a lot of work to provide low
latency, and while pushing from outside is possible, it is not the way
to get the lowest latency. I would suggest that you rewrite your engine
to take advantage of what CoreAudio provides.
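
For example, a minimal pull-model setup against the HAL (a sketch using
AudioDeviceAddIOProc and friends from AudioHardware.h; all error
checking omitted):

#include <CoreAudio/AudioHardware.h>

static OSStatus MyIOProc(AudioDeviceID inDevice,
                         const AudioTimeStamp* inNow,
                         const AudioBufferList* inInputData,
                         const AudioTimeStamp* inInputTime,
                         AudioBufferList* outOutputData,
                         const AudioTimeStamp* inOutputTime,
                         void* inClientData)
{
    // Render straight into outOutputData here; the HAL calls this on
    // its own high-priority thread once per IO cycle.
    return noErr;
}

static void StartPulling(void)
{
    AudioDeviceID device;
    UInt32 size = sizeof(device);
    AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                             &size, &device);
    AudioDeviceAddIOProc(device, MyIOProc, NULL);
    AudioDeviceStart(device, MyIOProc);
}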
> Will setting the buffer size so small create a high load on the
> processor or increase the risk of audio dropouts?
>
> 2. To me this seems like a lot of processing to do something that
> should be really easy and is probably directly supported by the
> hardware. Am I missing something? Is there an easier way to do this?
If you want to push then you are going to need a ring buffer as you
proposed.
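
In outline, such a buffer might look like the following sketch. The
names (RingBuffer, RingWrite, RingRead) are illustrative only, and a
production version needs memory barriers rather than plain volatile
loads and stores:

#include <CoreAudio/CoreAudioTypes.h>

#define RING_FRAMES 8192                    /* power of two, in samples */

typedef struct {
    float           samples[RING_FRAMES];   /* mono, for simplicity */
    volatile UInt32 readPos;                /* advanced only by the ioProc */
    volatile UInt32 writePos;               /* advanced only by Play() */
} RingBuffer;

/* Producer side: called from Play(). Returns samples actually written. */
static UInt32 RingWrite(RingBuffer* rb, const float* src, UInt32 n)
{
    UInt32 i;
    for (i = 0; i < n; ++i) {
        UInt32 next = (rb->writePos + 1) & (RING_FRAMES - 1);
        if (next == rb->readPos)
            break;                          /* full */
        rb->samples[rb->writePos] = src[i];
        rb->writePos = next;
    }
    return i;
}

/* Consumer side: called from the ioProc. Returns samples actually read. */
static UInt32 RingRead(RingBuffer* rb, float* dst, UInt32 n)
{
    UInt32 i;
    for (i = 0; i < n; ++i) {
        if (rb->readPos == rb->writePos)
            break;                          /* empty: underrun */
        dst[i] = rb->samples[rb->readPos];
        rb->readPos = (rb->readPos + 1) & (RING_FRAMES - 1);
    }
    return i;
}

Play() would then derive *playTime from the inOutputTime->mHostTime the
ioProc last saved plus the duration of the samples already queued ahead
of the new data; AudioConvertHostTimeToNanos() in <CoreAudio/HostTime.h>
converts host time values if you need nanoseconds.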
> I also have some specific questions on the documentation in
> http://developer.apple.com/audio/pdf/coreaudio.pdf:
The best and most up-to-date documentation is AudioHardware.h itself.
> Page 15. "the timestamp of when the first sample frame of the output
> data will be consumed by the driver."
>
> Is there any way to estimate the delay through the driver and audio
> output hardware? I want to know when the audio hits the output jacks.
kAudioDevicePropertyLatency = 'ltnc',
// a UInt32 containing the number of frames of latency in the device
// Note that input and output latency may differ. Further, streams
// may have additional latency so they should be queried as well.
// If both the device and the stream say they have latency, then
// the total latency for the stream is the device latency summed with
// the stream latency.
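
Summing the two might look like this sketch, which assumes a single
output stream and omits error checking:

#include <CoreAudio/AudioHardware.h>

/* Device latency plus the first output stream's latency, in frames. */
static UInt32 TotalOutputLatencyFrames(AudioDeviceID device)
{
    UInt32        deviceLatency = 0, streamLatency = 0;
    AudioStreamID stream = 0;
    UInt32        size;

    size = sizeof(deviceLatency);
    AudioDeviceGetProperty(device, 0, 0 /* isInput = false */,
                           kAudioDevicePropertyLatency,
                           &size, &deviceLatency);

    size = sizeof(stream);                  /* first output stream only */
    AudioDeviceGetProperty(device, 0, 0,
                           kAudioDevicePropertyStreams,
                           &size, &stream);

    size = sizeof(streamLatency);
    AudioStreamGetProperty(stream, 0,
                           kAudioStreamPropertyLatency,
                           &size, &streamLatency);

    return deviceLatency + streamLatency;
}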
> Page 20. "A UInt32 containing the size of the IO buffers in bytes."
>
> That is the size of each buffer, not the accumulated size of all the
> buffers, right?
kAudioDevicePropertyBufferSize = 'bsiz',
// a UInt32 containing the size of the IO buffers in bytes
// This property is deprecated in favor of
// kAudioDevicePropertyBufferFrameSize
i.e. don't use this property any more.
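
Setting the size in frames instead might look like this sketch (512
frames is roughly 11.6 msec at 44.1 kHz, and the HAL may adjust the
value you ask for; error checking omitted):

#include <CoreAudio/AudioHardware.h>

static void SetBufferFrames(AudioDeviceID device)
{
    UInt32 frames = 512;
    AudioDeviceSetProperty(device, NULL, 0, 0 /* isInput = false */,
                           kAudioDevicePropertyBufferFrameSize,
                           sizeof(frames), &frames);
}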
> How many buffers are there? Do I care?
One buffer per stream. You can get this from the buffer lists passed in
to the ioProc: mNumberBuffers.
> Page 23. Description of AudioDeviceIOProc
>
> The description does not explain the outOutputData parameter.
> I am guessing that the client procedure is supposed to fill only
> the first buffer in the list.
There is one buffer per stream. You fill any that you want to have
output other than silence.
> And that it must fill it completely,
yes.
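
Putting those two answers together, the output side of an ioProc might
look like this sketch, reusing the hypothetical RingRead from above and
padding any underrun with silence so each buffer is completely filled:

#include <CoreAudio/AudioHardware.h>
#include <string.h>

/* Fill every buffer in outOutputData completely, treating each one as
   a flat array of Float32 samples to match the mono ring sketch. */
static void FillAllBuffers(AudioBufferList* outOutputData, RingBuffer* rb)
{
    UInt32 i;
    for (i = 0; i < outOutputData->mNumberBuffers; ++i) {
        AudioBuffer* buf     = &outOutputData->mBuffers[i];
        Float32*     dst     = (Float32*)buf->mData;
        UInt32       samples = buf->mDataByteSize / sizeof(Float32);
        UInt32       got     = RingRead(rb, dst, samples);
        /* The buffer must be filled completely: zero the shortfall. */
        memset(dst + got, 0, (samples - got) * sizeof(Float32));
    }
}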
-
james mccartney
apple coreaudio
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.