Trying to port my app to Core Audio. Audio Sync
- Subject: Trying to port my app to Core Audio. Audio Sync
- From: Ralph Hill <email@hidden>
- Date: Mon, 02 Feb 2004 13:30:56 -0800
I am working on porting an existing app from Linux to OS X. To do
this, I need to be able to implement this method:
AudioOutput::Play(void const* const audioBuffer, const int numAudioFrames,
                  uint64_t* playTime);
Where
- audioBuffer is a buffer of audio data in PCM format (details of the
format are set in the open call)
- numAudioFrames is a count of the number of audio frames in audioBuffer
- playTime is a pointer to an unsigned 64-bit int that receives an
estimate of the value the processor clock will have when the first
audio frame in the buffer hits the output jacks.
I am trying to figure out the best way to implement this method on top
of Core Audio. Note that the model of operation here is quite different
from Core Audio's. The two key differences are:
1. my app decides when audio is to be put into the output buffer (push
model)
2. my app decides how much audio data will be put into the output buffer
The only way I can see to solve this problem is to add an extra layer
of processing, and to set the buffer size in Core Audio to be
significantly smaller than the typical amount of data my app will write
(which is around 40msec).
Here is what I propose:
1. Set the audio buffer size to be appropriate for about 10msec of
audio data.
2. Implement a ring buffer shared by the AudioOutput::Play method and
the AudioDeviceIOProc callback. The Play method puts data into this
buffer and estimates the time at which it will be played, working
from the inOutputTime value from AudioDeviceIOProc, the elapsed time
since the last call to AudioDeviceIOProc, and the amount of audio data
in the shared ring buffer. The AudioDeviceIOProc callback consumes
data from the buffer and saves the inOutputTime value and the current
processor clock value.
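To make item 2 concrete, here is a rough sketch of what I have in mind
(single output stream, interleaved floats, no locking, wraparound edge
cases and the elapsed-time refinement left out; SharedState, MyIOProc
and the stand-in AudioOutput struct are just placeholder names of mine):

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>
#include <stdint.h>

struct AudioOutput {                 // minimal stand-in for my existing class
    void Play(void const* const audioBuffer, const int numAudioFrames,
              uint64_t* playTime);
};

struct SharedState {
    float*  ring;                    // interleaved float frames
    UInt32  ringFrames;              // capacity in frames
    UInt32  channels;
    Float64 sampleRate;
    volatile UInt32 readPos;         // advanced by the IOProc
    volatile UInt32 writePos;        // advanced by Play()
    volatile UInt32 lastIOFrames;    // frames requested in the last callback
    volatile UInt64 lastOutputHostTime;  // inOutputTime->mHostTime
};

static SharedState gState;

// IOProc side: remember the timestamp, then drain the ring buffer.
static OSStatus MyIOProc(AudioDeviceID inDevice,
                         const AudioTimeStamp* inNow,
                         const AudioBufferList* inInputData,
                         const AudioTimeStamp* inInputTime,
                         AudioBufferList* outOutputData,
                         const AudioTimeStamp* inOutputTime,
                         void* inClientData)
{
    SharedState* s = (SharedState*)inClientData;
    AudioBuffer* out = &outOutputData->mBuffers[0];
    UInt32 want  = out->mDataByteSize / (sizeof(float) * s->channels);
    UInt32 avail = (s->writePos + s->ringFrames - s->readPos) % s->ringFrames;

    s->lastOutputHostTime = inOutputTime->mHostTime;
    s->lastIOFrames = want;

    float* dst = (float*)out->mData;
    for (UInt32 i = 0; i < want; ++i) {
        UInt32 src = (s->readPos + i) % s->ringFrames;
        for (UInt32 c = 0; c < s->channels; ++c)
            dst[i * s->channels + c] =
                (i < avail) ? s->ring[src * s->channels + c]
                            : 0.0f;              // underrun -> silence
    }
    s->readPos = (s->readPos + (want < avail ? want : avail)) % s->ringFrames;
    return kAudioHardwareNoError;
}

// Play() side: copy into the ring buffer, then estimate when the first
// new frame reaches the device, in host-clock (mach_absolute_time) units.
void AudioOutput::Play(void const* const audioBuffer,
                       const int numAudioFrames, uint64_t* playTime)
{
    SharedState* s = &gState;
    const float* src = (const float*)audioBuffer;
    UInt32 backlog = (s->writePos + s->ringFrames - s->readPos) % s->ringFrames;

    for (int i = 0; i < numAudioFrames; ++i) {
        UInt32 dst = (s->writePos + i) % s->ringFrames;
        for (UInt32 c = 0; c < s->channels; ++c)
            s->ring[dst * s->channels + c] = src[i * s->channels + c];
    }
    s->writePos = (s->writePos + numAudioFrames) % s->ringFrames;

    // The last IOProc buffer starts playing at lastOutputHostTime; the
    // backlog queued behind it plays after that, then our new frames.
    UInt64 delayNanos =
        (UInt64)(1e9 * (s->lastIOFrames + backlog) / s->sampleRate);
    *playTime = s->lastOutputHostTime + AudioConvertNanosToHostTime(delayNanos);
}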
Some questions/concerns:
1. My application is processor intensive and time critical. Will
setting the buffer size so small create a high load on the processor or
increase the risk of audio dropouts?
2. To me this seems like a lot of processing to do something that
should be really easy and is probably directly supported by the
hardware. Am I missing something? Is there an easier way to do this?
I also have some specific questions on the documentation in
http://developer.apple.com/audio/pdf/coreaudio.pdf:
Page 15. " the timestamp of when the first sample frame of the output
data will be consumed by the driver."
Is there any way to estimate the delay through the driver and audio
output hardware? I want to know when the audio hits the output jacks.
Page 20. "A UInt32 containing the size of the IO buffers in bytes. "
That is the size of each buffer, not the accumulated size of all the
buffers, right? How many buffers are there? Do I care?
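For what it is worth, this is how I was planning to implement step 1 of
my proposal, assuming the frame-count variant of the property is the one
to set (SetIOBufferToTenMsec is my own name):

// Sketch: set the per-callback IO buffer to roughly 10 msec of frames.
#include <CoreAudio/CoreAudio.h>

static void SetIOBufferToTenMsec(AudioDeviceID dev)
{
    Float64 rate = 44100.0;
    UInt32  size = sizeof(rate);
    AudioDeviceGetProperty(dev, 0, false,
                           kAudioDevicePropertyNominalSampleRate,
                           &size, &rate);

    UInt32 frames = (UInt32)(rate * 0.010);          // ~10 msec of frames
    AudioDeviceSetProperty(dev, NULL, 0, false,
                           kAudioDevicePropertyBufferFrameSize,
                           sizeof(frames), &frames);
}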
Page 23. Description of AudioDeviceIOProc
The description does not explain the outOutputData parameter.
I am guessing that the client procedure is supposed to fill only
the first buffer in the list, and that it must fill it completely,
i.e., the amount of data to put in the buffer has been previously
determined and is not up to the client procedure. Have I guessed
correctly?
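To make the guess concrete, this is how I am currently reading the
parameter (FillFirstBuffer is just an illustrative name):

// My reading of outOutputData: an AudioBufferList whose buffers were
// already sized by the HAL before the callback; my guess is that only
// mBuffers[0] needs filling, and that it must be filled completely.
#include <CoreAudio/CoreAudio.h>
#include <string.h>

static void FillFirstBuffer(AudioBufferList* outOutputData)
{
    AudioBuffer* buf = &outOutputData->mBuffers[0];
    UInt32 bytesToProduce = buf->mDataByteSize;      // fixed by the HAL, not by me
    memset(buf->mData, 0, bytesToProduce);           // silence here; real code copies audio
}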
ralph hill