Re: AudioQueue getting delayed by other audio output of the system
- Subject: Re: AudioQueue getting delayed by other audio output of the system
- From: Markus Hanauska <email@hidden>
- Date: Thu, 20 May 2010 18:04:47 +0200
A little update:
If I enqueue the first few buffers without timestamps and only enqueue buffers returned for re-use with timestamps, the issue remains. If no other audio is currently playing, the first 16 buffers play, followed by a 100 ms gap (expected), and then the queue continues playing to the end of the file. If other sound is playing, the first 16 buffers play, but then there is a gap of several seconds.
I also added a property listener, which confirmed that the queue is internally in the running state directly after the first buffers have been scheduled; so it's not that the queue delays switching to the running state, it's just not playing.
I also tried the function that converts the requested start time to the nearest possible start time (AudioQueueDeviceGetNearestStartTime) and used the result as the timestamp, but that has absolutely no effect on the issue. The nearest possible start time is also never several seconds away from my requested one; according to this function it should be possible to start the buffer at my requested start time, and still it won't start.
On Thursday, 2010-05-20, at 15:56, Markus Hanauska wrote:
>
> We have the following problem with Audio Queue Services:
>
> When you enqueue a buffer into an already playing AudioQueue, you can either do so with a timestamp indicating when the buffer is supposed to be played, or without any timestamp. If we enqueue the buffers without timestamps, the queue starts playing them almost immediately, regardless of whether the system currently has any other audio output (e.g. from iTunes). If we enqueue them with timestamps, the system starts playing them almost exactly when the timestamps say (the timestamps are always a couple of milliseconds in the future, in my sample code up to 100), but only if there is no other audio output on the system. If iTunes is playing a song, it takes several seconds before the AudioQueue starts playing the buffer; much later than the timestamp requested.
>
> If our application were just about playing audio, I would simply not use any timestamps. But the audio must be synchronized to a video, and by using timestamps it is pretty easy to achieve perfect synchronization. However, as mentioned above, this only works if no other application is producing audio output at the moment we start the AudioQueue; otherwise the audio is way off.
>
> At the end of this message is a larger source code sample. It is a very simplified piece of code, derived from our production code, that demonstrates the problem. Let me explain the code in a few words; it is actually pretty straightforward.
>
> 1. The code itself does not spawn any threads, all direct calls from this code are performed on the main thread.
>
> 2. The main function expects a single argument: an absolute or relative path to a file with raw audio data.
>
> 3. The audio data is expected to be 48 kHz / 16 Bit / Signed / Little Endian / Stereo. A pretty common audio format.
>
> 4. The main function opens the file, initializes some global variables, creates and starts an AudioQueue, fills it with initial data, and runs in an endless loop until all of the file has been played.
>
> 5. The AudioQueue is created with no RunLoop reference. According to the documentation this is allowed and means the AudioQueue will perform callbacks on one of its internal threads. We tried creating our own thread that runs a RunLoop and directing the callbacks to this RunLoop, but that makes no difference regarding the described issue.
>
> 6. The callback function just takes the buffer, tries to fill it with as much data as possible, sets the size of the buffer, and re-enqueues it into the AudioQueue, just like Apple's code samples show.
>
> 7. The callback can enqueue the buffer without a timestamp, which according to the documentation appends the buffer to the end of the queue and plays it as soon as all buffers before it have been played (if the queue is empty, the buffer is played ASAP). Alternatively, it can take the current host time, advance it by 100 ms, and use this as the timestamp for the first buffer; afterwards it always calculates the play time of the last scheduled buffer in native host time and uses that as the timestamp for the next queued buffer. The fact that this usually works well and that there are no skips in playback shows that we calculate the timestamps correctly (if we make them too short or too long, the file plays a bit too fast or too slow, which proves that the system really does take those timestamps into account).
>
> 8. By setting USE_TIMESTAMPS to either 0 or 1, one can control whether timestamps are used when enqueuing buffers.
>
> 9. When the read call returns zero or less, we treat this as end-of-file (read errors are treated as end-of-file, too) and set the appropriate flag, which causes the main loop to terminate; the AudioQueue is then disposed, some more clean-up is performed, and the whole app terminates.
>
> Here is the code:
>
> // File: AudioPlayerTest.m
>
> #import <fcntl.h>   // open
> #import <stdio.h>   // fprintf
> #import <unistd.h>  // read, close, sleep
> #import <mach/mach_time.h>
> #import <Foundation/Foundation.h>
> #import <AudioToolbox/AudioToolbox.h>
>
> #define USE_TIMESTAMPS 0
>
> // Buffers shall hold up to 31.25 ms 48 kHz 16 Bit Stereo
> #define NEW_BUFFER_CAPACITY ((48000 * 2 * 2) / 32)
>
> // Buffers for up to 500 ms audio
> #define NUM_OF_BUFFERS 16
>
> static int inputFile;
> static volatile BOOL fileIsEOF; // set on the queue's callback thread, polled on the main thread
> static AudioQueueRef theQueue;
> static AudioQueueBufferRef theBuffers[NUM_OF_BUFFERS];
>
> static uint64_t nextTimestamp;
> static double hostTimeToNSFactor;
> static double bytesToHostTimeFactor;
>
> static void initFactors ()
> {
> mach_timebase_info_data_t timeBase = { 0 };
> mach_timebase_info(&timeBase);
>
> hostTimeToNSFactor = ((double)timeBase.numer / timeBase.denom);
> }
>
>
> static void audioQueueCallback (
> void *inUserData,
> AudioQueueRef inAQ,
> AudioQueueBufferRef inBuffer
> ) {
> int bytesRead;
> AudioTimeStamp tstp = { 0 }; // zero all fields; only mHostTime/mFlags are set below
>
> if (fileIsEOF) return;
>
> bytesRead = read(
> inputFile,
> inBuffer->mAudioData,
> inBuffer->mAudioDataBytesCapacity
> );
>
> if (bytesRead <= 0) {
> fileIsEOF = YES;
> return;
> }
>
> inBuffer->mAudioDataByteSize = bytesRead;
>
> if (nextTimestamp == 0) {
> // Current system time
> nextTimestamp = mach_absolute_time();
> // Advance by 100 ms
> nextTimestamp += (100 * 1000 * 1000) / hostTimeToNSFactor;
> }
>
> tstp.mHostTime = nextTimestamp;
> tstp.mFlags = kAudioTimeStampHostTimeValid;
>
> AudioQueueEnqueueBufferWithParameters(
> inAQ,
> inBuffer,
> 0,
> NULL,
> 0,
> 0,
> 0,
> NULL,
> #if USE_TIMESTAMPS
> &tstp,
> #else
> NULL,
> #endif
> NULL
> );
> nextTimestamp += (uint64_t)(bytesRead * bytesToHostTimeFactor);
> }
>
>
>
> int main (
> int argc,
> const char * argv[]
> ) {
> if (argc < 2) {
> fprintf(stderr, "No input file given\n");
> return 1;
> }
>
> inputFile = open(argv[1], O_RDONLY);
> if (inputFile < 0) {
> fprintf(stderr, "Failed to open file %s\n", argv[1]);
> return 1;
> }
>
> // Init mach_time_t conversion
> initFactors();
>
> AudioStreamBasicDescription asDesc = { 0 };
>
> asDesc.mSampleRate = 48000;
> asDesc.mFormatID = kAudioFormatLinearPCM;
> asDesc.mFormatFlags = kAudioFormatFlagIsSignedInteger;
> asDesc.mBytesPerPacket = 4;
> asDesc.mFramesPerPacket = 1;
> asDesc.mBytesPerFrame = 4;
> asDesc.mChannelsPerFrame = 2;
> asDesc.mBitsPerChannel = 16;
>
> AudioQueueNewOutput(
> &asDesc,
> &audioQueueCallback,
> NULL,
> NULL,
> NULL,
> 0,
> &theQueue
> );
>
> // x bytes equal how many host time ticks?
> bytesToHostTimeFactor = (1000 * 1000 * 1000) / hostTimeToNSFactor;
> bytesToHostTimeFactor /= asDesc.mBytesPerFrame * asDesc.mSampleRate;
>
> // Initial timestamp is always 0
> nextTimestamp = 0;
>
>
> // Allocate NUM_OF_BUFFERS empty buffers
> int bufCount;
> for (bufCount = 0; bufCount < NUM_OF_BUFFERS; bufCount++) {
> AudioQueueAllocateBuffer(
> theQueue,
> NEW_BUFFER_CAPACITY,
> &theBuffers[bufCount]
> );
> }
>
> fileIsEOF = NO;
>
> AudioQueueStart(theQueue, NULL);
>
> for (bufCount = 0; bufCount < NUM_OF_BUFFERS; bufCount++) {
> audioQueueCallback(NULL, theQueue, theBuffers[bufCount]);
> }
>
> while (!fileIsEOF) sleep(1);
>
> AudioQueueDispose(theQueue, true);
> close(inputFile);
> return 0;
> }
--
Best Regards,
Markus Hanauska
_______________________________________________
Coreaudio-api mailing list (email@hidden)