Re: handling audio dropouts with synchronized input and output
- Subject: Re: handling audio dropouts with synchronized input and output
- From: Stéphane Letz <email@hidden>
- Date: Tue, 07 Aug 2012 11:04:44 +0200
>
> Message: 1
> Date: Sun, 05 Aug 2012 15:00:15 -0400
> From: christopher raphael <email@hidden>
> To: email@hidden
> Subject: handling audio dropouts with synchronized input and output
> Message-ID:
> <CAKTokbZO06kckgkcr-rUmp3BuYq1a=email@hidden>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello List, I am working on an application that has full-duplex audio in
> which the times at which events are detected in the audio input affect the
> timing of events in the output. Thus I need to be able to relate sample
> times in the input stream to sample times in the output stream. This
> is not hard to do since the timestamps associated with the input and output
> callbacks give the actual sample times, so the difference in these two
> timestamps on the first input and output callbacks gives a measurement of
> the skew between these two streams.
>
> However, I don't understand how to deal with the case when I get audio
> dropouts and the relationship between the input and output streams changes.
> I have done everything I can to minimize these dropouts --- my input
> callback just copies the audio data for processing in another thread, and
> takes almost no time.
Why that?
> My output callback needs to do some processing to
> generate the samples, and this takes about 1ms.
Why are you using two different callbacks for input and output? What exact part of the CoreAudio API are you using?
(Duplex applications can perfectly well be managed with a single callback that receives the input buffers and has to produce the output buffers; this is usually simpler to develop.)
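The shape of such a single duplex callback can be sketched in portable C (hypothetical signature for illustration; with the AUHAL the same structure lives inside one render callback that pulls its input via AudioUnitRender). Because one entry point sees both buffers, input and output stay aligned by construction:

```c
/* One callback receives the input buffer and must fill the output
 * buffer for the same cycle. Trivial processing shown: pass the
 * input through at half gain. */
static void duplex_callback(const float *in, float *out,
                            unsigned nframes, void *user) {
    (void)user;  /* application state would be threaded through here */
    for (unsigned i = 0; i < nframes; i++)
        out[i] = 0.5f * in[i];
}
```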
> The time the callback
> takes is very consistent from callback to callback. For context, my
> callbacks have 1024 samples at 48 kHz, so they occur about every 21 ms.
> The processing by other threads is significant, but still my program only
> consumes about 20% of the processor resources on average. While this is
> not the worst case, I assume that my callbacks would still get precedence,
> so dropouts would be rare.
CoreAudio callbacks are called on a "real-time" (time-constraint, in OS X terminology) thread. This thread (if programmed correctly, with no calls to possibly blocking functions) will obviously preempt any non-real-time thread.
> I get a dropout around once every 10 or 15
> minutes running with the built-in audio hardware, which is too frequent.
You should not see this kind of dropout, especially if using a buffer of 1024 frames.
>
>
> 1) Is there anything I can do to eliminate or reduce the audio dropouts?
- try to use a single duplex callback
- analyse your real-time callback code to check that it does not contain any possibly blocking functions (taking locks, file access, etc.)
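The standard way to hand data from a real-time callback to a worker thread without taking locks is a single-producer/single-consumer ring buffer. A minimal sketch (illustrative, not from the original post; capacity kept tiny and a power of two so index wrap-around is a cheap mask):

```c
#include <stdatomic.h>

#define RB_CAP 8  /* must be a power of two */

/* SPSC ring buffer: the real-time callback pushes, a worker thread
 * pops; neither side ever blocks. */
typedef struct {
    float data[RB_CAP];
    _Atomic unsigned head;  /* written only by the producer */
    _Atomic unsigned tail;  /* written only by the consumer */
} spsc_rb;

static int rb_push(spsc_rb *rb, float v) {
    unsigned h = atomic_load_explicit(&rb->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&rb->tail, memory_order_acquire);
    if (h - t == RB_CAP) return 0;          /* full: drop, never block */
    rb->data[h & (RB_CAP - 1)] = v;
    atomic_store_explicit(&rb->head, h + 1, memory_order_release);
    return 1;
}

static int rb_pop(spsc_rb *rb, float *v) {
    unsigned t = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&rb->head, memory_order_acquire);
    if (h == t) return 0;                   /* empty */
    *v = rb->data[t & (RB_CAP - 1)];
    atomic_store_explicit(&rb->tail, t + 1, memory_order_release);
    return 1;
}
```

Note that when the buffer is full the push fails instead of waiting; in a real-time context dropping data is always preferable to blocking.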
>
> 2) When I get a dropout, what I want to happen is the following: It is
> fine if the outgoing samples never make their way to the speaker and some
> noise or other random audio gets spliced in instead, as long as I preserve
> the relationship between incoming and outgoing samples that I started with.
> This seems to be the default behavior with ASIO and may result from the
> double buffering paradigm. How do I get this behavior? Any help with this
> would be greatly appreciated.
>
> Chris
>
My feeling is that you should be able to solve the dropout issue in the first place.
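That said, if a dropout does slip through, the timestamps make it detectable: each callback's mSampleTime should advance by exactly the buffer size, so any gap tells you how many samples were lost and lets you correct the input/output offset by that amount. A sketch with illustrative names (not CoreAudio API):

```c
/* Track sample-time continuity across callbacks. */
typedef struct {
    double expected_next;  /* expected mSampleTime of the next callback */
    int started;
} continuity;

/* Returns the number of samples skipped since the previous callback
 * (0.0 when the stream is continuous). */
static double check_continuity(continuity *c, double sample_time,
                               unsigned nframes) {
    double gap = 0.0;
    if (c->started)
        gap = sample_time - c->expected_next;
    c->started = 1;
    c->expected_next = sample_time + nframes;
    return gap;
}
```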
Regards
Stéphane Letz
Coreaudio-api mailing list (email@hidden)