Re: handling audio dropouts with synchronized input and output
- Subject: Re: handling audio dropouts with synchronized input and output
- From: christopher raphael <email@hidden>
- Date: Tue, 07 Aug 2012 22:25:47 -0400
I seem to have started this thread, though I'm having trouble following it at this point. I have an implementation of bidirectional audio that I am not thrilled with, since it has occasional dropouts and uses two callbacks. Furthermore, I didn't know I needed to compensate for the difference in sampling rate between input and output, so that is another weakness.
Ideally, I would like to have a single callback and get synchronized audio streams that don't drift apart. I have found that ASIO does this pretty reliably on Windows, and am just trying to get equivalent behavior on the Mac. This is what I am used to. Stephane was nice enough to send a couple of examples that might do this. I looked through the Faust project example tonight and found a very large amount of code devoted to getting single-callback duplex audio. I wonder whether this really needs to be so complicated ...
My most basic question is whether it is worth doing this with a single callback, since it seems to be a lot of trouble. I can do the resampling myself, though I guess I would also be responsible for computing the difference in sample rates to make this possible. How far apart could the two sampling rates be? If it amounts to less than a millisecond of drift per minute, I could just ignore it. If I go ahead with the two callbacks, is there some danger lurking that may be a problem for me? For instance, I haven't been able to explain why I still get occasional audio glitches, and I wonder if the two callbacks could be part of the problem.
Or perhaps getting single-callback duplex audio is simple and I just haven't found the right coding example to follow?
Chris
On Tue, Aug 7, 2012 at 8:34 PM, Brian Willoughby
<email@hidden> wrote:
On Aug 7, 2012, at 16:05, Brian Willoughby wrote:
On Aug 7, 2012, at 12:51, Paul Davis wrote:
On Tue, Aug 7, 2012 at 3:44 PM, Jeff Smith <email@hidden> wrote:
>Why are you using 2 different callbacks for input and output?
>What exact part of the CoreAudio API are you using?
>
>(duplex applications can perfectly well be managed with a single
>callback that receives input buffers and has to produce output
>buffers... this is usually simpler to develop)
In order to do that, doesn't it have to be a single device (PPC) or an aggregate device (Intel)?
There are plenty of duplex devices on Intel OS X, just not the built-in audio device. Almost all pro and prosumer audio interfaces ship with a duplex driver. It's still a mystery to me why Apple refused to provide this for the built-in HDA interface.
Apple has not "refused" to provide this; it's a matter of the nature of USB Audio.
CoreAudio requires that input and output share the same clock reference in order to be a duplex device. The USB Audio Device specification does not seem to allow for this, and thus all USB Audio devices are seen as pairs of input-only and output-only devices. Shipping a driver requires the user to install something first, whereas class-compliant devices need no driver.
Basically, Apple's entire audio design presents the hardware to the rest of the system as it actually operates, without the overhead of any translation to some ideal or convenient standard. If you want to treat a USB audio device as a duplex device, then your software needs to arrange for the CoreAudio (or other) sample rate conversion (AudioConverter) to be inserted at the appropriate point (input or output) so that both can be treated as if they had the same clock source even though they do not. The advantage of Apple's approach is that you do not get the distortion of SRC unless you ask for it, and you can control whether it happens on output or input. An installed driver would not have this flexibility.
I just realized that I may have overstated the situation. I'm not absolutely certain that USB Audio never allows for a shared clock between input and output, I just recall reading that it is either incredibly uncommon, or perhaps impossible due to the nature of the Descriptors. But I must admit that I have not specifically researched the limits.
Another thing I should have mentioned is that it is still possible to handle input and output from the same callback, provided that the AUGraph includes an AudioConverter AudioUnit to match the sample rates. When recording, I either shut down the output or place the AudioConverter on the output, so that the recorded input is not distorted by SRC. For playback, though, it might be better to favor bit-transparent output.
Brian Willoughby
Sound Consulting
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden
--
Prof. Christopher Raphael
School of Informatics and Computing
Indiana Univ.
812-856-1849