Re: handling audio dropouts with synchronized input and output
- Subject: Re: handling audio dropouts with synchronized input and output
- From: Stéphane Letz <email@hidden>
- Date: Wed, 08 Aug 2012 09:14:27 +0200
>
> Date: Tue, 07 Aug 2012 22:25:47 -0400
> From: christopher raphael <email@hidden>
> To: Brian Willoughby <email@hidden>
> Cc: coreaudio-api API <email@hidden>
> Subject: Re: handling audio dropouts with synchronized input and output
>
> I seem to have started this thread, though I'm having trouble following it
> at this point. I have an implementation of bidirectional audio that I am
> not thrilled with, since it has occasional dropouts and uses two callbacks.
> Furthermore, I didn't know I needed to compensate for differences in the
> sampling rates of input and output, so there is another weakness.
>
> Ideally, I would like to have a single callback and get synchronized audio
> streams that don't drift apart. I have found that ASIO does this pretty
> reliably in Windows, and am just trying to get equivalent behavior on the
> Mac. This is what I am used to. Stephane was nice enough to send a couple
> of examples that might do this. I looked through the Faust project example
> tonight and found a very large amount of code devoted to getting
> single-callback duplex audio. I wonder if this really needs to be so
> complicated ...
I agree the code is a bit complicated... the reasons for that are:
- it was derived from the JACK CoreAudio driver, which is indeed more complicated...
- it dynamically aggregates separate input and output devices. This is not mandatory: you can just create a duplex device with the Audio MIDI Setup tool, access that duplex device with the regular code, and remove the aggregation code from the example.
>
> My most basic question is whether or not it is worth doing this with a
> single callback, since this seems to be a lot of trouble. I can do the
> resampling myself, though I guess I would also be responsible for computing
> the difference in sample rates to make this possible. How far off could
> the two sampling rates be? If it amounts to less than a millisecond drift
> per minute, I could just ignore this. If I just go ahead with the two
> callbacks is there some danger lurking around that may be a problem for me?
> For instance, I haven't been able to explain why I still get occasional
> audio glitches, and wonder if the two callbacks could be part of the
> problem.
>
> Or perhaps getting the single-callback duplex audio is simple and I just
> haven't found the right coding example to follow?
>
> Chris
>
So from the Faust example, a simpler version would keep only:
TCoreAudioRenderer::Render
TCoreAudioRenderer::GetDefaultDevice
TCoreAudioRenderer::OpenDefault
TCoreAudioRenderer::Close
TCoreAudioRenderer::Start
TCoreAudioRenderer::Stop
So a lot of code can be removed...
Note also that this code sets up the AUHAL to deliver a simple flat array of non-interleaved streams for both input and output (in TCoreAudioRenderer::OpenDefault); you may need to change this logic if your application prefers interleaved streams, for instance.
HTH,
Stéphane Letz