Re: CAPlayThrough questions
- Subject: Re: CAPlayThrough questions
- From: Eyal Redler <email@hidden>
- Date: Sat, 17 Oct 2009 02:54:00 +0200
Thanks, I'll try this. It raises another two questions:
1. How do I go about creating an aggregate device programmatically?
What kind of object is it? (A sketch of one approach appears after
these questions.)
2. Why does using an aggregate device shorten the delay? In my tests
I was doing playthrough between the input and output of the same
device, and as far as I understand the work being done is still
the same (same buffer sizes, etc.).
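A minimal sketch of programmatic creation, assuming the HAL's
AudioHardwareCreateAggregateDevice call (added to the HAL in a later OS
release than this thread); the sub-device UID below is a placeholder for
a real device's UID:

#include <CoreAudio/CoreAudio.h>

AudioObjectID CreateAggregate(void)
{
    // Each entry in the sub-device list names one device by its UID.
    CFStringRef subUID = CFSTR("PLACEHOLDER-DEVICE-UID");
    const void *subKeys[] = { CFSTR(kAudioSubDeviceUIDKey) };
    const void *subVals[] = { subUID };
    CFDictionaryRef subDevice = CFDictionaryCreate(NULL, subKeys, subVals, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFArrayRef subList = CFArrayCreate(NULL, (const void **)&subDevice, 1,
                                       &kCFTypeArrayCallBacks);

    const void *keys[] = {
        CFSTR(kAudioAggregateDeviceNameKey),
        CFSTR(kAudioAggregateDeviceUIDKey),
        CFSTR(kAudioAggregateDeviceSubDeviceListKey) };
    const void *vals[] = {
        CFSTR("My Aggregate"),
        CFSTR("com.example.myaggregate"),  // must be unique on the system
        subList };
    CFDictionaryRef desc = CFDictionaryCreate(NULL, keys, vals, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    AudioObjectID aggregate = kAudioObjectUnknown;
    AudioHardwareCreateAggregateDevice(desc, &aggregate);

    CFRelease(desc); CFRelease(subList); CFRelease(subDevice);
    // The result behaves like any other AudioDeviceID, so it can be
    // handed to AUHAL via kAudioOutputUnitProperty_CurrentDevice.
    return aggregate;
}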
Thanks
Eyal
On Oct 17, 2009, at 2:08 AM, William Stewart wrote:
AU Lab uses aggregate devices, whereas the playthrough code uses a
varispeed and a buffer in between the two devices.

The advantage of the aggregation is exactly as you have noted: you
get a lower delay because the HAL is managing this for you (but it
uses a simpler mode of resampling to keep the clocks synchronised).
The other current limitation of aggregate devices is that all the
devices you want to use have to be at the same sample rate; the
playthrough technique can deal with mismatched sample rates.
You can test this out yourself by doing the following:
- create an aggregate device in AMS (the Audio MIDI Setup utility)
- in your app, use ONE instance of AUHAL and point it at the
aggregate device

To do playthrough, you connect the output of bus 1 of AUHAL to the
input of bus 0 of AUHAL. Of course, AU Lab has a more complex mix
chain in between, but that's essentially what happens. You can add
audio units between the input and output if you want to do effects
processing, etc.
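A minimal sketch of that single-AUHAL pattern, assuming a same-rate
aggregate so the input and output stream formats match (error handling
omitted):

#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

// Output render callback: pull the just-captured input (element 1)
// straight into the output buffers (element 0).
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit auhal = (AudioUnit)inRefCon;
    return AudioUnitRender(auhal, ioActionFlags, inTimeStamp,
                           1 /* input element */, inNumberFrames, ioData);
}

static AudioUnit MakePlayThroughUnit(AudioDeviceID aggregateDevice)
{
    AudioComponentDescription desc = { kAudioUnitType_Output,
        kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit auhal = NULL;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &auhal);

    // Enable input on element 1 (output on element 0 is on by default).
    UInt32 enable = 1;
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));
    // Point the one AUHAL instance at the aggregate device.
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0,
                         &aggregateDevice, sizeof(aggregateDevice));
    // Feed the output (bus 0) from the callback above.
    AURenderCallbackStruct cb = { RenderCallback, auhal };
    AudioUnitSetProperty(auhal, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(auhal);
    return auhal; // then call AudioOutputUnitStart(auhal)
}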
Bill
On Oct 16, 2009, at 4:41 PM, Eyal Redler wrote:
Thanks for the answer, Bill. I have another question.

I've adapted CAPlayThrough to integrate with my code (mostly just
to make sure I understand all the code) and things seem to be
working just like the demo application. I've noticed that the
playback delay is quite significant (in both my app and the
CAPlayThrough demo). I know that some playback delay is unavoidable,
but I know that other applications, AU Lab for example, achieve a
much shorter playback delay, and I wonder how they do that.

From what I understand, the playback delay corresponds to the input/
output buffer size, but when I look at the settings I get the same
buffer size (512 frames) with AU Lab as I get with CAPlayThrough.
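For reference, a minimal sketch of querying (and setting) a device's I/O
buffer size through the HAL property API, with myDevice standing in for
a real AudioDeviceID:

#include <CoreAudio/CoreAudio.h>

// Query the device's current I/O buffer size in frames.
UInt32 GetBufferFrameSize(AudioDeviceID myDevice)
{
    AudioObjectPropertyAddress addr = { kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    UInt32 frames = 0, size = sizeof(frames);
    AudioObjectGetPropertyData(myDevice, &addr, 0, NULL, &size, &frames);
    return frames; // e.g. 512
}

// Request a smaller buffer to trade CPU load for lower latency.
void SetBufferFrameSize(AudioDeviceID myDevice, UInt32 frames)
{
    AudioObjectPropertyAddress addr = { kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    AudioObjectSetPropertyData(myDevice, &addr, 0, NULL,
                               sizeof(frames), &frames);
}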
Thanks,
Eyal
On Oct 8, 2009, at 7:54 PM, William Stewart wrote:
CAPlayThrough is code aimed at doing I/O between 2 audio devices
that are not synchronised (or they may be, but the worst-case
assumption is that they aren't).

The Varispeed is used as a way to adjust the consumption of the
input device's audio by the output device.
So, the Varispeed in CAPlayThrough does two things:
(1) If there is a difference between the sample rates of the input
and output devices, the varispeed AU does a resample. This setting
is made once and stays constant through the lifetime of the I/O
operation (presuming the devices involved don't change).
(2) As the devices involved may NOT be synchronised, a further
adjustment is made over time by varying the rate of playback
between the two devices. This rate adjustment is made by looking
at the rate scalar in the time stamps of the two devices. The rate
scalar describes the measured difference between the idealised
sample rate of a given device (say 44.1 kHz) and the measured
sample rate of the device as it is running, which will also vary.
This adjustment is made by tweaking the rate parameter of the
varispeed.
If you don't do the corrections to the rate parameter, then the
two devices can drift apart over time.
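A minimal sketch of that correction, assuming inTime and outTime are the
most recent AudioTimeStamps seen by the input and output IOProcs (this
shows the idea, not the exact CAPlayThrough arithmetic):

#include <AudioUnit/AudioUnit.h>

// mRateScalar is (measured rate / nominal rate) for each device, so the
// ratio below nudges the varispeed to consume input at the pace the
// output device's actual clock demands.
void AdjustVarispeedRate(AudioUnit varispeed,
                         const AudioTimeStamp *inTime,
                         const AudioTimeStamp *outTime)
{
    Float64 rate = inTime->mRateScalar / outTime->mRateScalar;
    AudioUnitSetParameter(varispeed, kVarispeedParam_PlaybackRate,
                          kAudioUnitScope_Global, 0,
                          (AudioUnitParameterValue)rate, 0);
}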
As far as processing goes, I would add it on the output side; you
can put effect units, etc., there to do whatever you would like to
the audio as it is played out. The effect unit would be attached to
the output unit, and would ask for as much input data as the output
unit does.
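A minimal sketch of attaching an effect in front of the output unit in
an AUGraph, assuming varispeedNode and outputNode are already in the
graph (the reverb here is just an illustrative choice):

#include <AudioToolbox/AudioToolbox.h>

// Insert an effect between the varispeed and the output unit:
// varispeed -> effect -> output. When the output unit pulls for data,
// the effect pulls the same amount from upstream.
void InsertEffect(AUGraph graph, AUNode varispeedNode, AUNode outputNode)
{
    AudioComponentDescription fx = { kAudioUnitType_Effect,
        kAudioUnitSubType_MatrixReverb, kAudioUnitManufacturer_Apple, 0, 0 };
    AUNode fxNode;
    AUGraphAddNode(graph, &fx, &fxNode);
    AUGraphConnectNodeInput(graph, varispeedNode, 0, fxNode, 0);
    AUGraphConnectNodeInput(graph, fxNode, 0, outputNode, 0);
}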
HTH
Bill
On Oct 8, 2009, at 5:47 AM, Eyal Redler wrote:
Hi,
I want to write an Audio Unit hosting application that will have
a graph like this:

(User Selectable Input) -> (My Custom AU, not user selectable) ->
(Possibly other user selectable AUs) -> (User Selectable Output)

On the surface CAPlayThrough demonstrates the abilities I'm
looking for, but it contains a few things that make it more
complex than what I would have intuitively thought. My initial
thought was that I could just build the graph like the graph
above (input unit -> my unit -> some additional unit -> output
unit) and let the graph run, but CAPlayThrough does a lot more,
and I'm not sure if the additional complexities are needed to make
this work or just to demonstrate other concepts. Specifically, I'm
wondering about the following:
1. Why is the graph split between the input and output? Why not do
it like this: input -> varispeed -> output?
2. Why use a ring buffer and not a "normal" buffer?
3. Why use the varispeed audio unit? Don't the AUs know how to
convert the sample rate?
TIA,
Eyal Redler
------------------------------------------------------------------------------------------------
"If Uri Geller bends spoons with divine powers, then he's doing
it the hard way."
--James Randi
www.eyalredler.com
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden