Re: Which channels are used for what?
- Subject: Re: Which channels are used for what?
- From: William Stewart <email@hidden>
- Date: Tue, 23 May 2006 18:01:36 -0700
On 23/05/2006, at 4:39 PM, Jeff Moore wrote:
On May 23, 2006, at 4:21 PM, Steve Checkoway wrote:
William Stewart wrote:
Yes, both are variable-sized structs.
Good. I'm handling this correctly then.
BTW, the output unit handles all of this stuff for you.
I understand. That's what you said in the previous e-mail. You
also said that the output unit does not provide as much
information as the HAL. I really have three requirements: I need to
play audio with as little latency as possible; I need to survive
device format changes the way iTunes does when you change the
device sample rate in Audio MIDI Setup (AMS); and I need to know
when "now" is, as accurately as possible, multiple times per second
and at each IOProc callback. Using the audio unit, it seems like I
can easily get the first two (well, I assume the overhead of the
output unit doesn't introduce much latency). Talking straight to
the HAL, I have working code that handles the first and third (at
least on Tiger; on earlier systems there's nothing I can do to work
around the timing bug I sent so much mail to the list about).
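For the third requirement, the HAL call in question is
AudioDeviceGetCurrentTime(). A minimal sketch, assuming the
AudioDeviceID has already been looked up elsewhere (e.g. the default
output device):

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

/* Ask the HAL what "now" is on a given device. */
static void PrintDeviceNow(AudioDeviceID device)
{
    AudioTimeStamp now = { 0 };
    OSStatus err = AudioDeviceGetCurrentTime(device, &now);
    if (err == noErr && (now.mFlags & kAudioTimeStampSampleTimeValid))
        printf("device now: %.0f samples\n", now.mSampleTime);
}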
Is there any documentation that answers the questions I posed? I'm
willing to do the work to handle this correctly. If no such
documentation exists and I had any other audio device to test, I
could poke and prod it myself; but with just the built-in device in
my G5 (I suppose I could try my G4 as well, though I suspect it
acts roughly the same) there's not a lot to go on: I have two
channels and one stream, and the mapping is clear.
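For reference, the sort of probing meant here looks roughly like the
following; a hedged sketch using the Tiger-era HAL property calls.
The stream configuration comes back as an AudioBufferList (one of the
variable-sized structs from above), so it has to be sized at runtime
before it can be fetched:

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>
#include <stdlib.h>

/* Print each output stream's channel count for a device. */
static void PrintStreamConfiguration(AudioDeviceID device)
{
    UInt32 size = 0;
    if (AudioDeviceGetPropertyInfo(device, 0, false,
            kAudioDevicePropertyStreamConfiguration, &size, NULL) != noErr)
        return;

    AudioBufferList *abl = (AudioBufferList *)malloc(size);
    if (AudioDeviceGetProperty(device, 0, false,
            kAudioDevicePropertyStreamConfiguration, &size, abl) == noErr) {
        for (UInt32 i = 0; i < abl->mNumberBuffers; ++i)
            printf("stream %lu: %lu channels\n",
                   (unsigned long)i,
                   (unsigned long)abl->mBuffers[i].mNumberChannels);
    }
    free(abl);
}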
IMHO, you can satisfy all your requirements with AUHAL provided you
don't mind calling AudioDeviceGetCurrentTime() in your render
callback to get the current device time. Plus, if you go with the
output AU, you won't have to worry about being resilient against
format changes or doing any of the other drudge work it takes to be
a proper HAL client.
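Roughly, that combination might look like this. Treat it as a sketch
only: error checking is elided, it uses the pre-10.6 Component Manager
calls that were current at the time, and the gDevice global is just an
illustration so the render callback has something to query:

#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

static AudioDeviceID gDevice = kAudioDeviceUnknown;

/* Render callback: note the current device time, then produce audio. */
static OSStatus MyRender(void *inRefCon,
                         AudioUnitRenderActionFlags *ioActionFlags,
                         const AudioTimeStamp *inTimeStamp,
                         UInt32 inBusNumber, UInt32 inNumberFrames,
                         AudioBufferList *ioData)
{
    AudioTimeStamp now;
    AudioDeviceGetCurrentTime(gDevice, &now);
    /* ... fill ioData with inNumberFrames frames of audio ... */
    return noErr;
}

static void StartAUHAL(void)
{
    ComponentDescription desc = { kAudioUnitType_Output,
                                  kAudioUnitSubType_HALOutput,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit au;
    OpenAComponent(FindNextComponent(NULL, &desc), &au);

    /* Point AUHAL at a device (here, the default output device) and
       remember the ID so the callback can ask it for the time. */
    UInt32 size = sizeof(gDevice);
    AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                             &size, &gDevice);
    AudioUnitSetProperty(au, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0,
                         &gDevice, sizeof(gDevice));

    AURenderCallbackStruct cb = { MyRender, NULL };
    AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(au);
    AudioOutputUnitStart(au);
}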
FWIW, many apps that have very serious synchronization requirements
use AUHAL just fine. Examples include QuickTime, Final Cut, and the
DVD Player.
Also, from the sound of things, you haven't really had much of a
chance to test with the various audio devices out there and the
wacky stream layouts they have. Using AUHAL will give you some
confidence that your app will do something approaching the right
thing without you having to code up all the edge and corner cases.
In the end, it _will_ save you time when you don't have to track
down the esoteric device one of your users has that is making your
app misbehave.
iTunes handles format changes because it uses AUHAL (actually, the
Default Device version of it), and AUHAL just takes care of this for
them (and has for quite a while now).
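For the curious, the only difference from the AUHAL sketch above is
the component subtype; a hedged sketch:

#include <AudioUnit/AudioUnit.h>

/* Open the default output unit instead of AUHAL proper; it tracks the
   user's default device and absorbs format changes for you, so there
   is no need to set a device at all. */
static AudioUnit OpenDefaultOutputUnit(void)
{
    ComponentDescription desc = { kAudioUnitType_Output,
                                  kAudioUnitSubType_DefaultOutput,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit au = NULL;
    OpenAComponent(FindNextComponent(NULL, &desc), &au);
    return au; /* set a render callback and initialize as above */
}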
There's no additional latency in using AUHAL versus using the HAL directly.
Bill
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________