Re: IOAudioFamily : driver structure
- Subject: Re: IOAudioFamily : driver structure
- From: Phil Montoya <email@hidden>
- Date: Mon, 20 Sep 2004 14:24:13 -0700
The first rendition of our Core Audio driver achieved exactly the
results we intended, and it worked as expected with Final Cut Pro. Now
that we had a "Core Audio" driver, our assumption was that other apps
that used Core Audio would work with our driver. To our dismay we
discovered this isn't necessarily so. We then took the extra step of
fixing this so that our driver could be as compatible as possible with
as many applications as possible. To say we had an unusable driver is
inaccurate; it actually worked very well with the application it was
written for. It just didn't work well with everything.
The moral of the story is that you can have a perfectly working Core
Audio driver that doesn't work with all Core Audio applications. And
if the assumption is a native full-duplex engine, then your hardware
had better cooperate or this isn't possible.
-Phil
On Sep 20, 2004, at 12:13 PM, Jeff Moore wrote:
To clarify a bit, the design flexibility is not a myth. It is a
feature. We encourage hardware developers to expose their devices in
the most natural way they can. But in doing so, you have to understand
what you are doing and how your design is going to be represented to
applications. In Phil's case, he made a few key design mistakes at the
beginning due to not understanding this relationship.
Basically, it all boils down to timing. Each IOAudioEngine represents
a unique time source. The IOAudioStreams attached to the IOAudioEngine
all have to be synchronized such that the time stamps the
IOAudioEngine drops describe all the streams.
From that, the HAL, whose definition of an AudioDevice is (not
coincidentally) a set of streams and a single time source that governs
them, will create an AudioDevice for each IOAudioEngine it encounters
and present all the streams together to apps in their IOProcs.
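To make that grouping concrete, here is a minimal user-space sketch
(using the 2004-era CoreAudio C API, error handling omitted; this is
illustrative, not code from this thread) that lists each AudioDevice
the HAL has built and how many input and output streams it carries.
Each IOAudioEngine a driver publishes shows up as one AudioDeviceID,
and its IOAudioStreams show up as that device's streams.

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    UInt32 size = 0;
    AudioHardwareGetPropertyInfo(kAudioHardwarePropertyDevices, &size, NULL);
    UInt32 deviceCount = size / sizeof(AudioDeviceID);
    AudioDeviceID *devices = (AudioDeviceID *)malloc(size);
    AudioHardwareGetProperty(kAudioHardwarePropertyDevices, &size, devices);

    for (UInt32 i = 0; i < deviceCount; i++) {
        UInt32 streamListSize = 0;
        // Input-side streams of this device
        AudioDeviceGetPropertyInfo(devices[i], 0, true,
                                   kAudioDevicePropertyStreams,
                                   &streamListSize, NULL);
        UInt32 inputStreams = streamListSize / sizeof(AudioStreamID);
        // Output-side streams of this device
        AudioDeviceGetPropertyInfo(devices[i], 0, false,
                                   kAudioDevicePropertyStreams,
                                   &streamListSize, NULL);
        UInt32 outputStreams = streamListSize / sizeof(AudioStreamID);
        printf("device %lu: %lu input stream(s), %lu output stream(s)\n",
               (unsigned long)devices[i], (unsigned long)inputStreams,
               (unsigned long)outputStreams);
    }
    free(devices);
    return 0;
}

A full-duplex device built from a single IOAudioEngine reports both
input and output streams here; a two-engine driver shows up as two
separate AudioDevices instead.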
It should also be said that most applications don't know what to do
with multiple AudioDevices in terms of handling synchronized
simultaneous IO using them. They do not have the capacity to look at
the separate time streams and correlate them to do synchronized
simultaneous IO. Consequently, they will only do IO with a single
device.
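That single-device assumption looks roughly like this in a typical
client (again a hedged sketch with the same era's C API, not code from
this thread): the app registers one IOProc on one AudioDeviceID and
expects that callback to hand it the device's input and output buffers
together, stamped against one time source.

#include <CoreAudio/CoreAudio.h>

static OSStatus myIOProc(AudioDeviceID device,
                         const AudioTimeStamp *now,
                         const AudioBufferList *inputData,
                         const AudioTimeStamp *inputTime,
                         AudioBufferList *outputData,
                         const AudioTimeStamp *outputTime,
                         void *clientData)
{
    // With a single full-duplex engine, inputData and outputData both come
    // from this one device, and inputTime/outputTime are expressed against
    // the same time source. Input that lives on a *different* AudioDevice
    // never shows up here.
    return noErr;
}

void startIO(AudioDeviceID device)   // hypothetical helper
{
    AudioDeviceAddIOProc(device, myIOProc, NULL);
    AudioDeviceStart(device, myIOProc);
}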
If you choose to present your device as multiple engines, you should
be prepared for many apps not being able to use your other engines. For
some devices, this is fine and dandy and achieves what the device
developer is looking for. For other devices, like Phil's for instance,
it results in an unusable driver that doesn't come close to achieving
the desired results.
You, being the driver developer, need to figure out exactly what you
want to present to applications and then use the tools the
IOAudioFamily provides to achieve that.
On Sep 20, 2004, at 10:26 AM, Phil Montoya wrote:
The single full-duplex engine is more efficient and gives you a more
compatible driver. However, in order to use a single engine, it is
critical that your hardware provide a single time stamp for all of
the streams (both input and output). There is just one call to start
and stop the engine, and the timing is shared among all streams.
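For the driver side, here is a minimal sketch of that single-engine
layout, following the usual IOAudioFamily sample-driver pattern rather
than our actual code; MyAudioEngine (an IOAudioEngine subclass), the
buffers, and the format are placeholders. It attaches both an input
and an output IOAudioStream to one IOAudioEngine so the HAL publishes
one full-duplex AudioDevice:

#include <IOKit/audio/IOAudioEngine.h>
#include <IOKit/audio/IOAudioStream.h>

bool MyAudioEngine::createStreams(void *outBuf, void *inBuf, UInt32 bufSize)
{
    // Placeholder format: 16-bit signed PCM, stereo, 44.1 kHz.
    static const IOAudioStreamFormat format = {
        2,                                              // channels
        kIOAudioStreamSampleFormatLinearPCM,
        kIOAudioStreamNumericRepresentationSignedInt,
        16, 16,                                         // bit depth / width
        kIOAudioStreamAlignmentHighByte,
        kIOAudioStreamByteOrderBigEndian,
        true,                                           // mixable
        0                                               // driver tag
    };
    IOAudioSampleRate rate = { 44100, 0 };

    const IOAudioStreamDirection dirs[2] = { kIOAudioStreamDirectionOutput,
                                             kIOAudioStreamDirectionInput };
    void *buffers[2] = { outBuf, inBuf };

    for (int i = 0; i < 2; i++) {
        IOAudioStream *stream = new IOAudioStream;
        if (!stream || !stream->initWithAudioEngine(this, dirs[i], 1)) {
            if (stream) stream->release();
            return false;
        }
        stream->setSampleBuffer(buffers[i], bufSize);   // same size for both
        stream->addAvailableFormat(&format, &rate, &rate);
        stream->setFormat(&format);
        addAudioStream(stream);                         // both on ONE engine
        stream->release();
    }
    return true;
}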
Initially I created two engines, one for input and the other for
output, mainly because our hardware had separate wrap interrupts for
the input and output buffers. I discovered this driver worked in
most applications, but there were some applications that assumed the
output device was full duplex and could also do input. Logic
Platinum 6 is one example; it never saw the input engine or input
stream.
I rewrote the driver to have a single engine and life is better.
The design flexibility you have in Core Audio is somewhat of a myth,
and you can get yourself coded into a corner even if you do something
that is totally within the design guidelines but not really supported
by applications that use Core Audio. To me, it made perfect sense
to have separate engines with separate start/stop and time stamping
for each stream, since that is how our hardware was modeled. But if
you use the single-engine model, the assumption is that the hardware's
input and output are somehow synchronized and the buffers are the same
size. Fortunately, using an interrupt that fires every buffer's worth
of samples and some offset calculations, we could make this model
work.
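A hedged sketch of that approach (class names and the constants are
hypothetical, not taken from our driver): the wrap interrupt takes a
single time stamp that serves as the clock for every stream on the
engine, and the fixed distance between the input and output DMA
positions is reported as an offset and latency rather than as a
second engine.

#include <IOKit/audio/IOAudioEngine.h>
#include <IOKit/IOFilterInterruptEventSource.h>

enum {
    kAssumedSafetyOffsetFrames = 64,   // placeholder values, not measured
    kAssumedLatencyFrames      = 32
};

void MyAudioEngine::setupTiming()
{
    // Tell the HAL how far the hardware runs ahead of / behind the
    // engine's notion of "now", instead of publishing separate engines.
    setSampleOffset(kAssumedSafetyOffsetFrames);
    setSampleLatency(kAssumedLatencyFrames);
}

void MyAudioEngine::interruptHandler(OSObject *owner,
                                     IOInterruptEventSource *source,
                                     int count)
{
    MyAudioEngine *engine = OSDynamicCast(MyAudioEngine, owner);
    if (!engine) {
        return;
    }
    // Fires once per buffer's worth of samples; this one time stamp is
    // the clock for every IOAudioStream attached to the engine.
    engine->takeTimeStamp();
}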
-Phil
On Sep 20, 2004, at 1:51 AM, nick wrote:
Hi,
I've noticed that some drivers create one AudioEngine with multiple
streams, whilst others create multiple audio engines with 1 audio
stream each.
Is there *any* advantage to having only one AudioEngine?
Does it have *any* subtle effects (eg performance for duplex audio
i/o)?
I'd like to know before I commit too much code to any one design!
Cheers,
Nick
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden