

Re: IOAudioFamily : driver structure


  • Subject: Re: IOAudioFamily : driver structure
  • From: Jeff Moore <email@hidden>
  • Date: Mon, 20 Sep 2004 12:13:02 -0700

To clarify a bit, the design flexibility is not a myth. It is a feature. We encourage hardware developers to expose their devices in the most natural way they can. But in doing so, you have to understand what you are doing and how your design is going to be represented to applications. In Phil's case, he made a few key design mistakes at the beginning due to not understanding this relationship.

Basically, it all boils down to timing. Each IOAudioEngine represents a unique time source. The IOAudioStreams attached to the IOAudioEngine all have to be synchronized such that the time stamps the IOAudioEngine drops describe all of the streams.

From that, the HAL, whose definition of an AudioDevice is (not coincidentally) a set of streams and a single time source that governs them, will create an AudioDevice for each IOAudioEngine it encounters and present all the streams together to apps in their IOProcs.
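To illustrate the relationship, here is a hedged sketch in plain user-space C++ (not actual HAL or IOAudioFamily code; all names are hypothetical): because one engine is one time source, a single (host time at last buffer wrap, loop count) pair is enough to extrapolate the current sample position of every stream that engine owns.

```cpp
#include <cstdint>

// Hypothetical illustration of a single time source governing all streams
// on one engine: the engine records (hostTimeAtWrapNs, loopCount) each
// time its ring buffer wraps, and the current sample position for *every*
// stream follows from that one pair of numbers.
struct EngineTimeStamp {
    uint64_t hostTimeAtWrapNs;  // host clock at the last buffer wrap
    uint64_t loopCount;         // how many times the buffer has wrapped
};

// Extrapolate the engine's current sample frame from its last time stamp.
// bufferFrames and sampleRate are properties of the engine, shared by all
// of its streams.
uint64_t currentSampleFrame(const EngineTimeStamp &ts,
                            uint64_t nowNs,
                            uint32_t bufferFrames,
                            double sampleRate) {
    double elapsedSec = (nowNs - ts.hostTimeAtWrapNs) / 1e9;
    uint64_t framesSinceWrap = static_cast<uint64_t>(elapsedSec * sampleRate);
    return ts.loopCount * bufferFrames + framesSinceWrap;
}
```

Two engines mean two independent versions of this clock, which is exactly why the HAL presents them as two separate AudioDevices.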

It should also be said that most applications don't know how to do synchronized, simultaneous IO across multiple AudioDevices. They do not have the capacity to look at the separate time sources and correlate them. Consequently, they will only do IO with a single device.

If you choose to present your device as multiple engines, you should be prepared for many apps being unable to use the engines beyond the first. For some devices, this is fine and dandy and achieves what the device developer is looking for. For other devices, like Phil's for instance, it results in an unusable driver that doesn't come close to achieving the desired results.

You, being the driver developer, need to figure out exactly what you want to present to applications and then use the tools the IOAudio Family provides to achieve that.

On Sep 20, 2004, at 10:26 AM, Phil Montoya wrote:

The single full-duplex engine is more efficient and provides you with a more compatible driver. However, in order to use a single engine, it is critical that your hardware provide a single time stamp for all of the streams (both input and output). There is just one call to start and stop the engine, and the timing is shared among all streams.

Initially I created two engines, one for input and the other for output, mainly because our hardware had separate wrap interrupts for the input and output buffers. This driver worked in most applications, but some applications assumed the output device was full duplex and could also do input. Logic Platinum 6 is one example: it never saw the input engine or input stream.

I rewrote the driver to have a single engine and life is better.

The design flexibility you have in Core Audio is somewhat of a myth, and you can code yourself into a corner even doing something that is totally within the design guidelines but not really supported by applications that use Core Audio. To me, it made perfect sense to have separate engines with separate start, stop, and time stamping for each stream, since that is how our hardware was modeled. But if you use the single-engine model, the assumption is that the hardware's input and output are somehow synchronized and the buffers are the same size. Fortunately, using an interrupt that fires every buffer's worth of samples, plus some offset calculations, we could make this model work.
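The "offset calculations" mentioned above can be sketched roughly as follows (a hypothetical user-space illustration, not Phil's actual driver code; the names and the fixed-offset assumption are mine): with one interrupt time-stamping the shared engine clock, each stream's real hardware position is derived from the engine position plus a fixed, measured per-stream offset, wrapped within the ring buffer.

```cpp
#include <cstdint>

// Hypothetical sketch of deriving per-stream positions from a single
// shared engine clock: one interrupt fires every buffer's worth of
// samples and time-stamps the engine; each stream's DMA engine leads or
// lags that clock by a fixed, measured number of frames.
struct StreamOffset {
    uint32_t offsetFrames;  // measured lead/lag of this stream's DMA engine
};

// Map the engine position (frames since the last wrap) to this stream's
// position within its ring buffer, assuming both input and output buffers
// hold bufferFrames frames (the single-engine model's requirement).
uint32_t streamPosition(uint32_t enginePosFrames,
                        StreamOffset s,
                        uint32_t bufferFrames) {
    return (enginePosFrames + s.offsetFrames) % bufferFrames;
}
```

The design choice here is that only one stream's wrap needs a real interrupt; every other stream's position is pure arithmetic against the shared clock, which is what makes the single-engine model workable even when the hardware has independent DMA engines.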

-Phil


On Sep 20, 2004, at 1:51 AM, nick wrote:

Hi,

I've noticed that some drivers create one AudioEngine with multiple streams, whilst others create multiple audio engines with one audio stream each.

Is there *any* advantage to having only one AudioEngine?
Does it have *any* subtle effects (eg performance for duplex audio i/o)?


I'd like to know before I commit too much code to any one design!

Cheers,
Nick

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden




--

Jeff Moore
Core Audio
Apple



  • Follow-Ups:
    • Re: IOAudioFamily : driver structure
      • From: Chris Thomas <email@hidden>
    • Re: IOAudioFamily : driver structure
      • From: Phil Montoya <email@hidden>
  • References:
    • IOAudioFamily : driver structure (From: nick <email@hidden>)
    • Re: IOAudioFamily : driver structure (From: Phil Montoya <email@hidden>)
