Re: A simple 'Device Through' app (need some help).


  • Subject: Re: A simple 'Device Through' app (need some help).
  • From: "Mikael Hakman" <email@hidden>
  • Date: Sun, 29 Jun 2008 18:23:01 +0200
  • Organization: Datakonsulten AB

On Sunday, June 29, 2008 12:46 AM, Brian Willoughby wrote:
On Jun 28, 2008, at 10:29, Andy Peters wrote:
On Jun 27, 2008, at 12:39 AM, Brian Willoughby wrote:
I recently simplified CAPlayThrough to deal with a 20-in, 22-out device (of which I only needed 7 in and 2 out). However, I've got some bad news for you: The only way to simplify things is when you have a device where input and output are not separate. Since you've already told us that input and output are separate devices (this is a USB-Audio device, right? - well, you can thank the designers of the USB specification for that harsh reality),

I have to agree with Mikael here ... there is no good reason why the ins and outs need to be part of the same "device." And in fact, you can imagine a case where the inputs run at one sample frequency and the outputs run at another.

There's a lot less disagreement here than you might think. CoreAudio handles both the case where input and output are the same device and the case where they are not. CoreAudio even supports multiple levels - direct access versus abstraction. The only thing that CoreAudio lacks is an even higher-level abstraction which handles everything automatically. The complaint is that this lack requires programmers to come up with the code themselves.


The oldest part of CoreAudio is the HAL. When programming at that direct level, you always connect separately to input and output, and thus you're always prepared to deal with the variety of hardware configurations (so long as you can detect when SRC is needed). But this requires a lot of programming which goes beyond the beginner level.
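What "connecting separately to input and output" implies in practice is two callbacks that fire on independent device clocks and share only a buffer. Below is a minimal sketch of that pattern in Python rather than real HAL calls - the function names are invented for illustration; the actual API goes through AudioDeviceCreateIOProcID and friends:

```python
from collections import deque

# Two independent IOProcs bridged by a FIFO. The devices run on their
# own clocks, so the output side must tolerate underruns -- and a real
# application must also detect when sample-rate conversion is needed.
fifo = deque()

def input_ioproc(captured):      # fired on the input device's clock
    fifo.extend(captured)

def output_ioproc(n_frames):     # fired on the output device's clock
    # Drain what we have; pad with silence when the output clock has
    # run ahead of the input clock.
    return [fifo.popleft() if fifo else 0.0 for _ in range(n_frames)]

# Simulated, unsynchronized callback sequence:
input_ioproc([0.1, 0.2, 0.3])
print(output_ioproc(5))  # [0.1, 0.2, 0.3, 0.0, 0.0]
```

The buffering, underrun handling, and drift detection are exactly the parts that "go beyond the beginner level" - none of it is supplied for you at the HAL layer.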

There are a lot of devices that present themselves at the HAL level as having both inputs and outputs. Consequently, these devices require only one IOProc - you get the inputs at the same time as you put the outputs. One example of such a device is your own MOTU 896HD, which is FireWire-connected. Another example is the M-Audio USB devices. Many other professional and semi-professional devices do the same. Whether this depends on the device being a FireWire device, or on the USB (or PCI) device having a special driver, I don't know. There is, however, one thing common to all these devices; in fact, it is an axiom in digital audio processing: the whole system must be controlled by one clock only (external, internal, or signal-provided), using the _same_ SR on all its inputs and outputs. With these devices and under these circumstances, programming CoreAudio at the HAL level is almost as easy as ASIO.
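The single-clock axiom is easy to quantify. As an illustrative sketch (the 50 ppm figure here is hypothetical, though within common crystal tolerances), two nominally identical sample rates drift apart like this:

```python
# Why "one clock only" matters: two crystals that are both nominally
# 44.1 kHz but differ by 50 ppm diverge surprisingly fast.
NOMINAL_SR = 44100.0
PPM_ERROR = 50e-6  # hypothetical, but a typical crystal tolerance

def drift_samples(seconds: float) -> float:
    """Samples by which the two devices diverge after `seconds`."""
    return NOMINAL_SR * PPM_ERROR * seconds

per_minute = drift_samples(60)    # ~132 samples per minute
per_hour = drift_samples(3600)    # ~7938 samples per hour
print(per_minute, per_hour)
```

At roughly 8000 samples of divergence per hour, an unsynchronized input/output pair must either drop/duplicate samples or resample continuously - which is why a shared clock makes the single-IOProc devices so much easier to program against.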


The 'newer' part of CoreAudio is the output audio units, which handle many things automatically, and also happen to present input together with output - but only when they come from the same device. This part of the API is much easier than the HAL, but it still stops short of handling absolutely everything automatically. This is also the recommended API for most audio programmers, especially beginners.

It is easier only if your app does what these levels are designed to do, using the programming model that the designers of these higher levels had in mind. If not, then you are on your own, which often means the HAL.


CAPlayThrough provides sample code that will combine input and output, even if they are not the same device. One could argue that this should be part of the API, somehow, since this is a useful feature that is provided by other audio APIs. Maybe some day it will be. For now, just be thankful that we're no longer forced to work exclusively in the HAL.

The CAPlayThrough sample is neither simple nor very didactic. A console-based ASIO sample doing the same task is much, much simpler. It can be written as one small main function and one super-easy callback function.
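To make the comparison concrete, here is a hedged sketch of the shape Mikael describes - one small main and one trivial callback - with the driver loop simulated in Python rather than calling any real ASIO (or CoreAudio) API:

```python
def process(inputs, n_frames):
    """The 'super-easy callback': because one synchronized device
    delivers input and output buffers together, on one clock,
    play-through is just a copy."""
    return list(inputs)  # device-through: outputs := inputs

def main():
    # A real host would register the callback with the driver; here we
    # fake the driver delivering three 4-frame blocks from one device.
    for block in ([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]):
        outputs = process(block, len(block))
        assert outputs == block  # bit-exact pass-through

main()
```

No FIFO, no underrun handling, no drift detection - the contrast with the split-device case is the whole point of the complaint.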


This could be another example of "Be careful what you ask for." There are already situations where iTunes distorts the audio because of all the automatic conversion happening in the output audio units. Not the kind of distortion that would bother the average user listening on their built-in speakers, but certainly a problem for audiophiles who are paying top dollar to listen to 24-bit 96 kHz recordings on appropriate audio systems.

iTunes distorts audio because you cannot tell it to use your audio interface exclusively, or to set the device's sample rate to match the rate the recording was actually made at. According to their own statement, this vendor does it right on iPods and Apple TV. It is incomprehensible that they don't do it on hardware used as an advanced DAW. As for iTunes, it doesn't even allow you to select which interface to use!


If and when CoreAudio offers automatic play-through coalescing of USB devices that have divided input and output, this will be yet another area where lack of direct control will allow unplanned distortions. The philosophy of CoreAudio is to start with precise direct control of the hardware, and build from there. We should understand that simplicity does not come without a price. Admittedly, CoreAudio could make things a lot simpler for newbie programmers who have never written an audio program before. But there is also an advantage in having access to the precision of direct control. Competing APIs do not offer the same undistorted control of such a wide variety of audio hardware.

Virtually splitting one device into two (one for input and one for output) just because it happens to be USB- (or PCI-) connected is not what I call "precise direct control of the hardware".


There is simplicity and simplicity. A task such as, e.g., "play this file" of course needs a simple high-level API hiding all the details. But this is hardly "core", IMO. A task of filtering audio on its way from a source to a destination is at a much lower level, requires much more understanding of the details, and of course needs a supporting API. Until you discover that you cannot insert your filter chains into the most common system-provided applications.

Accessing the bits delivered by an audio interface, and likewise outputting bits to such an interface, need not be complex. On the contrary, this should be the simplest thing to do. The difficulty should be in knowing what to do with these bits.

BTW, do you know of many USB audio interfaces using one SR for the input and another SR for the output, or using one quartz crystal for the input and another for the output?


Regards/Mikael

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


References: 
 >A simple 'Device Through' app (need some help). (From: "Glenn McCord" <email@hidden>)
 >Re: A simple 'Device Through' app (need some help). (From: Brian Willoughby <email@hidden>)
 >Re: A simple 'Device Through' app (need some help). (From: Andy Peters <email@hidden>)
 >Re: A simple 'Device Through' app (need some help). (From: Brian Willoughby <email@hidden>)
