Re: pointer for newbie coming from direct-sound background
- Subject: Re: pointer for newbie coming from direct-sound background
- From: Brian Willoughby <email@hidden>
- Date: Mon, 10 Mar 2008 19:21:45 -0700
Hi Tom,
The ioctl() approach is more of a Unix way of doing things. It
necessarily involves some amount of system buffering, plus
out-of-band communication of settings such as sample rate and
format. There would also be issues with coordinating more than one
process that wants access to the same audio device. It would be very
challenging to support applications like Logic running simultaneously
with iTunes via a limited API such as ioctl().
CoreAudio offers a couple of options for recording.
The modern and recommended avenue for recording is to use the
provided DefaultOutputAudioUnit, and use it to access the input
device. With a few exceptions (e.g. an input-only device or output-
only device), this is going to be the easiest path. Aggregate
devices can even make those exceptions easy to handle. You'll have
simple support for format conversion so that you can record the
format you desire, even if the hardware does not directly support it
on the physical interface.
The most basic avenue for recording, which has been part of CoreAudio
since the beginning, is via the Hardware Abstraction Layer (HAL).
The HAL allows you to open the input device and get more direct
access to it. You would need to manually create an AudioConverter
object to handle any format changes if the hardware does not support
the format you want. I personally prefer the HAL, because of my
perception that it is lower overhead and more direct. But it is
certainly a more difficult place to start.
Documentation for all of the above is installed with the Xcode
Developer environment - at least it's there if you install the ADC
Reference Library and keep it updated online. There is an overview
and introduction to Core Audio which mentions the HAL, Audio Units,
Audio Codecs, Audio Toolbox, MIDI Services, and Core Audio Types.
You'll need to learn where each technology fits, and then dive deeper
for details; e.g., look under the Audio Toolbox Framework for
AudioConverter. The HAL and Audio Units have their own sections, but
the DefaultOutputAudioUnit is a very special case AU which is
provided for you, while the rest of the AU documentation is intended
mostly for people developing their own processing plugins.
/Developer/ADC Reference Library/reference/MusicAudio/idxCoreAudio-date.html
Brian Willoughby
Sound Consulting
On Mar 10, 2008, at 18:52, Walsh, Tom wrote:
I want to write a program that records audio. I am thinking I would
open some audio source device, perhaps issue some IOCTL's, and maybe
read the sample data?
I'm not finding that level of discussion of the architecture of
CoreAudio - perhaps it's simply assumed that everyone knows these
things?
There seems to be plenty of discussion of effects filters and
processing of stored audio, but what about live audio sources like
the line input? Is this a socket in the file-system, a device
node, ...? What calls does it support (e.g. open/read/write/close/
ioctl)?
Thanks, and sorry for such basic questions,
Tom Walsh
_______________________________________________
Coreaudio-api mailing list (email@hidden)