Re: AudioFileTools
- Subject: Re: AudioFileTools
- From: Ruben Perez <email@hidden>
- Date: Sun, 2 Jan 2005 09:08:39 -0800 (PST)
Hello everybody, I've been noticing there has been a
lot of agitation regarding building the AudioFileTools
example, but one main question remains: what is
the purpose (individually) of building this
code? If you're simply trying to gain some familiarity
with the process involved in playing an audio file
through CA, there are simpler ways ( better ways ).
For one, start with a somewhat cleaner version of the
example, like PlayAudioFileLite:
http://developer.apple.com/samplecode/PlayAudioFileLite/PlayAudioFileLite.html
A suggestion: try not to go for the newer versions;
keep in mind that there is a major OS upgrade coming
along (I read that one of you got an undefined constant
MAC_OS_X_VERSION_10_4; does the fact that there is no
OS X 10.4 release yet ring any bells?). I started with
the 'older' version of AudioFilePlay and it compiled
and ran flawlessly.
Also, keep in mind that these 'examples' are part of
the 'public utility' which in my head seems to also
read 'only a partial example'. The code isn't very
well structured, and the OOP is messy at best, like
they are figuring it out as they go too.
To me, the biggest part of the problem was
understanding what has to be done and the order in
which it has to be performed. The CoreAudio document
is horrible ( I recognize that it's still a draft )
and doesn't go over any specifics. Here is a clipping
of my personal notes; I hope it can be of help to
someone:
1) Create and set up a generic audio output AU.
2) Obtain the AU's output stream descriptor (use
kAudioUnitScope_Output; bus 0 is fine to get started).
3) Set the input stream descriptor (of the AU) to the
same format as the output (so that when you pass this
descriptor as the destination for the audio converter,
the format is already in the output form).
4) Open your file and set up your AudioFileID.
5) Get the appropriate stream descriptor for the file.
6) Create the audio converter, using the file's stream
descriptor as the source and the AU's INPUT
descriptor as the destination
(remember that you must have already performed #3
correctly).
7) Create the callback structure for the AU render.
8) Register the render callback (
AudioUnitSetProperty, use
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input and bus 0 for this case ).
9) Write your AudioConverterFillComplexBuffer input
callback.
10) Write the render callback, in which you basically
call AudioConverterFillComplexBuffer with whatever
buffer scheme you implemented.
11) Initialize and start your AudioUnit.
Keep in mind that each one of these may involve
several steps, like initializing, allocating a buffer,
etc...
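To make the order concrete, here is how I'd sketch those steps as one call sequence. Take it as an untested outline, not working code: all error checking is omitted, the input callback ignores packet descriptions (so it only handles constant-bitrate formats as written), and it uses the current AudioComponent API; on the 10.3/10.4-era SDKs this thread is about, you would open the unit with FindNextComponent/OpenAComponent from the Component Manager instead.

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioConverterRef gConverter;
static AudioFileID       gFile;
static SInt64            gPacket;  /* next packet to read from the file */

/* Step 9: the converter's input callback. The converter calls this when
   it needs more source data; we hand it raw packets from the file.
   A real version must also fill *outDesc for compressed formats. */
static OSStatus InputProc(AudioConverterRef conv, UInt32 *ioNumPackets,
                          AudioBufferList *ioData,
                          AudioStreamPacketDescription **outDesc,
                          void *userData)
{
    static char buf[32 * 1024];
    UInt32 bytes = sizeof(buf);
    AudioFileReadPacketData(gFile, false, &bytes, NULL, gPacket,
                            ioNumPackets, buf);
    gPacket += *ioNumPackets;
    ioData->mBuffers[0].mData = buf;
    ioData->mBuffers[0].mDataByteSize = bytes;
    return noErr;
}

/* Step 10: the AU render callback. The output unit pulls nFrames;
   we satisfy the pull through the converter. */
static OSStatus RenderProc(void *refCon, AudioUnitRenderActionFlags *flags,
                           const AudioTimeStamp *ts, UInt32 bus,
                           UInt32 nFrames, AudioBufferList *ioData)
{
    return AudioConverterFillComplexBuffer(gConverter, InputProc, NULL,
                                           &nFrames, ioData, NULL);
}

void SetUpAndPlay(CFURLRef fileURL)
{
    /* Steps 1-3: default output unit; copy its output format to its input
       scope so the converter's destination is already in output form. */
    AudioComponentDescription cd = { kAudioUnitType_Output,
        kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit unit;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &cd), &unit);

    AudioStreamBasicDescription outFmt;
    UInt32 size = sizeof(outFmt);
    AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 0, &outFmt, &size);
    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &outFmt, sizeof(outFmt));

    /* Steps 4-5: open the file and fetch its data format. */
    AudioStreamBasicDescription fileFmt;
    AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &gFile);
    size = sizeof(fileFmt);
    AudioFileGetProperty(gFile, kAudioFilePropertyDataFormat, &size, &fileFmt);

    /* Step 6: converter from the file's format to the AU input format. */
    AudioConverterNew(&fileFmt, &outFmt, &gConverter);

    /* Steps 7-8: register the render callback, input scope, bus 0. */
    AURenderCallbackStruct cb = { RenderProc, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    /* Step 11: initialize and start. */
    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
}
```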
Also, remember that you don't need pthreads or
multithreaded programming if all you are doing is
playing the file. At the simplest level you have two
choices: 1) read the whole file into one huge buffer, or
2) read smaller chunks of the audio file as you refill
the buffer ( by an amount equal to what the generic
output's pull is requesting, for example ). Both are
accomplished by setting the callbacks up correctly; they
don't require any thread programming. If you are about
to use threads, I recommend starting out by fork()ing
and using pipes, so you don't get caught solving
threading bugs when the point was to solve audio bugs.
Happy new year 2005 :)
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden