CoreAudio rewrite of SndPlayDoubleBuffer
- Subject: CoreAudio rewrite of SndPlayDoubleBuffer
- From: Colin Klipsch <email@hidden>
- Date: Wed, 26 Jun 2002 21:38:55 -0400
Greetings.
My Cocoa program needs to produce continuous sound whose samples are
computed in real time. I am essentially trying to duplicate the
behavior of SndPlayDoubleBuffer from the classic Mac OS -- or similarly,
the effect you can get by sending an endless sequence of 'buffer' and
'callback' commands to a SndChannel, ping-ponging in alternation between
two buffers of samples.
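(In case the old call isn't familiar: the pattern is plain double-buffering.
The OS plays one buffer while my code refills the other, and they swap at a
completion callback. Roughly, with invented function names rather than any
real Sound Manager or CoreAudio calls:

// Conceptual sketch only -- ComputeSamples, QueueForPlayback, and
// BufferDoneCallback are made-up names, not real API.
#define kBufFrames 4096
static unsigned char gBufA[kBufFrames], gBufB[kBufFrames];

extern void ComputeSamples(unsigned char *buf, unsigned long n);
extern void QueueForPlayback(unsigned char *buf, unsigned long n);

// At startup, gBufA and gBufB are both filled and queued; thereafter,
// whenever one finishes playing, the other is already playing, so the
// finished one is free to be refilled and requeued.
void BufferDoneCallback(unsigned char *justFinished)
{
    ComputeSamples(justFinished, kBufFrames);
    QueueForPlayback(justFinished, kBufFrames);
}

That ping-pong is the behavior I want to reproduce with CoreAudio.)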
I have some rules about the sound: it must be played at 22050 Hz, must
be 8-bit, unsigned (128-centered), linear (not logarithmic), and
monophonic (not stereo).
By mimicking code that I found in the Developer/Examples subtree, I am
nearly there. Specifically, I refer you to the file
'UsingDefaultNoAC.cpp', way down in the CoreAudio example projects,
which contains code that computes and plays a sine wave in real time.
I've understood most of this code, but a few questions remain despite my
scouring of the documentation. (CoreAudio, it seems, has really meager
online documentation; apologies if I've somehow missed it.)
I am using the following code to set up the sound:
static AudioUnit gAudioOut;              // globally accessible

AudioStreamBasicDescription format;
AudioUnitInputCallback      input;

input.inputProc       = &MyRenderer;     // external function
input.inputProcRefCon = NULL;

format.mSampleRate       = 22050.;
format.mBytesPerFrame    = 1;            // bytes per sound sample?
format.mFramesPerPacket  = 1;            // ???
format.mBytesPerPacket   = 1;            // always product of previous two?
format.mChannelsPerFrame = 1;            // isn't this redundant also?
format.mBitsPerChannel   = 8;
format.mFormatID         = kAudioFormatLinearPCM;
format.mFormatFlags      = kLinearPCMFormatFlagIsBigEndian |
                           kLinearPCMFormatFlagIsPacked;

OpenDefaultAudioOutput(&gAudioOut);
AudioUnitInitialize(gAudioOut);

AudioUnitSetProperty(gAudioOut,
                     kAudioUnitProperty_SetInputCallback, kAudioUnitScope_Global,
                     0, &input, sizeof(input));

AudioUnitSetProperty(gAudioOut,
                     kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,
                     0, &format, sizeof(format));

AudioOutputUnitStart(gAudioOut);
The function 'MyRenderer' isn't shown, but at runtime it is working as
expected. And yes, I know I need to do some error checking above. I'll
put that in later, I promise.
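(For reference, my callback is shaped roughly like the sketch below -- a
sine generator standing in for my real computation, with the signature
copied from the example's renderer as best I can tell:

// Simplified sketch of the render callback; assumes <AudioUnit/AudioUnit.h>
// and <math.h>.  With my mono 8-bit format, one frame is one byte, so
// mDataByteSize doubles as the frame count -- if I've read the example right.
OSStatus MyRenderer(void *inRefCon,
                    AudioUnitRenderActionFlags inActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    AudioBuffer *ioData)
{
    UInt8  *out = (UInt8 *)ioData->mData;
    UInt32  numFrames = ioData->mDataByteSize;
    static double phase = 0.0;
    const double freq = 440.0, rate = 22050.0;
    UInt32 i;

    for (i = 0; i < numFrames; i++)
    {
        out[i] = (UInt8)(128.0 + 100.0 * sin(phase));  // unsigned, 128-centered
        phase += 2.0 * 3.14159265358979 * freq / rate;
    }
    return noErr;
}

The real version computes more interesting samples, but the mechanics are
the same.)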
My questions then are as follows:
[1] I assume a 'frame' here must be what I've been calling a 'sound
sample'. Is that correct? But then what's a 'packet'? I'm guessing a
packet contains one frame for each channel of sound?
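(To make my guess concrete, here's the arithmetic I think those fields
imply for my mono, 8-bit format -- my own reading, not something I found
documented:

mBytesPerFrame  = mChannelsPerFrame * (mBitsPerChannel / 8) = 1 * 1 = 1
mBytesPerPacket = mFramesPerPacket  * mBytesPerFrame        = 1 * 1 = 1

which is at least consistent with the values I set above.)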
[2] Are the above fields in the 'format' structure set correctly for
single channel, 8-bit unsigned linear, 22050 Hz sound?
[3] How does one control the buffer size? My function 'MyRenderer'
always receives buffers of size 256. (The documentation doesn't seem to
say what the size will be; I found it out empirically.) I have a
specific buffer size other than 256 that I'd like to use, if that's
allowed. Also, can I supply my own buffers, or am I restricted to
whatever the OS passes to me?
[4] Taking a step back, is CoreAudio the way to go for this whole
process anyway? I don't need Carbon compatibility, if that's the
purpose CoreAudio serves. I'd be happy to use whatever API gets me
closest to OS X's audio hardware. CoreAudio seems to be that API, but I
thought I'd check with someone.
If it makes a difference to any of the above, my development environment
is Mac OS X 10.1.5 with the April 2002 Developer Tools.
Thanks in advance for any info or advice.
-- Colin K.