Re: iPhone remoteIO AUGraph
- Subject: Re: iPhone remoteIO AUGraph
- From: "Raleigh F. Rinehart" <email@hidden>
- Date: Mon, 22 Jun 2009 22:59:19 -0500
As far as I can tell, the audio description matches the input. I use
the data returned from CoreAudio for most of the description:
AudioFileGetProperty(mAudioFile, kAudioFilePropertyDataFormat,
&propertySize, &mDataFormat);
This gets me:
Bytes per Packet: 2
Frames per Packet: 1
Bytes per Frame: 2
Bits per Channel: 16
Channels per Frame: 1
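For reference, that lookup boils down to something like the sketch below (the loadFileFormat helper, its error handling and the read-permission flag are illustrative, not my exact code):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: open a file and pull its data format into an ASBD.
static OSStatus loadFileFormat(CFURLRef fileURL,
                               AudioFileID *outFile,
                               AudioStreamBasicDescription *outFormat)
{
    OSStatus err = AudioFileOpenURL(fileURL, kAudioFileReadPermission,
                                    0 /* no type hint */, outFile);
    if (err != noErr) return err;

    UInt32 propertySize = sizeof(*outFormat);
    return AudioFileGetProperty(*outFile, kAudioFilePropertyDataFormat,
                                &propertySize, outFormat);
}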
I fill in the rest:
mSampleRate = 44100.00;
mFormatID = kAudioFormatLinearPCM;
mFormatFlags = kAudioFormatFlagIsSignedInteger |
               kAudioFormatFlagIsPacked; // kAudioFormatFlagsCanonical
This should match the input file exactly. I don't see anywhere in the
ASBD or related APIs where the bit rate is even mentioned, so it seems
it is up to the callback provider to take care of that.
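The completed description then gets set on the remoteIO unit's input scope of the output element, so the unit asks the render callback for data in that format. Roughly like this (the applyStreamFormat helper and the ioUnit name are illustrative, not my exact code):

#include <AudioUnit/AudioUnit.h>

// Sketch: hand the finished ASBD to the remoteIO unit's input scope,
// output element (0), where the render callback feeds it.
static OSStatus applyStreamFormat(AudioUnit ioUnit,
                                  const AudioStreamBasicDescription *format)
{
    return AudioUnitSetProperty(ioUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                0,              // output bus / element 0
                                format,
                                sizeof(*format));
}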
So, in looking over my callback function and how I am storing/
retrieving the data, I believe I've located the problem. It is
indeed, as you indicated, the wrong buffer size. I was using UInt32; once I
switched to UInt16 (as it should have been, Doh!) it works! Yah!
My callback looks like this now:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    ...
    ...
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        UInt16 *frameBuffer = buffer.mData;
        UInt16 packet;
        //UInt32 byteCount = 0;
        // loop through the buffer and fill the frames
        for (int j = 0; j < inNumberFrames; j++) {
            // getNextPacket returns one 16-bit sample, one frame
            packet = [file getNextPacket];
            frameBuffer[j] = packet;
        }
    }
    // TODO return a "real" return code instead of always okay :)
    return noErr;
}
The audio packet data itself is stored in a malloc'd UInt16 buffer.
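In case it helps anyone else, getNextPacket is just a thin accessor over that buffer; a hypothetical sketch (the ivar names and the end-of-data handling are illustrative, not my exact code):

// Sketch: hand out one 16-bit sample (one mono frame) per call.
// mSamples, mSampleCount and mReadIndex are assumed ivars.
- (UInt16)getNextPacket
{
    if (mReadIndex >= mSampleCount) {
        return 0;   // past the end: return silence
    }
    return mSamples[mReadIndex++];
}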
Thanks again for your help!
-raleigh
On Jun 22, 2009, at 9:26 PM, email@hidden wrote:
The bit rate issues are a bitch. They involve lots of playing with
the audio format description structures: you have to accurately
describe your input format.
And the callbacks. A good starting point is to check how many
buffers you are being asked to fill and what sizes (in bytes) the
buffers are. If the sizes you are expecting are not what you get,
then the audio format descriptor is usually (in my limited
experience) to blame.
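Something like this at the top of the render callback is usually enough to sanity check it (just a sketch of what I mean, dropped inside the callback where ioData and inNumberFrames are in scope):

// Sketch: dump what the render callback is actually being asked for.
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    printf("buffer %u: %u channels, %u bytes for %u frames\n",
           (unsigned)i,
           (unsigned)ioData->mBuffers[i].mNumberChannels,
           (unsigned)ioData->mBuffers[i].mDataByteSize,
           (unsigned)inNumberFrames);
}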
If filling every second frame makes it sound OK, then you are
probably casting the buffer to the wrong size.
I've had this issue when I had 32-bit audio data (two 16-bit samples, left and
right) and I was casting my audio buffer to a 16-bit int; then
filling every second frame would work because 2 x 16 = 32. I'm sure you
get the point.
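In code terms the geometry looks roughly like this (illustrative only; nextSample is a made-up stand-in for wherever your 16-bit samples come from, and this goes inside the render callback):

// Interleaved stereo: each 32-bit frame is a 16-bit left/right pair.
SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
for (UInt32 j = 0; j < inNumberFrames; j++) {
    SInt16 s = nextSample();   // hypothetical 16-bit sample source
    out[2 * j]     = s;        // left half of the 32-bit frame
    out[2 * j + 1] = s;        // right half -- drop this and you only
                               // fill every second 16-bit slot
}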
Anyway, if you work out the panning code, hit me back. We should be
posting these messages to the Core Audio mailing list as well, btw.