Re: AudioConverterNew() and 'perm' error on iPhone
- Subject: Re: AudioConverterNew() and 'perm' error on iPhone
- From: "Roni Music" <email@hidden>
- Date: Fri, 14 Nov 2008 11:23:30 +0100
snip...
I need to get access to the decoded audio data for further DSP
processing so I can't use the AudioQueue API.
Or is it possible to do this using the AudioQueue API?
AudioQueueOfflineRender is meant to enable this, but we are still looking
at some issues here - I don't have any update on this at this stage.
Do you mean there is a problem when using AudioQueueOfflineRender
on the iPhone only, or also on Mac OS X?
To learn how to use it, I tested AudioQueueOfflineRender based on
aqplay from your AudioQueueTools sample. This is on OS X, not the iPhone.
I can't get it to work, and judging by the archives I'm in good company,
so I could use some clarification.
I replaced the AudioQueueStart() section with the code below.
Here is what I do:
1. create an audio channel layout
2. set up the output format: 44100 Hz, 16-bit, interleaved stereo (in this case the same as the input file)
3. call AudioQueueSetOfflineRenderFormat
4. create a new output buffer using AudioQueueAllocateBuffer
5. create an AudioTimeStamp
6. call AudioQueueStart (it isn't clear if this is needed, but without it there is only silence)
7. read from the input file and enqueue a new buffer by calling AQTestBufferCallback() from the sample
8. call AudioQueueOfflineRender() to get the output data
9. increase the absolute sample frame time of the AudioTimeStamp, then repeat steps 7, 8, and 9 a couple of times
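In condensed form, and detached from aqplay's data structures, the call
order I'm assuming looks roughly like the sketch below. This is only how I
read the headers, not verified working code; CreatePreparedQueue() and
EnqueueNextBuffer() are placeholder names standing in for the queue setup
and the file-reading callback in the full snippet at the end of this mail.

#include <AudioToolbox/AudioToolbox.h>

// placeholders for the real queue setup and the file-reading callback
extern AudioQueueRef CreatePreparedQueue();          // hypothetical helper
extern void EnqueueNextBuffer(AudioQueueRef queue);  // hypothetical helper

void OfflineRenderSketch()
{
    AudioQueueRef queue = CreatePreparedQueue();

    // 1-2. output layout and format: 44100 Hz, 16-bit signed, interleaved stereo
    AudioChannelLayout layout = {};
    layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    AudioStreamBasicDescription fmt = {};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mFramesPerPacket  = 1;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 4;
    fmt.mBytesPerPacket   = 4;

    // 3. put the queue into offline rendering mode
    AudioQueueSetOfflineRenderFormat(queue, &fmt, &layout);

    // 4. buffer that receives the rendered PCM
    AudioQueueBufferRef outBuffer;
    AudioQueueAllocateBuffer(queue, 0x8000, &outBuffer);

    // 5. timestamp counted in sample frames of the output format
    AudioTimeStamp ts = {};
    ts.mFlags      = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;

    // 6. start the queue (without this I only get silence)
    AudioQueueStart(queue, NULL);

    for (int i = 0; i < 10; ++i) {
        // 7. hand the queue the next chunk of the input file
        EnqueueNextBuffer(queue);

        // 8. pull rendered frames out of the queue
        AudioQueueOfflineRender(queue, &ts, outBuffer, 4096);

        // ... DSP / file writing on outBuffer->mAudioData would go here ...

        // 9. advance the timestamp by the frames actually rendered
        ts.mSampleTime += outBuffer->mAudioDataByteSize / fmt.mBytesPerFrame;
    }

    AudioQueueDispose(queue, true);
}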
The first AudioQueueOfflineRender() call gives correct audio data back, but
after that it's just silence.
I'm probably doing something wrong and could use some help.
Thanks,
Rolf
////////////////////////
code snippet from main() in aqplay.cpp
....
// set the volume of the queue
XThrowIfError (AudioQueueSetParameter(myInfo.mQueue, kAudioQueueParam_Volume, volume), "set queue volume");
XThrowIfError (AudioQueueAddPropertyListener (myInfo.mQueue, kAudioQueueProperty_IsRunning, MyAudioQueuePropertyListenerProc, NULL), "add listener");
// here is my off line rendering code
#if TEST_OFFLINE
// Set the output layout -> stereo
AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
// Set the output format -> 44100 Hz, 16-bit, stereo (same as the input file used for testing)
CAStreamBasicDescription outFmt;
outFmt.mSampleRate = 44100.0;
outFmt.mFormatID = kAudioFormatLinearPCM;
outFmt.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
outFmt.mFramesPerPacket = 1;
outFmt.mChannelsPerFrame = 2;
outFmt.mBytesPerFrame = sizeof(SInt16) * outFmt.mChannelsPerFrame;
outFmt.mBytesPerPacket = outFmt.mBytesPerFrame * outFmt.mFramesPerPacket;
outFmt.mBitsPerChannel = (outFmt.mBytesPerFrame / outFmt.mChannelsPerFrame) * 8;
// change to offline rendering mode
err = AudioQueueSetOfflineRenderFormat(myInfo.mQueue, &outFmt, &layout);
// create an output buffer
AudioQueueBufferRef outBuffer;
UInt32 outBufferByteSize = bufferByteSize; // same size as the input buffers
XThrowIfError(AudioQueueAllocateBuffer(myInfo.mQueue, outBufferByteSize, &outBuffer), "AudioQueueAllocateBuffer failed");
// create a timestamp
AudioTimeStamp timeStamp;
timeStamp.mFlags = kAudioTimeStampSampleTimeValid;
timeStamp.mSampleTime = 0;
myInfo.mCurrentPacket = 0; // read infile from start
// it seems the audio queue must be started also in offline mode, else we get silence only
XThrowIfError(AudioQueueStart(myInfo.mQueue, NULL), "AudioQueueStart failed");
// write the raw data to a file
static const char* outFile = "audio.raw";
FILE *fp = fopen(outFile, "w+");
for (int i = 0; i < 10; i++)
{
    // use the input buffers
    int whatBuffer = i % kNumberBuffers;
    // it seems the callback is not automatically called in offline mode,
    // so read the input file and enqueue the data ourselves
    AQTestBufferCallback(&myInfo, myInfo.mQueue, myInfo.mBuffers[whatBuffer]);
    // the amount of data enqueued by the "callback"
    UInt32 inNumberFrames = myInfo.mDataFormat.BytesToFrames(myInfo.mBuffers[whatBuffer]->mAudioDataByteSize);
    // request the same number of sample frames -> inNumberFrames
    // this call always renders the whole buffer size, the value of inNumberFrames doesn't matter. why??
    err = AudioQueueOfflineRender(myInfo.mQueue, &timeStamp, outBuffer, inNumberFrames);
    // the number of frames we got back
    UInt32 outNumberFrames = outFmt.BytesToFrames(outBuffer->mAudioDataByteSize);
    // write to the file so we can listen to the result
    UInt32 nSamples = outNumberFrames * outFmt.NumberChannels();
    SInt16 *pBuf = (SInt16 *)outBuffer->mAudioData;
    fwrite(pBuf, 1, nSamples * sizeof(SInt16), fp);
    // increase the absolute sample frame time, is this correct??
    timeStamp.mSampleTime += inNumberFrames;
}
fclose(fp);
#else
// let's start playing now - stop is called in the AQTestBufferCallback when there's
// no more to read from the file
XThrowIfError(AudioQueueStart(myInfo.mQueue, NULL), "AudioQueueStart failed");
do
{
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, false);
} while (!myInfo.mDone /*|| gIsRunning*/);
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 1, false);
#endif
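For completeness: the raw file written above is just interleaved 16-bit
stereo at 44100 Hz, so the "further DSP processing" I mentioned could be
prototyped on it directly once the offline rendering works. A trivial,
stand-alone sketch (the -6 dB gain and the file names are only
placeholders for whatever processing is actually wanted):

#include <stdio.h>
#include <stdint.h>

// reads audio.raw (44100 Hz, 16-bit signed, interleaved stereo), applies a
// placeholder gain, and writes audio_dsp.raw in the same format
int main()
{
    FILE *in  = fopen("audio.raw", "rb");
    FILE *out = fopen("audio_dsp.raw", "wb");
    if (!in || !out)
        return 1;

    int16_t frame[2];   // one stereo frame (left, right); same layout as SInt16
    while (fread(frame, sizeof(int16_t), 2, in) == 2)
    {
        // placeholder DSP: -6 dB gain on both channels
        frame[0] = (int16_t)(frame[0] / 2);
        frame[1] = (int16_t)(frame[1] / 2);
        fwrite(frame, sizeof(int16_t), 2, out);
    }

    fclose(in);
    fclose(out);
    return 0;
}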