
Re: Using AudioQueues with AudioConverterFillComplexBuffer....


  • Subject: Re: Using AudioQueues with AudioConverterFillComplexBuffer....
  • From: Ron Burgundy <email@hidden>
  • Date: Wed, 08 Jan 2014 09:33:31 -0700

Thanks for the tips! I appreciate that, and I will try those modifications. The only time I ever get static is actually when I try to use AudioUnits. With the AudioQueues there is no static at all; it just seems to choke on the fact that the ASBD is using 8000 for the sampling rate rather than the 44100 or 48000 the output unit is expecting.

Things stopped playing through once the downsample was successful because I was using

CheckError(AudioQueueNewOutput(&recordFormat,
                               MyAQOutputCallback,
                               (__bridge void *)self, NULL, NULL, 0,
                               &playerQueue), "AudioQueueNewOutput failed");

rather than 

CheckError(AudioQueueNewOutput(&outFormat,
                               MyAQOutputCallback,
                               (__bridge void *)self, NULL, NULL, 0,
                               &playerQueue), "AudioQueueNewOutput failed");

remembering that recordFormat is

recordFormat.mSampleRate = 44100;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
recordFormat.mBitsPerChannel = 16;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mFramesPerPacket = 1;
recordFormat.mBytesPerPacket = 4;
recordFormat.mBytesPerFrame = 4;
recordFormat.mReserved = 0;
        

and outFormat is 

outFormat.mFormatID = kAudioFormatLinearPCM;
outFormat.mSampleRate = 8000;
outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
outFormat.mBitsPerChannel = 16;
outFormat.mChannelsPerFrame = 1;
outFormat.mFramesPerPacket = 1;
outFormat.mBytesPerPacket = 2;
outFormat.mBytesPerFrame = 2;
outFormat.mReserved = 0;
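
For scale, those two ASBDs differ by about a factor of eleven in data rate, which matters when sizing the converted buffers:

    44100 Hz * 2 channels * 2 bytes = 176400 bytes/sec  (recordFormat)
     8000 Hz * 1 channel  * 2 bytes =  16000 bytes/sec  (outFormat)

so a full input buffer converts down to roughly 1/11th as many bytes.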
        

and when I changed the AudioQueue to use the 8000 sampling rate mono audio, that's when playthrough playback was choppy.

But it sounds like you are saying that if the playback is choppy it isn't because of the sampling rate; it's because of my buffer sizes and when I'm doing the output callbacks?

It's strange because before downsampling all of this worked flawlessly, so I didn't imagine there was a problem doing the allocations for the queue in the input callback. I hope I'm being completely clear about what I'm doing and when the problems have arisen.

On Jan 8, 2014, at 9:22 AM, Dave Bender <email@hidden> wrote:

I feel your pain. OK, here are a few things I would change:

AudioQueues are finicky things, and input callbacks should not touch output queues. Your callback is already in an AudioQueue stack frame, and you risk reentering the AudioQueue code and causing havoc. So:

-Do not schedule output in your input callback; use performSelectorOnMainThread and an intermediate method to do that task.
-Do not call AudioQueueAllocateBuffer in your input callback. Preallocate a pool of output buffers and use those (a sketch follows below).
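
A minimal sketch of those two changes, assuming a playerQueue, a bufferByteSize, and a CheckError like yours; kOutputPoolSize, outputPool, nextPoolIndex, and enqueueConvertedData: are made-up names for illustration:

#define kOutputPoolSize 3
static AudioQueueBufferRef outputPool[kOutputPoolSize];
static int nextPoolIndex = 0;

// at init time, after AudioQueueNewOutput succeeds:
for (int i = 0; i < kOutputPoolSize; i++) {
    CheckError(AudioQueueAllocateBuffer(playerQueue, bufferByteSize, &outputPool[i]),
               "AudioQueueAllocateBuffer (pool) failed");
}

// in the input callback, hand the converted bytes off instead of enqueuing:
NSData *converted = [NSData dataWithBytes:convertedData.mBuffers[0].mData
                                   length:convertedData.mBuffers[0].mDataByteSize];
[self performSelectorOnMainThread:@selector(enqueueConvertedData:)
                       withObject:converted
                    waitUntilDone:NO];

// on the main thread, reuse the next pool buffer and enqueue it
// (a real version would also track which buffers the queue has finished with):
- (void)enqueueConvertedData:(NSData *)data
{
    AudioQueueBufferRef buf = outputPool[nextPoolIndex];
    nextPoolIndex = (nextPoolIndex + 1) % kOutputPoolSize;
    memcpy(buf->mAudioData, data.bytes, data.length);
    buf->mAudioDataByteSize = (UInt32)data.length;
    CheckError(AudioQueueEnqueueBuffer(playerQueue, buf, 0, NULL),
               "AudioQueueEnqueueBuffer (playthrough) failed");
}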

As for debugging tips:
-Try to record your output sound on another machine. See if there is a pattern in how long each burst of static lasts and whether it relates to your buffer size.


On Wed, Jan 8, 2014 at 10:46 AM, Ron Burgundy <email@hidden> wrote:
Thanks for your quick response! You stay classy :)

So I DID actually figure out the problem to get downsampling working and the audio recording properly to the outputHandle file. I felt soooo dumb when I figured it out.

One of my first issues was this line:

 int bufferByteSize = [self computeRecordBufferSize:&outFormat inAudioQueue:recordQueue withSeconds:.5];

needed to be

  int bufferByteSize = [self computeRecordBufferSize:&recordFormat inAudioQueue:recordQueue withSeconds:.5];

i.e. the buffers were being calculated based on the wrong ASBD.

The code inside here was all fine:

- (void)processConverter:(AudioConverterRef)inAudioConverter withPacketCount:(UInt32 *)ioDataPacketCount bufferList:(AudioBufferList *)ioData packetDescription:(AudioStreamPacketDescription **)outDataPacketDescription

so I wasn't making any mistakes there.

and I made some changes inside this code here (this is just for posterity, in case anyone else runs into this issue):

- (void)handleAudioInQueue:(AudioQueueRef)inQueue
                withBuffer:(AudioQueueBufferRef)inBuffer
                    atTime:(const AudioTimeStamp *)inStartTime
               withPackets:(UInt32)inNumPackets
            andDescription:(const AudioStreamPacketDescription *)inPacketDesc
{
    //LOG_SELF_INFO;

    if (inNumPackets > 0) {

        // DDLogInfo(@"inBuffer->mAudioDataBytesCapacity: %u", inBuffer->mAudioDataBytesCapacity);

        outputBufferSize = 90 * 1024; // 90 KB output buffer
        packetsPerBuffer = outputBufferSize / 1; // note: the divisor should arguably be the out format's mBytesPerPacket (2)
        UInt8 *convertOutputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize); // this changed

        AudioBufferList convertedData;
        convertedData.mNumberBuffers = 1;
        convertedData.mBuffers[0].mNumberChannels = 1;
        convertedData.mBuffers[0].mDataByteSize = inBuffer->mAudioDataByteSize; // this changed
        convertedData.mBuffers[0].mData = convertOutputBuffer;

        if (currentAudioData != NULL)
        {
            free(currentAudioData);
            currentAudioData = NULL;
        }

        currentAudioData = (void *)calloc(1, inBuffer->mAudioDataBytesCapacity);

        memcpy(currentAudioData, inBuffer->mAudioData, inBuffer->mAudioDataByteSize);
        currentAudioDataByteSize = inBuffer->mAudioDataByteSize;
        UInt32 ioOutputDataPackets = packetsPerBuffer;

        AudioConverterFillComplexBuffer(audioConverter,
                                        MyAudioConverterCallback,
                                        (__bridge void *)self,
                                        &ioOutputDataPackets,
                                        &convertedData,
                                        NULL);

        // Enqueue on the output queue!
        AudioQueueBufferRef outputBuffer;
        CheckError(AudioQueueAllocateBuffer(playerQueue, inBuffer->mAudioDataBytesCapacity, &outputBuffer), "Input callback failed to allocate new output buffer");

        // copy the converted data to the output buffer
        memcpy(outputBuffer->mAudioData, convertedData.mBuffers[0].mData, convertedData.mBuffers[0].mDataByteSize);
        outputBuffer->mAudioDataByteSize = convertedData.mBuffers[0].mDataByteSize;

----

The only remaining problem I have left is that the playthrough audio is choppy because the output queue isn't supporting the outFormat with the sampling rate of 8000. Am I going to need to re-upsample to get the audio to play through properly?

My major gripe with AudioUnits is that literally EVERY single piece of example code I find is VERY finicky. Half the time when playthrough starts I get nothing but static at first and have to quit and relaunch a random number of times to actually get playthrough working.

This includes ALL Apple sample code, all the sample code from the Learning Core Audio book, the EZAudio project, and every single CAPlaythrough AudioUnit-based sample I find on github.

In addition, the weeks I spent banging my head against the wall trying to get the AudioQueues working I also spent digging through documentation and that Core Audio book trying to find ANY help on how to easily downsample and downmix audio. On this particular mailing list the ONLY tidbit I found was from 2006 and said something about using deprecated QuickTime functionality. I'm on a bit of a deadline with this project at work and honestly don't have the time to redo all of this with AudioUnits, with the only major sticking point left being playing through this new audio. (The recorded file I create is perfect, but it won't be in the end product and is just there for debugging, so I can't play that back.)

Sorry to be so verbose; any additional help would be appreciated!






On Jan 7, 2014, at 7:43 PM, Dave Bender <email@hidden> wrote:

Anchorman,
  I have struggled with half-working CoreAudio code as well. While my use case was different, I found the following problems with AudioQueues:

-Output queue would randomly stop playing sound (i.e. samples pushed to the queue would not make any sound)
-Halting an output queue would actually run callbacks for the input queue within the same stack, creating awful synchronization problems
-Sometimes the sound daemon would just stop working and I'd have to reset my iPod touch.

The only way I got things working was to eliminate the use of AudioQueues and use the AudioUnits and AUGraph functionality instead.
I suspect if you keep trying to fix your AudioQueue data you will just encounter other problems.

My advice: give up on AudioQueues. Go one level deeper into AudioUnits. Spend your time learning how to communicate between threads rather than wasting your time with the queues. AudioUnitRender in the input callback will get your higher bit rate data and you can probably even downsample it in that callback. Save the allocated output data in a shared array. Then in your output callback check if the array has any data and play it out.
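
For illustration, here is a minimal single-producer/single-consumer ring buffer along those lines; every name in it is hypothetical, not taken from the code in this thread:

typedef struct {
    SInt16          samples[8192];  // power-of-two capacity, in samples
    volatile UInt32 writePos;       // total samples written (input callback only)
    volatile UInt32 readPos;        // total samples read (output callback only)
} PlaythroughRing;

// called from the input callback after downsampling
static void RingWrite(PlaythroughRing *r, const SInt16 *src, UInt32 count)
{
    for (UInt32 i = 0; i < count; i++)
        r->samples[(r->writePos + i) % 8192] = src[i];
    r->writePos += count;  // publish only after the samples are in place
}

// called from the output render callback; returns how many samples it
// delivered, and the caller zero-fills the remainder on underrun
static UInt32 RingRead(PlaythroughRing *r, SInt16 *dst, UInt32 count)
{
    UInt32 avail = r->writePos - r->readPos;
    if (count > avail) count = avail;
    for (UInt32 i = 0; i < count; i++)
        dst[i] = r->samples[(r->readPos + i) % 8192];
    r->readPos += count;
    return count;
}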

The following github project has good reference code.
https://github.com/alexbw/novocaine/blob/master/Novocaine/Novocaine.m

-Dave


On Tue, Jan 7, 2014 at 12:30 PM, Ron Burgundy <email@hidden> wrote:
So I've been banging my head against the wall for a couple of weeks now to get this working and I've made next to no progress whatsoever.

A little background on the workflow I'm trying to establish, in conjunction with what I've attempted to do to accomplish it.

I want to pass audio from the input device through to the output device while downsampling it to 8000 Hz mono audio in WAV format.
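
In terms of the pieces described below, the intended pipeline is roughly:

    input device -> AudioQueueNewInput (44100 Hz stereo 16-bit LPCM)
                 -> AudioConverterFillComplexBuffer (down to 8000 Hz mono)
                 -> AudioQueueNewOutput for playthrough, plus an NSFileHandle dump to a WAV file for debugging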


The only thing that has even come close to working is using AudioQueues, heavily based on this example: https://github.com/MaxGabriel/AudioPlaythrough.git (however, the C is all wrapped up in Objective-C methods to make life easier). The basic init method to get things kicked off does the following:

        AudioStreamBasicDescription recordFormat;
        memset(&recordFormat, 0, sizeof(recordFormat));

(here is the ASBD of what I'm trying to end up with)

        AudioStreamBasicDescription outFormat;
        memset(&outFormat, 0, sizeof(outFormat));
        outFormat.mFormatID = kAudioFormatLinearPCM;
        outFormat.mSampleRate = 8000;
        outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        outFormat.mBitsPerChannel = 16;
        outFormat.mChannelsPerFrame = 1;
        outFormat.mFramesPerPacket = 1;
        outFormat.mBytesPerPacket = 2;
        outFormat.mBytesPerFrame = 2;
        outFormat.mReserved = 0;
        
        // pass the proper format in; we need to use an audio converter to downsample
        
        recordFormat.mSampleRate = 44100;
        recordFormat.mFormatID = kAudioFormatLinearPCM;
        recordFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        recordFormat.mBitsPerChannel = 16;
        recordFormat.mChannelsPerFrame = 2;
        recordFormat.mFramesPerPacket = 1;
        recordFormat.mBytesPerPacket = 4;
        recordFormat.mBytesPerFrame = 4;
        recordFormat.mReserved = 0;
        
        CheckError(AudioConverterNew(&recordFormat, &outFormat, &audioConverter),
                    "AudioConverterNew failed");
        
        //3.
        
        UInt32 propSize = sizeof(recordFormat);
        CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                          0,
                                          NULL,
                                          &propSize,
                                          &recordFormat), "AudioFormatGetProperty failed");
        
 
        CheckError(AudioQueueNewInput(&recordFormat, MyAQInputCallback, (__bridge void *)self, NULL, NULL, 0, &recordQueue), "AudioQueueNewInput failed");
    
        //5. This step might also be frivolous
        
        // Fills in the ASBD a little more
        UInt32 size = sizeof(recordFormat);
        CheckError(AudioQueueGetProperty(recordQueue,
                                         kAudioConverterCurrentOutputStreamDescription,
                                         &recordFormat,
                                         &size), "Couldn't get queue's format");
        
        
        //6.
        
        int bufferByteSize = [self computeRecordBufferSize:&recordFormat inAudioQueue:recordQueue withSeconds:.5];
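        // with these numbers, 0.5 seconds of 44100 Hz stereo 16-bit LPCM comes
        // to 44100 * 0.5 * 4 bytes-per-frame = 88200 bytes per buffer (assuming
        // the helper computes seconds * sample rate * bytes per frame)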
       
        //NSLog(@"%d",__LINE__);
        
        //7. Create and Enqueue buffers
        int bufferIndex;
        for (bufferIndex = 0;
             bufferIndex < kNumberRecordBuffers;
             ++bufferIndex) {
            AudioQueueBufferRef buffer;
            CheckError(AudioQueueAllocateBuffer(recordQueue,
                                                bufferByteSize,
                                                &buffer), "AudioQueueAllocateBuffer failed");
            CheckError(AudioQueueEnqueueBuffer(recordQueue, buffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
        }
        
  
        CheckError(AudioQueueNewOutput(&recordFormat,
                                       MyAQOutputCallback,
                                       (__bridge void *)self, NULL, NULL, 0,
                                       &playerQueue), "AudioQueueNewOutput failed");
        
 
        UInt32 playBufferByteSize;
        CalculateBytesForPlaythrough(recordQueue, recordFormat, 0.1, &playBufferByteSize, &(numPacketsToRead));
        
        bool isFormatVBR = (recordFormat.mBytesPerPacket == 0
                            || recordFormat.mFramesPerPacket == 0);
        if (isFormatVBR) {
            NSLog(@"Not supporting VBR");
            packetDescs = (AudioStreamPacketDescription*) malloc(sizeof(AudioStreamPacketDescription) * numPacketsToRead);
        } else {
            packetDescs = NULL;
        }
    
        //start the queues up!
        
        CheckError(AudioQueueStart(playerQueue, NULL), "AudioQueueStart failed");
        CheckError(AudioQueueStart(recordQueue, NULL), "AudioQueueStart failed");


The C callbacks to get things passed through are pretty cut and dried:

OSStatus MyAudioConverterCallback(AudioConverterRef inAudioConverter,
                                  UInt32 *ioDataPacketCount,
                                  AudioBufferList *ioData,
                                  AudioStreamPacketDescription **outDataPacketDescription,
                                  void *inUserData)
{
    CSServerSessionManager *myData = (__bridge CSServerSessionManager *)inUserData;

    [myData processConverter:inAudioConverter withPacketCount:ioDataPacketCount bufferList:ioData packetDescription:outDataPacketDescription];

    return 0;
}
static void MyAQInputCallback(void *inUserData,
                              AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    
    CSServerSessionManager *myData = (__bridge CSServerSessionManager*)inUserData;
   
    if (!myData.sessionActive) return;
    
    [myData handleAudioInQueue:inQueue withBuffer:inBuffer atTime:inStartTime withPackets:inNumPackets andDescription:inPacketDesc];
  
}

I KNOW I'm doing something wrong here, but I just don't know what. AudioConverterFillComplexBuffer could not be more convoluted and difficult to understand!!!


- (void)processConverter:(AudioConverterRef)inAudioConverter withPacketCount:(UInt32 *)ioDataPacketCount bufferList:(AudioBufferList *)ioData packetDescription:(AudioStreamPacketDescription **)outDataPacketDescription
{
    NSLog(@"iopacketcount: %u byteSize: %u", *ioDataPacketCount, currentAudioDataByteSize);

    // initialize in case of failure
    ioData->mBuffers[0].mData = NULL;
    ioData->mBuffers[0].mDataByteSize = 0;

    ioData->mBuffers[0].mData = currentAudioData;
    ioData->mBuffers[0].mDataByteSize = currentAudioDataByteSize;

    // note: *ioDataPacketCount is never updated here to say how many packets were actually supplied
}

(currentAudioData is just a void *currentAudioData declared in the header)
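
For reference, a hedged sketch of what that input proc generally has to do: besides pointing ioData at the source bytes, set *ioDataPacketCount to the number of packets actually supplied (for the 44.1 kHz stereo 16-bit LPCM format above, one packet is mBytesPerPacket = 4 bytes), and report zero once the current buffer is spent so AudioConverterFillComplexBuffer stops asking. This is a sketch against the same variables, not a confirmed fix:

- (void)processConverter:(AudioConverterRef)inAudioConverter withPacketCount:(UInt32 *)ioDataPacketCount bufferList:(AudioBufferList *)ioData packetDescription:(AudioStreamPacketDescription **)outDataPacketDescription
{
    if (currentAudioDataByteSize == 0) {
        // nothing left from the current input queue buffer
        *ioDataPacketCount = 0;
        return;
    }

    ioData->mBuffers[0].mData = currentAudioData;
    ioData->mBuffers[0].mDataByteSize = currentAudioDataByteSize;
    ioData->mBuffers[0].mNumberChannels = 2;

    // bytes -> packets for constant-bit-rate LPCM (4 bytes per packet here)
    *ioDataPacketCount = currentAudioDataByteSize / 4;

    // mark the buffer consumed so the next call reports zero packets
    currentAudioDataByteSize = 0;
}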

I know I'm doing something wrong here too, I just have no idea what :(

- (void)handleAudioInQueue:(AudioQueueRef)inQueue
                withBuffer:(AudioQueueBufferRef)inBuffer
                    atTime:(const AudioTimeStamp *)inStartTime
               withPackets:(UInt32)inNumPackets
            andDescription:(const AudioStreamPacketDescription *)inPacketDesc
{
    //LOG_SELF_INFO;
    
    
    if (inNumPackets > 0) {
        
        outputBufferSize = 23 * 1024; // 23 KB output buffer
        packetsPerBuffer = outputBufferSize / 2; // 2 bytes per packet in the 8 kHz mono out format
        UInt8 *convertOutputBuffer = (UInt8 *)malloc(sizeof(UInt8) * inBuffer->mAudioDataBytesCapacity); // note: sized from the input buffer's capacity, not outputBufferSize

        AudioBufferList convertedData;
        convertedData.mNumberBuffers = 1;
        convertedData.mBuffers[0].mNumberChannels = 1;
        convertedData.mBuffers[0].mDataByteSize = inBuffer->mAudioDataBytesCapacity;
        convertedData.mBuffers[0].mData = convertOutputBuffer;
       
        if (currentAudioData != NULL)
        {
            free(currentAudioData);
            currentAudioData = NULL;
        }
        
        currentAudioData = (void *)calloc(1, inBuffer->mAudioDataBytesCapacity);
        
        memcpy(currentAudioData, inBuffer->mAudioData, inBuffer->mAudioDataByteSize);
        currentAudioDataByteSize = inBuffer->mAudioDataByteSize;
        UInt32 ioOutputDataPackets = packetsPerBuffer;
        CheckError(AudioConverterFillComplexBuffer(audioConverter,
                                                   MyAudioConverterCallback,
                                                   (__bridge void *)self,
                                                   &inNumPackets, // note: passes the input packet count; ioOutputDataPackets above goes unused
                                                   &convertedData,
                                                   NULL), "fill complex buffer error");
        
        
        // Enqueue on the output Queue!
        AudioQueueBufferRef outputBuffer;
        CheckError(AudioQueueAllocateBuffer(playerQueue, convertedData.mBuffers[0].mDataByteSize, &outputBuffer), "Input callback failed to allocate new output buffer");
        
        // copy the converted data to the output buffer
        
        memcpy(outputBuffer->mAudioData, convertedData.mBuffers[0].mData, convertedData.mBuffers[0].mDataByteSize);
        outputBuffer->mAudioDataByteSize = convertedData.mBuffers[0].mDataByteSize;
        
        
        // if we don't create a release pool here, things leak like crazy.
        
        @autoreleasepool{
            
            //wrap the bytes into NSData so we can process and send it.
            
            NSData *currentData = [NSData dataWithBytes:convertedData.mBuffers[0].mData length:convertedData.mBuffers[0].mDataByteSize];
            
            [outputHandle writeData:currentData]; // outputHandle is an NSFileHandle created in the init method
            
          
            //NSLog(@"data: %@",[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]);
        }
        // Assuming LPCM so no packet descriptions
        CheckError(AudioQueueEnqueueBuffer(playerQueue, outputBuffer, 0, NULL), "Enqueing the buffer in input callback failed");
        recordPacket += inNumPackets;
    }
    
    
    if (self.sessionActive) {
        CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
    }
}


Sorry for the massive amount of code; I just want to be as thorough as possible when explaining the issue. I know I'm feeding the buffers incorrectly in AudioConverterFillComplexBuffer, I just can't wrap my head around how to use them with AudioQueues. I've literally trawled through YEARS of the mailing list thread to no avail. I've tried incorporating the EZAudio project to use AudioUnits instead, but it only played through in mono for some reason, the volume was reduced, and I couldn't figure out how to get downsampling or downmixing working there either.

So long story short: when the converter is in there it doesn't play through the output queue (I'm not as concerned about that till I can get downsampling/downmixing working properly anyway), and the output file that I create starts off well enough but then starts repeating and skipping samples.

PLEASE HELP!!


