

Re: AudioConverterFillComplexBuffer


  • Subject: Re: AudioConverterFillComplexBuffer
  • From: Herbie Robinson <email@hidden>
  • Date: Tue, 26 Apr 2005 16:58:22 -0400

At 1:53 PM -0400 4/26/05, Craig Bakalian wrote:
Hi,
I must convert some audio data. I am on pages 50-51 of the CoreAudio PDF, and I am not sure I understand the AudioConverterRef process. I understand that I set up a converter instance with two AudioStreamBasicDescriptions, in and out. However, in AudioConverterFillComplexBuffer(converter, inputProcPtr, userData, &ioOutputDataPacketSize, &bufferList, NULL), I don't understand how the &bufferList works. Am I to create an instance of an AudioBufferList and fill in mBuffers[i]->mData with the data I need converted? Also, for the userData, must I set up constants that define what the fields of mData will be set to in the inputProc function (as on page 51)?
Or, is there any further documentation, or examples of a working AudioConverterRef out there?

The AudioConverterRef is an opaque structure: you create it and pass it to AudioConverterFillComplexBuffer, and beyond that you don't see inside it. The input and output buffers (which you allocate) must be compatible with the AudioStreamBasicDescriptions you specified.


The buffer list parameter to AudioConverterFillComplexBuffer is for the output. You fill in all the fields and allocate a buffer (or buffers) and AudioConverterFillComplexBuffer fills in the data.

Also, it appears to me that kLinearPCMFormatFlagIsNonInterleaved is not implemented for output buffers (at least not in the cases that I tried).

The input data is supplied to AudioConverterFillComplexBuffer by the inputProc you give it. Here are two input procs that I wrote: the first uses the AudioFile interface and the second uses QuickTime. Note that neither supports the general case: they only support a single buffer and don't support packet descriptions. I wasn't entirely clear on how the packet descriptions work (or, more specifically, how to tell when they are needed). They are obviously not needed for PCM, as one might expect, and they were also not needed for MP3 (at least not when coming from QuickTime). Also, it is possible that some of the formats that come out of QuickTime might produce de-interleaved data....

static OSStatus
AudioFileInputDataProc(AudioConverterRef inAudioConverter,
                       UInt32 *ioNumberDataPackets,
                       AudioBufferList *ioData,
                       AudioStreamPacketDescription **outDataPacketDescription,
                       void *inUserData)
{
    AudioProcessingMailbox *ap = (AudioProcessingMailbox *) inUserData;
    UInt32 numBytes;
    UInt32 numPackets = *ioNumberDataPackets;
    OSStatus stat = 0;

    /* This proc only supports a single interleaved buffer. */
    if (ioData->mNumberBuffers != 1)
        return kAudioConverterErr_InvalidInputSize;

    /* Never read more than the scratch buffer can hold. */
    if (numPackets > maxSamples)
        numPackets = maxSamples;

    stat = AudioFileReadPackets(ap->mbFileID, false, &numBytes, NULL,
                                ap->mbPosition, &numPackets,
                                ap->mbReadBuf.mBuffers[0].mData);

    /* Report how many packets were actually supplied. */
    *ioNumberDataPackets = numPackets;

    ioData->mBuffers[0].mNumberChannels = ap->mbReadBuf.mBuffers[0].mNumberChannels;
    ioData->mBuffers[0].mDataByteSize   = (numBytes < ap->mbReadBuf.mBuffers[0].mDataByteSize)
                                              ? numBytes
                                              : ap->mbReadBuf.mBuffers[0].mDataByteSize;
    ioData->mBuffers[0].mData           = ap->mbReadBuf.mBuffers[0].mData;

    /* No packet descriptions for constant-bitrate data. */
    if (outDataPacketDescription)
        *outDataPacketDescription = NULL;

    if (!stat)
        ap->mbPosition += numPackets;

    return stat;
}

static OSStatus
QTFileInputDataProc(AudioConverterRef inAudioConverter,
                    UInt32 *ioNumberDataPackets,
                    AudioBufferList *ioData,
                    AudioStreamPacketDescription **outDataPacketDescription,
                    void *inUserData)
{
    AudioProcessingMailbox *mbx = (AudioProcessingMailbox *) inUserData;
    long numPackets = *ioNumberDataPackets;
    long size;
    TimeValue sampleTime;
    TimeValue durationPerSample;
    long sampleDescriptionIndex;
    long numSamples;
    short sampleFlags;
    OSErr err = 0;

    /* This proc only supports a single interleaved buffer. */
    if (ioData->mNumberBuffers != 1)
        return kAudioConverterErr_InvalidInputSize;

    err = GetMediaSample(mbx->mbTrack->media,
                         mbx->mbBufHandle,
                         numPackets * mbx->mbDataFormat.mBytesPerPacket,
                         &size,
                         mbx->mbPosition,
                         &sampleTime,
                         &durationPerSample,
                         mbx->mbSampleDescriptionH,
                         &sampleDescriptionIndex,
                         maxSamples,
                         &numSamples,
                         &sampleFlags);

    /* Report how many packets were actually supplied. */
    *ioNumberDataPackets = numSamples;

    ioData->mBuffers[0].mNumberChannels = mbx->mbDataFormat.mChannelsPerFrame;
    ioData->mBuffers[0].mDataByteSize   = size;
    ioData->mBuffers[0].mData           = *(mbx->mbBufHandle);

    /* No packet descriptions for constant-bitrate data. */
    if (outDataPacketDescription)
        *outDataPacketDescription = NULL;

    if (!err)
        mbx->mbPosition += numSamples;

    return err;
}

--
-*****************************************
**  http://www.curbside-recording.com/  **
******************************************
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


  • Follow-Ups:
    • Re: AudioConverterFillComplexBuffer
      • From: Doug Wyatt <email@hidden>
  • References:
    • AudioConverterFillComplexBuffer (From: Craig Bakalian <email@hidden>)
