Interpreting output of AudioConverter

  • Subject: Interpreting output of AudioConverter
  • From: Heath Raftery <email@hidden>
  • Date: Wed, 25 May 2005 02:11:07 +1000

This is essentially a repost of an earlier issue, fleshed out and then summarised as the problem became better specified. I still can't find any relevant documentation on this, and am very surprised that using this API is so shrouded in secrecy.

My AudioConverter appears to be operational now, but I have no idea how I'm supposed to interpret the results. Can someone please explain how the data comes out, the packet/frame/byte relationship, or where I might get this information from? Specifically, I'm going from:

2 ch, 44100 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float, deinterleaved

to

1 ch, 512 Hz, 'Qclp' (0x00000000) 0 bits/channel, 35 bytes/packet, 160 frames/packet, 0 bytes/frame

I seem to be getting 800 packets (== 3200 bytes/channel) from the microphone in the first format, into fBufferList without dramas. I then submit it to the converter with this:

AudioConverterFillComplexBuffer(fInputConverter,
                                supplyDataForConversionProc,
                                self,
                                &packetsRequested,
                                fConvertedBufferList,
                                NULL);

At first I set packetsRequested to the full 800. My supplyDataForConversionProc would then simply do this:

for (i = 0; i < fBufferList->mNumberBuffers; ++i)
{
    ioData->mBuffers[i].mNumberChannels = fBufferList->mBuffers[i].mNumberChannels;
    ioData->mBuffers[i].mData           = fBufferList->mBuffers[i].mData;
    ioData->mBuffers[i].mDataByteSize   = fBufferList->mBuffers[i].mDataByteSize;
}
*ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize / frameSize;


This provides the full 3200 bytes * 2 channels that's in the input buffer. The result is that packetsRequested remains equal to 800, but since the output format has 35 bytes/packet, the output buffer blows out to 28000 bytes. Hardly a compression scheme! I tried instead setting packetsRequested to 5 (since 800 frames at 160 frames/packet is 5 packets), but packetsRequested comes back as 0, and no one seems happy.

How do I provide the right amount of data?
How do I interpret the different values for bytes/packet, frames/packet and bytes/frame?
What format should I expect the data to return in?


Heath


_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden