Re: Completely not getting it with AudioBufferList and CASpectralProcessor
- Subject: Re: Completely not getting it with AudioBufferList and CASpectralProcessor
- From: William Stewart <email@hidden>
- Date: Tue, 2 Dec 2008 10:43:33 -0800
For a buffer list that is interleaved there is one buffer and the
samples for each channel are adjacent to each other:
LRLR... in the case of stereo
For a deinterleaved buffer list there are as many buffers as there are
channels, and each buffer holds just one channel of data:
LLLL...
RRRR...
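To make the two layouts concrete, here is a minimal sketch of turning an interleaved LRLR stream into the two per-channel arrays a deinterleaved buffer list would hold - plain float arrays stand in for the AudioBuffer mData pointers here:

```c
/* Split an interleaved stereo stream (LRLR...) into two
   per-channel arrays (LLLL... and RRRR...).
   frame i occupies in[2*i] (left) and in[2*i + 1] (right). */
static void deinterleave_stereo(const float *in, float *left,
                                float *right, unsigned frames)
{
    for (unsigned i = 0; i < frames; ++i) {
        left[i]  = in[2 * i];      /* L sample of frame i */
        right[i] = in[2 * i + 1];  /* R sample of frame i */
    }
}
```

So for two stereo frames (L0,R0),(L1,R1) the input {1, -1, 2, -2} comes out as left = {1, 2} and right = {-1, -2}.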
So, a simple way of dealing with buffer lists regardless of how they
are laid out is to use the data members of the ABL struct:
To memset the entire contents of an ABL to 0:
AudioBuffer *buf = ioData->mBuffers;
for (UInt32 i = ioData->mNumberBuffers; i--; ++buf)
    memset(buf->mData, 0, buf->mDataByteSize);
So, if you use this as a starting point, I think that would help.
Audio Units use a canonical format of de-interleaved audio data - we
wanted one standard layout so it would be trivial to pass audio data
from one audio unit to the next. So, in general, effects deal only
with de-interleaved data.
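As an illustration of what that canonical deinterleaved layout looks like, here is a sketch that fills out a stereo ABL by hand. The struct definitions are local stand-ins mirroring the CoreAudioTypes.h declarations (in the real header mBuffers is a variable-length array, so a two-element copy like this only covers the stereo case):

```c
#include <stddef.h>

/* Local stand-ins for the CoreAudio structs; field names
   match the real CoreAudioTypes.h declarations. */
typedef struct {
    unsigned mNumberChannels;
    unsigned mDataByteSize;
    void    *mData;
} AudioBuffer;

typedef struct {
    unsigned    mNumberBuffers;
    AudioBuffer mBuffers[2];  /* variable-length in the real header */
} AudioBufferList;

/* Build a deinterleaved stereo ABL: two buffers, one channel
   each, which is the shape CASpectralProcessor::CopyInput
   expects when it indexes inInput->mBuffers[i] per channel. */
static void setup_deinterleaved_stereo(AudioBufferList *abl,
                                       float *left, float *right,
                                       unsigned frames)
{
    abl->mNumberBuffers = 2;

    abl->mBuffers[0].mNumberChannels = 1;
    abl->mBuffers[0].mDataByteSize   = frames * sizeof(float);
    abl->mBuffers[0].mData           = left;

    abl->mBuffers[1].mNumberChannels = 1;
    abl->mBuffers[1].mDataByteSize   = frames * sizeof(float);
    abl->mBuffers[1].mData           = right;
}
```

Note each buffer advertises one channel and frames * sizeof(float) bytes - not the 2 * frames * sizeof(float) an interleaved single-buffer ABL would carry.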
Have a look at AUOutputBL in Public Utility - that is robust for
creating an ABL that will represent the different "layouts" of linear
PCM described by an AudioStreamBasicDescription.
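A rough sketch of the bookkeeping AUOutputBL does for you: given the format flags and channel count, how many buffers the ABL needs and how big each one is. The StreamDesc struct and kIsNonInterleaved constant are local stand-ins for AudioStreamBasicDescription and kAudioFormatFlagIsNonInterleaved (which I believe is (1 << 5) in CoreAudioTypes.h, but treat that value as an assumption):

```c
/* Stand-in for kAudioFormatFlagIsNonInterleaved. */
enum { kIsNonInterleaved = 1 << 5 };

/* Stand-in for the AudioStreamBasicDescription fields used here.
   Note: for non-interleaved linear PCM, mBytesPerFrame describes
   ONE channel's worth of data, not all channels. */
typedef struct {
    unsigned mFormatFlags;
    unsigned mChannelsPerFrame;
    unsigned mBytesPerFrame;
} StreamDesc;

/* How many AudioBuffers an ABL for this format needs. */
static unsigned buffer_count(const StreamDesc *d)
{
    return (d->mFormatFlags & kIsNonInterleaved)
               ? d->mChannelsPerFrame : 1;
}

/* Byte size of each of those buffers for a given frame count. */
static unsigned buffer_bytes(const StreamDesc *d, unsigned frames)
{
    return d->mBytesPerFrame * frames;
}
```

For David's interleaved format (2 channels, 8 bytes per frame) this gives one buffer of 8192 bytes at 1024 frames - matching his malloc - while the deinterleaved equivalent (4 bytes per frame per channel) gives two buffers of 4096 bytes each.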
Bill
On Dec 1, 2008, at 7:30 PM, David Preece wrote:
Hi,
I'm trying to feed samples from a file into CASpectralProcessor and
having no luck. I started by writing a simple "extract the samples"
app using the ExtAudioFile API. That declared a client data format
of 44.1k, LinearPCM, floating point, 2 channels per frame, 32 bits
per channel and 1 frame per packet. By setting this stream
description on both the source file and a wave file created through
ExtAudioFileCreateNew I was able to transcode from one file to the
next by looping round an ExtAudioFileRead, ExtAudioFileWrite pair.
For that I created a single AudioBuffer using an AudioBufferList of
one buffer with two channels, 1024 frames at once and a malloc of
8192 bytes.
I'm now trying to use this same code to feed a CASpectralProcessor
via the ProcessForwards method call. However, I'm getting a crash in
CASpectralProcessor::CopyInput on this loop:
for (UInt32 i = 0; i < mNumChannels; ++i) {
    memcpy(mChannels[i].mInputBuf + mInputPos,
           inInput->mBuffers[i].mData, numBytes);
}
Here inInput is an AudioBufferList, and since mNumChannels == 2 the
loop seems to be written with the assumption that I've passed two
separate AudioBuffers, one for each channel.
So, have I been inadvertently copying from/to using this interleaved
audio I keep hearing about but not really understanding? Do I get
non-interleaved audio by creating two separate audio buffers (under
the auspices of just one AudioBufferList) and setting
mChannelsPerFrame=1? Would this likely fix my problem? Does the
CASpectralProcessor only work with non-interleaved audio?
I *will* get there :)
TIA,
Dave
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden