Re: old mtcoreaudio code doesn't work now
- Subject: Re: old mtcoreaudio code doesn't work now
- From: dudley ackerman <email@hidden>
- Date: Wed, 9 Jan 2008 11:27:17 -0800
On Jan 8, 2008, at 11:57 PM, Michael Thornburgh wrote:
On Jan 8, 2008, at 4:08 PM, dudley ackerman wrote:
On Jan 8, 2008, at 3:32 PM, dudley ackerman wrote:
<snip>
is it possible that we used to be able to specify that we wanted data
as 16-bit frames, and that we can't do that in the latest version of
mtcoreaudio?
we used this setup as the app-side specification for the source of
output and the destination of input:
AudioStreamBasicDescription shtoomDescription;
shtoomDescription.mSampleRate = [self sampleRate];
shtoomDescription.mFormatID = kAudioFormatLinearPCM;
shtoomDescription.mFormatFlags = kLinearPCMFormatFlagIsPacked |
    kLinearPCMFormatFlagIsSignedInteger |
    kLinearPCMFormatFlagIsBigEndian |
    kAudioFormatFlagIsNonInterleaved;
shtoomDescription.mBytesPerFrame = sizeof(SInt16);
shtoomDescription.mFramesPerPacket = 1;
shtoomDescription.mBytesPerPacket = sizeof(SInt16);
shtoomDescription.mChannelsPerFrame = 1;
shtoomDescription.mBitsPerChannel = 16;
shtoomDescription.mReserved = 0;
do i now have to expect Float32 frames and convert them to 16-bit
integer frames in my own code?
i'm sure that is my problem -- i just had to stare at the code
long enough to see it.
ok, so maybe i am not so sure.
the old code used mBytesPerFrame where the new code just uses '2'.
so, what is the meaning of this snippet in MTConversionBuffer.m:
srcFrames = ((srcFrames * 2) / srcChans) / sizeof(Float32);
dstFrames = ((dstFrames * 2) / dstChans) / sizeof(Float32);
what is Float32 in the conversion, and what is 16-bit?
the above doesn't actually appear in my code (either the current
release or previous one). furthermore, MTCoreAudio has always been
*only* Float32 everywhere that deals with samples.
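for reference, a Float32 version of that same mono, packed,
non-interleaved description would look roughly like this. it's only a
sketch -- the function and variable names are mine, not anything
MTCoreAudio asks for:

#include <CoreAudio/CoreAudioTypes.h>

// sketch only: the Float32 equivalent of the 16-bit description above
// (mono, packed, non-interleaved).
static AudioStreamBasicDescription float32MonoDescription ( Float64 sampleRate )
{
	AudioStreamBasicDescription desc;
	desc.mSampleRate       = sampleRate;
	desc.mFormatID         = kAudioFormatLinearPCM;
	desc.mFormatFlags      = kAudioFormatFlagIsFloat |
	                         kAudioFormatFlagIsPacked |
	                         kAudioFormatFlagIsNonInterleaved;
	desc.mBytesPerFrame    = sizeof(Float32);
	desc.mFramesPerPacket  = 1;
	desc.mBytesPerPacket   = sizeof(Float32);
	desc.mChannelsPerFrame = 1;
	desc.mBitsPerChannel   = 32;
	desc.mReserved         = 0;
	return desc;
}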
and while i am here, how about this snippet in the same method in
the old code:
BOOL srcInterleaved = !(srcDescription.mFormatFlags & kAudioFormatFlagIsNonInterleaved);
BOOL dstInterleaved = !(dstDescription.mFormatFlags & kAudioFormatFlagIsNonInterleaved);
audioBuffer = [[MTAudioBuffer alloc] initWithCapacityFrames:totalBufferFrames
                                                   channels:srcChans
                                                interleaved:srcInterleaved];
now, in the latest version of mtcoreaudio i see:
conversionChannels = MIN ( srcChans, dstChans );
audioBuffer = [[MTAudioBuffer alloc] initWithCapacityFrames:totalBufferFrames
                                                   channels:conversionChannels];
doesn't interleaving matter anymore?
this also doesn't appear in either the current or previous release
of MTCoreAudio. however, interleaving doesn't matter* -- the right
thing will happen when AudioBufferLists are copied from or to the
buffers.
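to make the distinction concrete, here is a sketch (not framework
code -- the function names are mine) of the two AudioBufferList shapes
for stereo Float32 data; the copy routines handle either shape:

#include <CoreAudio/CoreAudioTypes.h>

// non-interleaved: one AudioBuffer per channel.  the caller must have
// allocated abl with room for two AudioBuffers.
static void fillNonInterleaved ( AudioBufferList * abl, Float32 * left, Float32 * right, UInt32 frames )
{
	abl->mNumberBuffers = 2;
	abl->mBuffers[0].mNumberChannels = 1;
	abl->mBuffers[0].mDataByteSize   = frames * sizeof(Float32);
	abl->mBuffers[0].mData           = left;
	abl->mBuffers[1].mNumberChannels = 1;
	abl->mBuffers[1].mDataByteSize   = frames * sizeof(Float32);
	abl->mBuffers[1].mData           = right;
}

// interleaved: a single AudioBuffer with samples alternating L R L R ...
static void fillInterleaved ( AudioBufferList * abl, Float32 * lrlr, UInt32 frames )
{
	abl->mNumberBuffers = 1;
	abl->mBuffers[0].mNumberChannels = 2;
	abl->mBuffers[0].mDataByteSize   = frames * 2 * sizeof(Float32);
	abl->mBuffers[0].mData           = lrlr;
}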
if you're getting static, trying to use integer samples instead of
Float32 would definitely be a cause.
* it doesn't matter (except possibly for performance when there are
mismatches) in the code i actually released. however, i have no
idea how the code you're using, as modified by others in ways
unknown to me, will behave.
-mike
please, mike, don't be angry -- i am trying my best to get the code
working. i did make a change to MTConversionBuffer.m, trying to get the
params to match up with what we had on tiger with an older version of
your code. i have since reverted the code to its original state.
i see now that what i need is to get 16-bit data out, but i don't know
how to do that within MTConversionBuffer. is it possible? if so, how?
or do i have to do that conversion in my own code?
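in case it helps to be concrete, this is roughly the conversion i
imagine i'd have to write myself if the framework can't do it -- just a
sketch, with the names and the scaling factor being my own guesses:

#include <CoreAudio/CoreAudioTypes.h>

// clip each Float32 sample to [-1, 1] and scale it into a signed
// 16-bit integer.
static void convertFloat32ToSInt16 ( const Float32 * src, SInt16 * dst, UInt32 count )
{
	UInt32 i;
	for ( i = 0; i < count; i++ )
	{
		Float32 sample = src[i];
		if ( sample >  1.0f ) sample =  1.0f;
		if ( sample < -1.0f ) sample = -1.0f;
		dst[i] = (SInt16)( sample * 32767.0f );
	}
}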