Re: old mtcoreaudio code doesn't work now
- Subject: Re: old mtcoreaudio code doesn't work now
- From: dudley ackerman <email@hidden>
- Date: Tue, 8 Jan 2008 15:32:24 -0800
On Jan 7, 2008, at 11:31 PM, Michael Thornburgh wrote:
On Jan 7, 2008, at 11:25 AM, dudley ackerman wrote:
On Jan 6, 2008, at 4:40 PM, Michael Thornburgh wrote:
On Jan 6, 2008, at 4:08 PM, dudley ackerman wrote:
On Jan 6, 2008, at 12:41 PM, Michael Thornburgh wrote:
<snip>
the buffer sizes are probably also too small. they need to be big enough to hold *at least* one IOProc/IOTarget callback buffer's worth of samples, or you'll over- or under-flow the buffer during the input or output side's action. you want the buffers to be small to keep latency down, but IOProc dispatch jitter will cause over- or under-flow unless there's some slop. plus, if either end isn't real-time (like recording to, or playing back from, a buffer or file), you have to be reading samples out fast enough to keep the input from overflowing, or feeding samples in fast enough to keep the output from underflowing.
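roughly, in code (the names and the 2x slop factor here are just an illustration, not anything from mtcoreaudio itself):

UInt32 ioProcFrames = [device deviceMaxVariableBufferSizeInFrames]; // one callback's worth
UInt32 ringFrames   = ioProcFrames * 2;                             // headroom for dispatch jitter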
hmm. so you don't think the code that computes the buffer size is
good?
ok - i cleaned that all up so only 1 converter is created on
record and 1 on playback.
and i cleaned up the names, so the buffers and converters for record are named 'in', and those for playback are named 'out'.
given that you're recording to (and later playing from) a non-realtime buffer, i'd suggest making the MTConversionBuffers large-ish, like a second or so, to give the non-realtime consumer (record) or producer (playout) enough time to get their work done in the presence of realtime work also happening (the actual audio threads).
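the easy way to get that with the MTConversionBuffer initializer is the minimumBufferSeconds: argument -- sketch only, with illustrative names:

converter = [[MTConversionBuffer alloc]
    initWithSourceSampleRate: deviceRate
    channels: deviceChannels
    bufferFrames: ioProcFrames          // at least one callback's worth
    destinationSampleRate: 8000.0
    channels: 1
    bufferFrames: outFrames
    minimumBufferSeconds: 1.0           // ~a second of slop for the non-realtime side
];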
ok. now i am recording, but it is giving me too much data -- over
100k/sec.
so, this is what i have for my recording converter:
Float64 sr = [self sampleRate];                  // 8000
Float64 nsr = [inputDevice nominalSampleRate];   // 44100
Float64 inScale = sr / nsr;                      // 0.18140589569160998
unsigned sbs = [inputDevice deviceMaxVariableBufferSizeInFrames];  // 512
unsigned inBufferSize = ceil(inScale * sbs * SR_ERROR_ALLOWANCE);  // 94
UInt32 dbz = [inputDevice deviceMaxVariableBufferSizeInFrames];    // 512

if (inBuffer) MTAudioBufferListDispose(inBuffer);
inBuffer = MTAudioBufferListNew(1, inBufferSize, NO);

[inConverter release];
inConverter = [[MTConversionBuffer alloc]
    initWithSourceSampleRate: nsr
    channels: [inputDevice channelsForDirection: kMTCoreAudioDeviceRecordDirection]
    bufferFrames: inBufferSize
    destinationSampleRate: sr
    channels: 1
    bufferFrames: ceil(dbz * SR_ERROR_ALLOWANCE)
    minimumBufferSeconds: 0
];
so, what would you suggest i change those values to?
first of all, inBufferSize is too small -- if [inputDevice deviceMaxVariableBufferSizeInFrames] is 512, then inBufferSize *must* be at least that big, or you risk not having enough space in the MTConversionBuffer to hold even one IOProc's worth of samples. the computation you're doing for inBufferSize is probably what you'd actually want to do for dbz (output bufferFrames). this is probably why it works at all right now: since dbz is currently 512, output bufferFrames represents 512/8000 *seconds* (64ms). that means the MTConversionBuffer is actually no shorter than 64ms, or 2822 frames in input-side units, which is big enough for one input IOProc's buffer in spite of what you told it.
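concretely, something like this (just a sketch, untested -- variable names follow your code above):

unsigned sbs = [inputDevice deviceMaxVariableBufferSizeInFrames];       // 512
unsigned inBufferFrames  = sbs;                                         // at least one IOProc's worth -- not 94
unsigned outBufferFrames = ceil((sr / nsr) * sbs * SR_ERROR_ALLOWANCE); // ~94 -- the scaled computation belongs on the 8kHz output side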
secondly, 8000 samples per second, with one channel, 4 bytes per
sample (Float32), is 32,000 bytes/sec or 256,000 bits/second. how
much data are you actually getting out?
finally, although i have no idea what you're using inBuffer for,
it's almost certainly too small to be of much use, being only 1
channel wide and 94 frames long.
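if you do keep inBuffer around, at minimum size it to hold one full callback from the device -- sketch:

if (inBuffer) MTAudioBufferListDispose(inBuffer);
inBuffer = MTAudioBufferListNew([inputDevice channelsForDirection: kMTCoreAudioDeviceRecordDirection],
                                [inputDevice deviceMaxVariableBufferSizeInFrames], NO);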
-mike
is it possible that we used to be able to specify that we wanted data as 16-bit frames, and we can't do that in the latest version of mtcoreaudio?
we used this setup for the app-side specification -- the source for output and the destination for input:
AudioStreamBasicDescription shtoomDescription;
shtoomDescription.mSampleRate       = [self sampleRate];
shtoomDescription.mFormatID         = kAudioFormatLinearPCM;
shtoomDescription.mFormatFlags      = kLinearPCMFormatFlagIsPacked
                                    | kLinearPCMFormatFlagIsSignedInteger
                                    | kLinearPCMFormatFlagIsBigEndian
                                    | kAudioFormatFlagIsNonInterleaved;
shtoomDescription.mBytesPerFrame    = sizeof(SInt16);
shtoomDescription.mFramesPerPacket  = 1;
shtoomDescription.mBytesPerPacket   = sizeof(SInt16);
shtoomDescription.mChannelsPerFrame = 1;
shtoomDescription.mBitsPerChannel   = 16;
shtoomDescription.mReserved         = 0;
do i now have to expect Float32 frames and convert them to 16-bit integer frames in my own code?
i'm sure that is my problem -- i just had to stare at the code long
enough to see it.
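the conversion itself is simple enough -- a rough, untested sketch, assuming i get non-interleaved Float32 in and want big-endian SInt16 out to match the ASBD above (function name and arguments are mine, not mtcoreaudio's):

#include <CoreFoundation/CoreFoundation.h> // CFSwapInt16HostToBig and the basic types

// clamp each Float32 sample to [-1, 1], scale to the 16-bit range,
// and byte-swap to big-endian to match kLinearPCMFormatFlagIsBigEndian
static void ConvertFloat32ToSInt16BE(const Float32 *src, SInt16 *dst, UInt32 frames)
{
    for (UInt32 i = 0; i < frames; i++) {
        Float32 s = src[i];
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;
        dst[i] = (SInt16)CFSwapInt16HostToBig((UInt16)(SInt16)(s * 32767.0f));
    }
}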