Re: Polling Callback, Threads [Attn: Michael Thornburgh]
- Subject: Re: Polling Callback, Threads [Attn: Michael Thornburgh]
- From: Michael Thornburgh <email@hidden>
- Date: Wed, 10 Dec 2003 16:16:49 -0800
from your email in november (both on- and off-list), it sounds like
you're using objective-c (presumably that means cocoa) for your app,
and are using the HAL C apis to deal with the audio stuff (IOProcs,
default devices, etc).
doing really heavy lifting in your IOProc can be rude, since the high
priority of the IOProc's thread can keep almost everything else on the
system from running while you're doing your DSP. and assuming that
your app listens to incoming audio, does some DSP, and displays the
results in real-time, it would be better to skip some DSP and display
updates if the system starts to bog down. so i think
a separate thread for your DSP is a good idea. if you combine that
with a small buffer (no larger than the maximum latency you want
between when sound is recorded and when the DSP results are displayed)
to get samples from the IOProc to the DSP and a discard-on-buffer-full
policy, you're set.
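for concreteness, here's one way you might size that buffer. this is
just a sketch; the 100 ms figure is an arbitrary placeholder, and
rather than hard-coding the sample rate you'd query the device's
kAudioDevicePropertyNominalSampleRate:

    // hypothetical sizing: cap the buffer at the latency you can tolerate
    Float64 sampleRate = 44100.0;    // placeholder; query the device's nominal rate
    Float64 maxLatencySeconds = 0.1; // say, 100 ms between capture and display
    unsigned lengthInFrames = (unsigned)(maxLatencySeconds * sampleRate);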
you can create a new thread with +[NSThread
detachNewThreadSelector:toTarget:withObject:]. you should also read
the programming topic "Multithreading", linked from NSThread's
reference page.
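for example, spawning the DSP thread might look like this (assuming a
-dspThreadMain: method like the one sketched further down):

    [NSThread detachNewThreadSelector:@selector(dspThreadMain:)
                             toTarget:self
                           withObject:nil];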
as for implementing the buffer, everyone on this list has their own
favorite thread-safe buffer scheme and their own opinions on locking
during the IOProc.
personally, it doesn't bother me too much to take brief locks in the
IOProc, even though i know it's potentially dangerous. for a simple
app, it shouldn't be a problem. i would use my (not exactly properly
named) MTCircularQueue class, included in MTCoreAudio.framework. that
class doesn't depend on anything else in the framework, and its header
file and implementation can just be copied into your project and used
without bringing that whole framework along. that's the one class in
there that doesn't have any documentation, though! :) however, it's
used as the foundation of MTConversionBuffer in AudioMonitor.
here's the general idea, if you were using MTCircularQueue:

unsigned lengthInFrames = (however you determine how long the buffer
    should be in frames);
unsigned channels = (however you determine the channels for your app);
Float32 *myTempBuf = malloc(sizeof(Float32) * channels * lengthInFrames);
MTCircularQueue *myQueue = [[MTCircularQueue alloc]
    initWithLength:(lengthInFrames * channels * sizeof(Float32))];

...

myIOProc(...)
{
    unsigned framesThisBuffer =
        (inInputData->mBuffers[0].mDataByteSize /
         inInputData->mBuffers[0].mNumberChannels) / sizeof(Float32);

    // if you want. see my audio_copy() for how you might do this.
    // you could also decide you only care about the first stream
    // and just use that buffer.
    interleaveStreamsToOneBuffer(inInputData, myTempBuf, framesThisBuffer);

    [myQueue writeBytesWithoutBlockingFrom:myTempBuf
        length:(framesThisBuffer * channels * sizeof(Float32))];
}
...
- (void)dspThreadMain:(id)anObject
{
    unsigned dspBufLenFrames = (XXX);
    unsigned dspBufLenBytes = dspBufLenFrames * channels * sizeof(Float32);
    Float32 *dspBuf = malloc(dspBufLenBytes);
    ...
    while (1)
    {
        unsigned bytesRead = [myQueue readBytesTo:dspBuf
            length:dspBufLenBytes];
        unsigned framesRead = (bytesRead / channels) / sizeof(Float32);

        [dspObj doMyDSPAndSignalMainThreadForUIStuffBuffer:dspBuf
                                                    frames:framesRead];
    }
}
the -writeBytesWithoutBlockingFrom:length: method will discard data
once the buffer is full, and return immediately. the
-readBytesTo:length: method will wait until that many bytes are
available in the buffer before returning. this is implemented with
NSConditionLocks, which you could do on your own too.
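in case you'd rather roll your own, here's a minimal sketch of that
kind of buffer. to be clear, this is not MTCircularQueue's actual
code; it's just one way to get a non-blocking write and a blocking
read out of an NSConditionLock, assuming a single reader and a single
writer. also note that growing an NSMutableData can allocate memory,
which you'd want to avoid in an IOProc for anything serious (a
preallocated ring buffer is better):

#import <Foundation/Foundation.h>
#include <string.h>

enum { kQueueEmpty = 0, kQueueHasBytes = 1 };

@interface SimpleByteQueue : NSObject
{
    NSConditionLock *lock;
    NSMutableData *bytes;
    unsigned capacity;
}
- (id)initWithLength:(unsigned)length;
- (unsigned)writeBytesWithoutBlockingFrom:(const void *)buf length:(unsigned)len;
- (unsigned)readBytesTo:(void *)buf length:(unsigned)len;
@end

@implementation SimpleByteQueue

- (id)initWithLength:(unsigned)length
{
    if ((self = [super init]))
    {
        capacity = length;
        bytes = [[NSMutableData alloc] init];
        lock = [[NSConditionLock alloc] initWithCondition:kQueueEmpty];
    }
    return self;
}

// non-blocking write: take what fits, silently discard the rest
- (unsigned)writeBytesWithoutBlockingFrom:(const void *)buf length:(unsigned)len
{
    unsigned accepted;

    [lock lock];
    accepted = MIN(len, capacity - (unsigned)[bytes length]);
    [bytes appendBytes:buf length:accepted];
    [lock unlockWithCondition:([bytes length] ? kQueueHasBytes : kQueueEmpty)];
    return accepted;
}

// blocking read: loop until the requested number of bytes has arrived
- (unsigned)readBytesTo:(void *)buf length:(unsigned)len
{
    unsigned done = 0;
    unsigned chunk;

    while (done < len)
    {
        [lock lockWhenCondition:kQueueHasBytes]; // sleeps until a writer signals
        chunk = MIN(len - done, (unsigned)[bytes length]);
        memcpy((char *)buf + done, [bytes bytes], chunk);
        [bytes replaceBytesInRange:NSMakeRange(0, chunk) withBytes:NULL length:0];
        done += chunk;
        [lock unlockWithCondition:([bytes length] ? kQueueHasBytes : kQueueEmpty)];
    }
    return done;
}

- (void)dealloc
{
    [lock release];
    [bytes release];
    [super dealloc];
}

@end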
cocoa UI elements should only be updated from the main thread. you can
use -[NSObject performSelectorOnMainThread:withObject:waitUntilDone:]
to communicate between your DSP thread and the main thread.
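for example (the -updateDisplay: selector and the results object here
are hypothetical; substitute whatever your UI actually needs):

    // hand results to the main thread without blocking the DSP thread
    [self performSelectorOnMainThread:@selector(updateDisplay:)
                           withObject:results
                        waitUntilDone:NO];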
there are nearly as many opinions on how this stuff should be done as
there are members of this list. :)
-mike
ps. i probably wouldn't have gone on this long if i hadn't been called
out on the subject line.
On Dec 10, 2003, at 2:24 PM, Daniel Todd Currie wrote:
So I've created this really cool tuner application that uses CoreAudio
to simply get audio from the default audio device. I have naively
acquired my buffer data by sending a message to my analyzer class
inside the IOProc callback function. Needless to say, system
performance is quite poor while my app is running. ;)
I've been told that I will need to set up a new thread to communicate
with the CoreAudio thread and to grab the buffers as they are filled.
I don't quite know how to do this, but I would really hate to see this
project go to waste (it is a freeware project, if that might encourage
your charity).
Is there perhaps a framework available somewhere that would make this
problem easier to deal with? I would have expected that there would
be some way to streamline the process of acquiring audio data in a
desirable manner, since every CoreAudio programmer must do it. Any
other tips would be greatly appreciated; a brief outline of how I
should go about getting my buffer data would be invaluable. Thanks in
advance...
Daniel Currie (crossing fingers)