Specifying buffers for an AudioUnit inputProc
- Subject: Specifying buffers for an AudioUnit inputProc
- From: James Turner <email@hidden>
- Date: Fri, 1 Oct 2004 11:45:24 +0100
(Apologies for what I expect are newbie questions, but I am foundering
in the documentation)
I am writing a program that needs to do cross-platform audio, and am
currently working on the CoreAudio implementation for OS X. The
platform-independent portions of the code produce audio streams (in
various formats) by different methods (synthesis, disk files and so
on). The API I have chosen for these platform-independent producers of
audio data is actually similar to AudioUnits: they implement a
'render' method which writes to a supplied buffer, and the
platform-specific driver code uses this to pull data out on demand.
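To make that concrete, the producer interface looks roughly like this
(the names SampleSource and render are my own, purely illustrative,
not from any SDK):

    #include <cstddef>

    // Pull-model source: the platform-specific driver asks it for
    // frames on demand.
    class SampleSource {
    public:
        virtual ~SampleSource() {}
        // Write 'frameCount' frames of interleaved float samples
        // into 'buffer'.
        virtual void render(float *buffer, std::size_t frameCount) = 0;
    };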
For the CoreAudio implementation, I am using the DefaultOutputUnit
(based on the example code in the SDK). The example uses an InputProc
to fill the AudioBufferList with sample data.
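For reference, the callback hookup I lifted from the sample looks
roughly like this (unit creation, stream-format setup and error
checking omitted):

    #include <AudioUnit/AudioUnit.h>

    // AURenderCallback the output unit calls whenever it needs audio.
    static OSStatus MyInputProc(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
    {
        // Fill each ioData->mBuffers[i].mData with inNumberFrames frames.
        return noErr;
    }

    // Attach the callback to the (already opened) default output unit.
    static OSStatus InstallInputProc(AudioUnit outputUnit, void *refCon)
    {
        AURenderCallbackStruct input;
        input.inputProc = MyInputProc;
        input.inputProcRefCon = refCon;
        return AudioUnitSetProperty(outputUnit,
                                    kAudioUnitProperty_SetRenderCallback,
                                    kAudioUnitScope_Input, 0,
                                    &input, sizeof(input));
    }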
I have various candidate ways of moving my sample data around, but I'm
not sure which approaches work:
1 - In the InputProc for each DefaultOutputUnit, call the render method
on my platform-independent data source. This is efficient because the
source could write directly to the AudioBufferList I'm supplied with in
the InputProc, but I don't know in what thread context the InputProc is
called, or what kind of activity/latency requirements exist. E.g., is
doing file I/O possible or safe? How about memory allocation?
2 - Run the platform-independent sources in a helper thread (where they
can block and do I/O to their heart's content). Have them render into
some temporary buffers, and then, in the DefaultOutputUnit's InputProc,
memcpy() those buffers into the supplied AudioBufferList. This seems
like it will definitely work, but it introduces another copying and
buffering step, and more need for synchronisation of the intermediate
buffers (I sketch this ring-buffer arrangement below, after the list).
3 - Same as option two, but instead of memcpy()ing the buffers produced
by the worker thread, just pass them directly to the DefaultOutputUnit,
replacing the AudioBufferList it supplies. In this case, I would need
some way to know when the unit is 'done' with a buffer so I can re-use
it in the worker thread.
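To make option two concrete, what I have in mind is a simple
single-producer/single-consumer ring buffer: the worker thread writes
rendered samples into it, and the InputProc just copies out of it
without blocking or allocating. A rough sketch, assuming float samples
and exactly one reader plus one writer:

    #include <atomic>
    #include <cstring>
    #include <vector>

    // Minimal single-producer / single-consumer sample FIFO.
    // The worker thread calls write(); the InputProc calls read().
    // Neither side blocks or allocates, so the callback stays
    // real-time safe. Positions are monotonic counters; use a
    // power-of-two capacity in practice so the modulo indexing
    // stays consistent.
    class RingBuffer {
    public:
        explicit RingBuffer(std::size_t capacity)
            : mData(capacity), mReadPos(0), mWritePos(0) {}

        // Worker thread: push up to 'count' samples; returns how many
        // were accepted.
        std::size_t write(const float *src, std::size_t count) {
            std::size_t r = mReadPos.load(std::memory_order_acquire);
            std::size_t w = mWritePos.load(std::memory_order_relaxed);
            std::size_t space = mData.size() - (w - r);
            std::size_t n = count < space ? count : space;
            for (std::size_t i = 0; i < n; ++i)
                mData[(w + i) % mData.size()] = src[i];
            mWritePos.store(w + n, std::memory_order_release);
            return n;
        }

        // InputProc: pop up to 'count' samples, zero-filling any
        // shortfall (an underrun becomes silence, not a blocked callback).
        std::size_t read(float *dst, std::size_t count) {
            std::size_t w = mWritePos.load(std::memory_order_acquire);
            std::size_t r = mReadPos.load(std::memory_order_relaxed);
            std::size_t avail = w - r;
            std::size_t n = count < avail ? count : avail;
            for (std::size_t i = 0; i < n; ++i)
                dst[i] = mData[(r + i) % mData.size()];
            std::memset(dst + n, 0, (count - n) * sizeof(float));
            mReadPos.store(r + n, std::memory_order_release);
            return n;
        }

    private:
        std::vector<float> mData;
        std::atomic<std::size_t> mReadPos, mWritePos;
    };

The InputProc would then call read() once per AudioBuffer in the
supplied AudioBufferList, while the worker thread keeps the FIFO
topped up with write().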
I've noticed the 'SetExternalBuffer' property on AudioUnits, but also
that the RenderProc flags can include options about how the supplied
buffers are to be used, so I'm really unsure what approaches are wise.
If someone could lead me out of this maze, that would be much
appreciated!
Many thanks,
James Turner
--
That which does not kill me has poor aim