Re: Is is possible to have an AUGraph output to memory?
- Subject: Re: Is is possible to have an AUGraph output to memory?
- From: Brian Willoughby <email@hidden>
- Date: Fri, 11 May 2012 14:09:10 -0700
Dave, you're focused on a CoreAudio decoder, but there are some
basic AudioUnit rules that you should learn, because they affect all
AudioUnits and hosts.
For one, an AudioUnit never reads a file. It is the job of the
AUHost application to read files. The AUGraph deals with audio
sample buffers in memory exclusively, because that is what AudioUnits
are designed to handle on input and output. The only exception is
the AUAudioFilePlayer, which will read a file and send the audio
samples to its output. In an AUHost like Logic, all file I/O is
handled by the application, not the AUGraph (assuming Logic uses an
AUGraph to host AudioUnits).
On that note, you probably could use AUAudioFilePlayer in your
AUGraph to feed your decoder.
By the way, the definition of a decoder is that it is an AudioUnit
that generates LPCM, so that should answer part of your question. An
encoder takes LPCM on input. Encoders and decoders are the only
AudioUnits which work with non-audio data (the data-compressed
version of the audio is not strictly legal AudioUnit stream data,
which is why it must be converted to/from LPCM). AUConverter is
very similar.
Your pseudocode seems to be missing the API calls that would link the
output of the decoder to the input of the default output unit.
Because an AUGraph can be an arbitrary tree, there's no way for
these connections to happen automatically. You must explicitly patch
the virtual audio cables.
There are examples for most of this, but I suppose they could be out
of date by now.
Brian Willoughby
Sound Consulting
On May 11, 2012, at 11:01, Dave Camp wrote:
I'm updating a set of core audio decoders that have not been
touched in a long time. For reasons unknown to me, the decoder was
written to take in the raw file data, parse the file data into
packets, and finally decode the packets into LPCM. Since the
decoder shows up as a decoder and not a file component, Core Audio
can't actually use it to read the native file format it's written
for. Presumably the host app would have loaded the file manually
and handed the file data to the component via the render callback.
Unfortunately, no one can tell me why it was written this way or
how it was tested or exercised, so I'm starting from scratch here.
My first goal was to verify that the existing code worked as
intended. I pulled together some code that loaded the file into an
NSData, and set up an AUGraph with the decoder and the default output
unit. However, when I call AUGraphInitialize(), the decoder
crashes. As a test, I instead added the Apple ima4 decoder and it
still crashes, so I assume I'm doing something wrong in my test
code and need to take some smaller steps before moving on. I
realize that the ima4 decoder wouldn't be able to parse my data, I
just wanted to see if it got any further than my decoder, which it
does not.
My AUGraph looks like the following in pseudocode:
NewAUGraph()
AUGraphAddNode(kAudioUnitType_Output,
kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple)
AUGraphAddNode(kAudioDecoderComponentType, 'fooo', 'baar') //
loading my decoder component
AUGraphOpen()
set the render proc for the decoder
set the output format of the decoder to LPCM, 44.1, 16 bpc, native
signed packed
AUGraphInitialize() // crash in ::SetCurrentInputFormat
(AudioStreamBasicDescription const&) of whatever decoder I'm trying
to use
While the AU call that triggers the crash can vary, it always
crashes in ::SetCurrentInputFormat(AudioStreamBasicDescription
const&) of whatever decoder I'm trying to use. Setting a breakpoint
in my decoder, it looks like the input and output format pointers
it's extracting from the component parameters are garbage.
I assume I'm missing some fundamental concept here and not setting
up the components correctly, but I'm not getting any traction on
what that would be.
Dave
On May 11, 2012, at 12:09 AM, Heinrich Fink wrote:
You are right, an audio graph has to have an output unit set in
order to start the graph. In your case, you could add the
kAudioUnitSubType_GenericOutput output unit, the simplest of all
output units. This audio unit is best suited for offline
processing of the audio graph where you don’t want to output audio
to an actual device. You will have to issue render calls manually,
though, since no device is requesting samples for you. Also note
that the kAudioUnitSubType_GenericOutput should be able to deal
with simple PCM audio format conversions for you, i.e. depending
on your needs, you might not have to use a converter unit
additionally.
However, with the above said, I am not sure if using an audio
graph is really the best tool for what you are trying to do;
running the graph in "offline" mode can be very tricky. You would
have to share more details about your use-case here.
best regards,
Heinrich Fink
On May 11, 2012, at 02:15, Dave Camp wrote:
I'm working on a test harness for some Mac OS Core Audio codecs
and file components we are working on. In a nutshell, I'd like to
be able to run my test data through an AUGraph, but instead of
outputting to a device, I'd like to save it to a buffer for
offline analysis.
Is there a way to do this? I've set up a simple graph with a
single format converter that converts between two flavors of
LPCM, and set the render callback on the input of the converter,
but when I start the graph I get a kAUGraphErr_NodeNotFound,
presumably because there is no output node.
Will I need to create my own custom output component (something
like the RAW sample file component) to do this? Or is there a way
to associate a callback with the output of the converter that
will be called with the converted samples?
Coreaudio-api mailing list (email@hidden)