Re: AU question (Part 1)
  • Subject: Re: AU question (Part 1)
  • From: Bill Stewart <email@hidden>
  • Date: Fri, 3 Oct 2003 11:03:24 -0700

We've described an API (and an example implementation) of an offline AU type in Panther. We will be talking to hosts about supporting this type of AU as well, so you might want to have a look at that and see if this meets your needs...

Bill

Here's Part 1 of a doc that provides some details (I'll split this up into two parts so it gets past the list-gate-keepers)
/*
These are the additions for the <AudioUnit/...> headers added
for the offline render unit
*/


// In AUComponent.h
kAudioUnitType_OfflineEffect = FOUR_CHAR_CODE('auol')

// offline render flags
kAudioOfflineUnitRenderAction_Preflight = (1 << 5)
kAudioOfflineUnitRenderAction_Render = (1 << 6)
kAudioOfflineUnitRenderAction_Complete = (1 << 7)

// offline errors
kAudioUnitErr_InvalidOfflineRender = -10848
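For reference, the constants above in compilable form. The typedefs and the MY_FOUR_CHAR_CODE macro are local stand-ins written for this sketch (the real FOUR_CHAR_CODE and declarations live in the CoreAudio/AudioUnit headers):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the CoreAudio typedefs; the real declarations
   are in <AudioUnit/AUComponent.h>. */
typedef uint32_t UInt32;
typedef uint32_t AudioUnitRenderActionFlags;

/* FOUR_CHAR_CODE packs four characters big-endian into a UInt32. */
#define MY_FOUR_CHAR_CODE(a, b, c, d) \
    ((UInt32)(a) << 24 | (UInt32)(b) << 16 | (UInt32)(c) << 8 | (UInt32)(d))

enum {
    kAudioUnitType_OfflineEffect            = MY_FOUR_CHAR_CODE('a','u','o','l'),
    /* distinct bits in ioRenderActionFlags */
    kAudioOfflineUnitRenderAction_Preflight = (1 << 5),
    kAudioOfflineUnitRenderAction_Render    = (1 << 6),
    kAudioOfflineUnitRenderAction_Complete  = (1 << 7)
};
```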


/*
Latency and Tail handling:
An offline unit should not return latent samples (i.e. zero samples at the start
caused by latency in its processing). It should also not expect the caller to
handle any tail characteristics of its processing. The caller of an offline unit
does NOT know how to deal with either of these properties, so the AU should:
(1) Absorb any latent samples at the beginning of its processing
(2) Continue to produce output until the tail of its processing has completed

An offline unit can feasibly require any number of input samples for a given
number of output samples it is asked to produce at any time. In order to simplify
the handling of buffering, the offline unit is restricted to asking its input for
no more samples than the MaxFramesPerSlice property allows. If it requires more
input, it can simply pull again before returning the output for that slice.
*/
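The buffering rule above can be sketched in plain C. GatherInput and PullAll are hypothetical names invented for this sketch, and the input callback is stubbed to always deliver what is asked:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: an offline unit that needs `needed` input frames
   for one output slice pulls its input repeatedly, never asking for
   more than maxFramesPerSlice frames at once. */
static uint32_t GatherInput(uint32_t needed, uint32_t maxFramesPerSlice,
                            uint32_t (*pullInput)(uint32_t requestedFrames))
{
    uint32_t got = 0;
    while (got < needed) {
        uint32_t ask = needed - got;
        if (ask > maxFramesPerSlice)
            ask = maxFramesPerSlice;         /* respect MaxFramesPerSlice */
        uint32_t delivered = pullInput(ask); /* pull again before returning output */
        if (delivered == 0)
            break;                           /* input exhausted */
        got += delivered;
    }
    return got;
}

/* Stub input callback that always delivers the requested frame count. */
static uint32_t PullAll(uint32_t requestedFrames) { return requestedFrames; }
```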

/*
Channelization and other general rendering considerations

Channelization:
An offline render unit's channel valence handling operates on
the same assumptions as an effect unit.

If the units does not support the kAudioUnitProperty_SupportedNumChannels
property, then it is assumed that the channel numbers on input and output
must match.



Basic sketch of how this works:

Host - sets the property specifying the number of input samples to process
Call AudioUnitRender:
(1) kAudioOfflineUnitRenderAction_Preflight
until kAudioUnitStatus_OfflineRenderComplete is set in the ioRenderActionFlags
- no data is returned in the audio buffer list
(2) kAudioOfflineUnitRenderAction_Render
until kAudioUnitStatus_OfflineRenderComplete is set in the ioRenderActionFlags
- processed data is returned in the audio buffer list
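The two-phase loop above can be sketched as plain C. Everything here is a stand-in: StubRender is a hypothetical stub playing the role of AudioUnitRender for a unit with 1050 frames of output, and the completion bit used is the constant from the header additions above (the one the text also calls kAudioUnitStatus_OfflineRenderComplete):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t AudioUnitRenderActionFlags;
enum {
    kAudioOfflineUnitRenderAction_Preflight = (1 << 5),
    kAudioOfflineUnitRenderAction_Render    = (1 << 6),
    kAudioOfflineUnitRenderAction_Complete  = (1 << 7)
};

enum { kTotalFrames = 1050, kSliceFrames = 512 };

/* Hypothetical stub standing in for AudioUnitRender: it "produces"
   kTotalFrames frames and sets the Complete bit on the pull that
   reaches the end of its output. */
static int StubRender(AudioUnitRenderActionFlags *ioFlags,
                      double sampleTime, uint32_t numFrames)
{
    if (sampleTime + numFrames >= kTotalFrames)
        *ioFlags |= kAudioOfflineUnitRenderAction_Complete;
    return 0; /* noErr */
}

/* Drive one phase (preflight or render) until the unit reports
   completion, incrementing the timestamp exactly as the host must.
   Returns the mSampleTime of the final pull. */
static double DrivePhase(AudioUnitRenderActionFlags phase)
{
    double sampleTime = 0.0;
    for (;;) {
        AudioUnitRenderActionFlags flags = phase;
        if (StubRender(&flags, sampleTime, kSliceFrames) != 0)
            return -1.0; /* non-zero results must be handled */
        if (flags & kAudioOfflineUnitRenderAction_Complete)
            return sampleTime;
        sampleTime += kSliceFrames; /* host advances the timestamp */
    }
}
```

A real host would run this loop once with the Preflight flag (discarding the buffers) and then once with the Render flag, consuming the returned audio.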

When an offline render unit is called (in either mode) it will
call for input data. The host uses the sample count field in
the render callback to determine which sample the input callback should
start from (and how many samples it should return). These requests may not be
contiguous from one call to the next, and the host may call the input callback
more than once for a given slice of output samples if the offline renderer's
settings/operational logic require random access to the input samples.

When an offline render unit has both:
- completely read the input data it is required to process
- AND allowed for any tail of its processing to occur (like a reverb tail)
the AU will set the kAudioUnitStatus_OfflineRenderComplete bit in the ioRenderActionFlags.

When the host sees this flag set after calling AudioUnitRender, it can determine that it has reached
the end of the AU's processing. At that stage it needs to check the size of the
sample data (for the kAudioOfflineUnitRenderAction_Render case) returned in the
audio buffer list's:
mBuffers[0].mDataByteSize field

This field describes how many bytes are contained in the attached data field (mData);
the AU will reset that size field to describe how many valid bytes of sample data
there are in this last pull.


For example (say you are going to get 1050 samples back):

myTimeStamp.mSampleTime = 0;
AudioUnitRender (myOfflineUnit, myTimeStamp,... ,512, // this is the num frames to produce
// first call: AudioUnitRender returns noErr and 512 samples

// second call
myTimeStamp.mSampleTime = 512;
AudioUnitRender (myOfflineUnit, myTimeStamp,... ,512, // this is the num frames to produce
// second call: AudioUnitRender returns noErr and 512 samples

// third call
myTimeStamp.mSampleTime = 1024;
AudioUnitRender (myOfflineUnit, myTimeStamp,... ,512, // this is the num frames to produce
// third call: the AU sets kAudioUnitStatus_OfflineRenderComplete in ioRenderActionFlags
// check myBufferList.mBuffers[0].mDataByteSize -
// this tells you how many samples are contained in the last returned buffer
UInt32 numFrames = myBufferList.mBuffers[0].mDataByteSize / sizeof(Float32);
// so there are 26 valid samples in this last buffer that the host has to pull out

ASSUMPTIONS:
- this assumes that sizeof(Float32) is the size of one sample, which will
typically be the case
- it also assumes that the unit is being pulled for data in the de-interleaved
format, where the buffer list is constructed with numChannels AudioBuffers (one per channel)
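Under those same assumptions (Float32 samples, de-interleaved buffers), the final-buffer arithmetic looks like this. The structs are simplified local stand-ins for the real AudioBuffer/AudioBufferList from the CoreAudio headers:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the CoreAudio structs; de-interleaved means
   one AudioBuffer per channel in the list. */
typedef float Float32;
typedef struct {
    uint32_t mNumberChannels;  /* 1 per buffer when de-interleaved */
    uint32_t mDataByteSize;    /* the AU resets this on the last pull */
    void    *mData;
} AudioBuffer;
typedef struct {
    uint32_t    mNumberBuffers;
    AudioBuffer mBuffers[2];   /* fixed size here just for the sketch */
} AudioBufferList;

/* Once the completion flag is seen, this yields the number of valid
   frames delivered by the final pull. */
static uint32_t ValidFramesInLastPull(const AudioBufferList *abl)
{
    return abl->mBuffers[0].mDataByteSize / (uint32_t)sizeof(Float32);
}
```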

The host, as you can see, is responsible for incrementing the sample count of the time
stamp it passes to AudioUnitRender and for pulling this data out in an incremental and
contiguous manner on the output side.

As this is offline processing, only the sample count field need be valid in the AudioTimeStamp
(the host time field will certainly NOT be valid). Other time indicators, like SMPTE time,
can also be valid if the AU supports this.

In the Preflight stage, the AU may return with the kAudioUnitStatus_OfflineRenderComplete bit set
on the first pull if it requires no analysis stage or has already done one.

Any non-zero result codes from AudioUnitRender should be treated appropriately
*/


On Friday, October 3, 2003, at 04:41 AM, Wolfgang Schneider wrote:

Hi all,

I need to know one thing: is there a way for an AudioUnit
plugin to determine whether the host is doing offline
processing or not? For example, Logic uses offline
processing for its freeze-track feature, and my plugin
needs to know if that's the case.


thanks in advance,
Wolfgang
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.


-- mailto:email@hidden
tel: +1 408 974 4056

__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________
References:
  • AU question (From: "Wolfgang Schneider" <email@hidden>)
