Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- Subject: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- From: Heinrich Fink <email@hidden>
- Date: Mon, 21 Nov 2011 17:49:58 +0100
Hi,
I am currently designing an audio engine for a broadcast application based on the AudioUnit and AUGraph APIs. It's basically file-based audio input, a bit of routing plus some filtering effects. We might have to use a different output path than the usual (preferred) way of using the AUHal. This has been discussed previously here: http://lists.apple.com/archives/coreaudio-api/2011/Nov/msg00077.html - and is not the issue of my question. To quickly sum up the previous discussion: we get a callback from a broadcasting card's SDK to fill buffers with audio data (e.g. Blackmagic UltraStudio 3D). In order to make this work, we will not use the AUHal approach, but rather call the audio graph by ourselves.
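To make the setup concrete, here is a minimal sketch of what pulling the graph from the card's callback might look like. The names (`gHeadUnit`, `FillCardBuffer`) are illustrative and not from any real SDK; the point is just that the SDK callback, not AUHAL, drives `AudioUnitRender`:

```c
// Sketch: driving an AUGraph manually from a capture card's buffer
// callback instead of letting AUHAL pull it. gHeadUnit and
// FillCardBuffer are hypothetical names for this example.
#include <AudioToolbox/AudioToolbox.h>

static AudioUnit gHeadUnit;        // unit at the head of the graph
static Float64   gSampleTime = 0;  // running sample time for timestamps

// Called by the broadcast card's SDK whenever it needs more audio.
static OSStatus FillCardBuffer(AudioBufferList *ioData, UInt32 inFrames)
{
    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp ts = {0};
    ts.mSampleTime = gSampleTime;
    ts.mFlags      = kAudioTimeStampSampleTimeValid;

    OSStatus err = AudioUnitRender(gHeadUnit, &flags, &ts,
                                   0 /* output bus */, inFrames, ioData);
    if (err == noErr)
        gSampleTime += inFrames;   // keep the timeline monotonic
    return err;
}
```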
The crux of the matter is the following:
Before playback starts, we have to fill the hardware buffers as quickly as possible during a "preroll" phase. This has to happen faster than real time; it is essentially offline rendering of the AUGraph for about 2 seconds. Once the hardware buffers reach the desired watermark level, the callback semantics switch back to realtime behavior (similar to being driven by the AUHAL).
According to the documentation and related discussions on this mailing list, I understand that a render context like the "preroll phase" would require the property kAudioUnitProperty_OfflineRender to be supported by each audio unit in the graph, and to be set to "true". This rests on the assumption that if an audio unit does not support kAudioUnitProperty_OfflineRender, or it cannot be set to "true", then I must not call AudioUnitRender faster than real time, i.e. correct behavior of the audio unit would no longer be guaranteed.
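For reference, this is roughly how I would try to flip the property on every node before prerolling. A sketch only, under the assumption that units which do not implement the property reject the set call with an error such as kAudioUnitErr_InvalidProperty:

```c
// Sketch: attempt to set kAudioUnitProperty_OfflineRender on every node
// in an AUGraph before the preroll phase. Units that do not implement
// the property are assumed to fail the AudioUnitSetProperty call.
#include <AudioToolbox/AudioToolbox.h>

static OSStatus SetGraphOffline(AUGraph graph, Boolean offline)
{
    UInt32 nodeCount = 0;
    OSStatus err = AUGraphGetNodeCount(graph, &nodeCount);
    if (err != noErr) return err;

    UInt32 value = offline ? 1 : 0;
    for (UInt32 i = 0; i < nodeCount; ++i) {
        AUNode node;
        AudioUnit unit = NULL;
        AUGraphGetIndNode(graph, i, &node);
        AUGraphNodeInfo(graph, node, NULL, &unit);
        err = AudioUnitSetProperty(unit, kAudioUnitProperty_OfflineRender,
                                   kAudioUnitScope_Global, 0,
                                   &value, sizeof(value));
        if (err != noErr)
            return err;  // this unit refused offline rendering
    }
    return noErr;
}
```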
Could someone please confirm that the above assumption is correct?
It seems that, apart from kAudioUnitSubType_DLSSynth, almost none of the audio units provided by Apple support this property. On a side note: Logic DOES bounce faster than realtime, even if active audio unit effects do not support kAudioUnitProperty_OfflineRender (such as AUDelay). So could it be the case that many host implementations simply ignore this property and assume that AU effects are fine with being called faster than real time? It is understandable that units such as AUFilePlayer would not be able to support this, since they have their own timing strategy for file input streaming (as discussed many times on this list).
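The way I have been checking which units advertise the property at all is via AudioUnitGetPropertyInfo. A small sketch (my own probing approach, not documented host practice):

```c
// Sketch: probe whether an audio unit advertises
// kAudioUnitProperty_OfflineRender before trying to set it.
#include <AudioToolbox/AudioToolbox.h>

static Boolean SupportsOfflineRender(AudioUnit unit)
{
    UInt32  size = 0;
    Boolean writable = false;
    OSStatus err = AudioUnitGetPropertyInfo(unit,
                                            kAudioUnitProperty_OfflineRender,
                                            kAudioUnitScope_Global, 0,
                                            &size, &writable);
    // Supported only if the query succeeds and the property is writable.
    return (err == noErr) && writable;
}
```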
To sum up my questions:
- Is it true that AudioUnits should only be used in a realtime context, unless kAudioUnitProperty_OfflineRender is supported and can be set to "true"?
- Is it possible to render faster than real time, at least for 1-2 seconds, even if kAudioUnitProperty_OfflineRender is NOT supported (e.g. with AUMatrixMixer, AUTimePitch, etc., none of which support this property)?
- How are host implementers and effect developers dealing with these issues?
I searched through previous discussions on this mailing list, and the thread that seems most directly related is http://lists.apple.com/archives/coreaudio-api/2011/Mar/msg00131.html . Even though that thread seems to have received a lot of attention, there were no answers on this particular issue.
I would appreciate any advice from developers who have had some experience with the issues mentioned above.
Thanks in advance,
best regards,
Heinrich Fink
_______________________________________________
Coreaudio-api mailing list (email@hidden)