Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
  • Subject: Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
  • From: Brian Willoughby <email@hidden>
  • Date: Wed, 23 Nov 2011 16:48:29 -0800

Heinrich,

You seem to have made a couple of assumptions that are not true:

A) You assume that a 2 second pre-roll must always be significantly faster than 2 seconds.

B) You assume that a non-real-time system can be turned into a real-time system with a mere 2-second pre-roll.

C) You previously assumed that offline render mode will make pre-roll run faster.


On the first note, it's reasonable to assume that if you have a real-time-capable system then the 2-second pre-roll will take 2 seconds in real time when run in online mode. It's possible that the pre-roll might be faster than 2 seconds, but there is no real guarantee of that. If you set the AUs to OfflineRender mode, then it's entirely possible and even probable that they might take longer than 2 seconds. So, my conclusion is that you have set an impossible goal to expect less than 2 seconds of elapsed time for your computers to pre-roll 2 seconds of audio.

Why not just instruct your DJs to start the pre-roll process more than 2 seconds before air time? Then you should be guaranteed that the pre-roll will complete in time. Once the pre-roll is loaded, the computer can sit in that ready state until playback. Whether you use OfflineRender mode or not can be determined separately, but I do not think it is reasonable to expect every computer system to pre-roll in a manner that both saves your operators time by running faster than real time and also provides a margin of error for jitter and other short-term delays during playback.


On the second note, Paul has already explained that pre-roll cannot turn a non-real-time system into a real-time system. To be fair, if your songs are an average of 3 minutes long, and you pre-roll 3 minutes before every track instead of 2 seconds, then pre-roll could potentially turn a non-real-time system into a real-time system. However, under the conditions where you need this, a non-real-time system will take longer than 3 minutes to fill a 3-minute pre-roll buffer - by definition of non-real-time - and if you choose to run in OfflineRender mode then it will most likely take even longer than it would otherwise.

If you think about it, it's basically hopeless to expect a mere 2 seconds of pre-roll to make up for a non-real-time system that will constantly be running slow. The 2 seconds of buffering acts only to safeguard against very short-term deficits. Your average playback speed must be real-time or better, otherwise the 2-second pre-roll would be quickly depleted and then you would hear an audible glitch. Even a system capable of faster-than-real-time playback might briefly experience a hitch in data flow, and that's where the pre-roll helps. But every time part of the 2-second pre-roll is used to make up for samples that aren't ready yet, the system must run faster than real time long enough to re-fill the pre-roll buffer, otherwise things eventually fall apart.


I believe that if you adjust your expectations for reality, then you might have an easier time implementing your code.


Brian Willoughby
Sound Consulting


On Nov 23, 2011, at 02:47, Heinrich Fink wrote:
I am curious how many audio units actually support the OfflineRender property at all (in the realm of audio effects). As you said, preroll does not require better quality than realtime rendering, but rather the exact same audio quality as online rendering at faster rendering speeds (unlike bouncing). If a particular audio graph setup were not able to fill the 2 sec. preroll buffer in a reasonable amount of time, I would handle this similarly to a general audio overload. Most of our users in broadcasting are unlikely to use a heavy effects chain anyway (unlike users of an audio sequencer).

So just ignoring the OfflineRender property and hoping for the best seems to be a possible but potentially unsafe approach. When implementing it this way, we would have to test the possible permutations of the audio graph beforehand and expose only a particular set of audio unit effects to our users which we have confirmed to work without glitches during and after the preroll phase. This of course both limits the user's experience and increases the workload for testing and maintenance on our side.


On Nov 23, 2011, at 10:19 , Stefan Gretscher wrote:
In music production software for recording studios, you can get away with realtime rendering and occasional dropped buffers during preroll - the affected tracks will most likely just start playing a little later than the other tracks. However, when working on broadcast software, dropping buffers when starting audio is not a viable option, and in this use case I don't see any way around switching to offline rendering for the pre-roll if you want it faster than realtime.

This is a good point. In my understanding, the whole idea of executing a preroll phase first, instead of directly using a realtime context, is to have a safety buffer available that compensates for hiccups in the processing chain. This should avoid dropped frames, which might cause artifacts or, even worse, out-of-sync playback. In other words, the point of having a preroll buffer in the first place is that you DON'T have a system that is capable of operating in real time, unlike AudioUnits, which are mostly fully functional in a real-time context.


Now, if using a preroll scenario in my software design introduces less deterministic behavior and even more potential glitches than using realtime only, it makes me wonder whether it wouldn't be better to avoid the preroll phase altogether. I think the root of this problem is that I am trying to connect a system that is primarily designed to be used in a realtime context (audio units) with an adapter (the preroll buffer) that was primarily designed to be used by non-realtime systems in order to enable realtime playout. From this perspective it seems unnecessarily complicated, and that is probably the reason why my use case just doesn't really fit with AudioUnits (or at least isn't very well supported).

While the native SDKs of most video broadcasting cards that we use require a preroll phase, these cards usually provide a CoreAudio hardware device as well (at least the AJA and BlackMagic cards do). In order to ensure proper sync between video frames and audio, we would have preferred to use the SDKs directly - preroll already couples audio chunks with video frames. But I guess that if they implemented their CoreAudio drivers properly, i.e. the reported hardware latencies are correct, we should be able to separate audio playout from video and still ensure correct timing. After all, the CoreAudio hardware should provide us with enough information to determine accurately when audio is going to be on the wire.

So I have a feeling that spending more effort on implementing accurate realtime scheduling for an audio path that is decoupled from video and its preroll phase - i.e. using the CoreAudio device directly, which wouldn't require any faster-than-realtime rendering with unclear behavior - might be the preferable path to follow. This is my first project in broadcasting, so I'm not sure if this is a reasonable change of path, or if I have just started digging my own grave :) Any comments on this alternative solution?

With the decoupled approach, there would of course still be the option of going completely offline and feeding only the preroll buffers. Realtime monitoring (e.g. on a separate aux bus) would then be much harder, of course. Also, any additional time required by audio units to render properly would then have to be covered by the preroll buffer.


_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


