Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- Subject: Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- From: Ross Bencina <email@hidden>
- Date: Wed, 21 Dec 2011 19:00:13 +1100
On 20/12/2011 10:20 PM, Heinrich Fink wrote:
> On Nov 25, 2011, at 22:05, Brian Willoughby wrote:
>> One thing you may be overlooking is that the CoreAudio clock is
>> very accurate.
> True, but I wonder if the rate of the clock used for
> mach_absolute_time() has greater accuracy than the one used for the
> sample clock?
When CoreAudio reports that a device has an actual sample rate of
44098Hz, this is presumably relative to the system monotonic clock --
but I see no reason to assume that the system clock is more accurate
than an audio clock. The deviation from nominal could just as easily be
from a fast system clock as a slow audio clock. Can anyone clarify this?
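For reference, the "actual sample rate" here is the per-device figure the HAL reports; an untested sketch of reading it, assuming a valid output AudioDeviceID and with error handling mostly elided:

/* Untested sketch: read the HAL's measured rate for a device via
   kAudioDevicePropertyActualSampleRate. Assumes `device` is a valid
   output AudioDeviceID. */
#include <CoreAudio/CoreAudio.h>

static Float64 GetActualSampleRate(AudioObjectID device)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyActualSampleRate,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    Float64 rate = 0.0;
    UInt32  size = sizeof(rate);
    OSStatus err = AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &rate);
    return (err == kAudioHardwareNoError) ? rate : 0.0;  /* e.g. ~44098.0 */
}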
>> If you can ensure that your video rendering is tied to the
>> high-accuracy CoreAudio clock then there will be no disadvantage to
>> processing audio and video in separate paths.
Assuming that you are not already trying to synchronise to a house clock
source.
> I have recently spent some time understanding CoreAudio’s timing
> capabilities better (see my other post related to that). I can see how
> to accurately schedule audio using AudioDeviceTranslateTime (e.g.
> anchoring media start times in host time and converting to sample
> time). This should also take things like the actual vs. nominal
> sampling rate of the device into account.
> However, I was wondering if it is customary for hosts to also use the
> current rateScalar during playback (e.g. by always using a Varispeed
> unit in the render path).
Logic Audio provides every imaginable option I think:
http://documentation.apple.com/en/logicpro/usermanual/index.html#chapter=43&section=3&tasks=true
I don't think you're being clear enough about what kind of host you're
writing for "customary" behavior to be meaningful. Is this a video
editing tool, a movie player, a DAW..?
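Incidentally, the host-time anchoring you describe would look roughly like this; an untested sketch using AudioDeviceTranslateTime, with the device ID assumed and error handling elided:

/* Untested sketch: anchor a media start time in host time and translate
   it to the device's sample-time clock with AudioDeviceTranslateTime. */
#include <CoreAudio/CoreAudio.h>

static Float64 SampleTimeForHostTime(AudioDeviceID device, UInt64 hostTime)
{
    AudioTimeStamp in  = { 0 };
    AudioTimeStamp out = { 0 };

    in.mFlags    = kAudioTimeStampHostTimeValid;
    in.mHostTime = hostTime;
    out.mFlags   = kAudioTimeStampSampleTimeValid;  /* ask for sample time back */

    OSStatus err = AudioDeviceTranslateTime(device, &in, &out);
    return (err == kAudioHardwareNoError) ? out.mSampleTime : -1.0;
}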
> Let’s say my host is scheduling playback of a clip that is 4 hrs long.
> Without considering the rateScalar, the clip would be about 0.65 secs
> early for a device that is running at an average of 44098 Hz instead
> of 44100 Hz.
As Brian suggested, ideally you want to clock the video off the audio so
that this drift doesn't happen.
Clearly you don't want to be out by 650ms at the end of the film. For
precise A/V sync you don't want your software to be out by more than
about 40ms, ideally less, since there can be other delays in the chain
that will desynchronise the vision and sound (e.g. display latency).
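For what it's worth, the ~0.65 sec figure checks out from the quoted rates; a throwaway back-of-envelope check:

/* Throwaway check of the drift figure quoted above. */
#include <stdio.h>

int main(void)
{
    const double nominal = 44100.0;        /* Hz, media clock */
    const double actual  = 44098.0;        /* Hz, average device clock */
    const double clipLen = 4.0 * 3600.0;   /* seconds of media */

    double samples  = clipLen * nominal;           /* 635,040,000 frames */
    double playTime = samples / actual;            /* real seconds to play them */
    printf("drift: %.3f s\n", playTime - clipLen); /* ~0.653 s */
    return 0;
}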
> In your experience, would you say that using a Varispeed by default
> in order to compensate for the actual vs. nominal clock could pose any
> problems regarding audio quality?
It depends on what the expectations of the user are. Some people expect
bit-accurate audio playback, and what you're describing will
significantly munge things.
Without further info, in your context, using Varispeed just because you
can't clock the video off the audio sounds like a hack, and not a very
pleasant one... but perhaps I don't understand the requirements.
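If you did go down that road, the wiring would be something like the following untested sketch: drive an AUVarispeed's playback rate from the device's current rate scalar. Whether the resampling cost is acceptable is exactly the question.

/* Untested sketch: drive an AUVarispeed from the device's current rate
   scalar (e.g. taken from the AudioTimeStamp handed to an IOProc).
   mRateScalar is the ratio of actual to nominal host ticks per frame,
   so a slow device (44098 vs 44100) gives a value slightly above 1.0
   and the source is sped up by the same factor. This resamples the
   audio, which is the quality trade-off in question. */
#include <AudioUnit/AudioUnit.h>

static void UpdateVarispeed(AudioUnit varispeed, const AudioTimeStamp *now)
{
    if (!(now->mFlags & kAudioTimeStampRateScalarValid))
        return;

    AudioUnitSetParameter(varispeed,
                          kVarispeedParam_PlaybackRate,
                          kAudioUnitScope_Global,
                          0,
                          (AudioUnitParameterValue)now->mRateScalar,
                          0);
}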
Ross.