Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- Subject: Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- From: Brian Willoughby <email@hidden>
- Date: Tue, 20 Dec 2011 23:33:11 -0800
On Dec 20, 2011, at 03:20, Heinrich Fink wrote:
On Nov 25, 2011, at 22:05, Brian Willoughby wrote:
One thing you may be overlooking is that the CoreAudio clock is
very accurate. If you can ensure that your video rendering is
tied to the high-accuracy CoreAudio clock then there will be no
disadvantage to processing audio and video in separate paths.
CoreAudio time is also host time, but on a much finer scale than
typical OS timing. It's actually quite easy to make sure that
your audio is precisely timed. I think that a much harder problem
will be to get the video timing up to par, especially since it
seems like you might not have access to the video code.
I have recently spent some time understanding CoreAudio’s timing
capabilities better (see my other post related to that). I can see
how to accurately schedule audio using AudioDeviceTranslateTime
(e.g. anchoring media start times in host time and converting to
sample time). This should also take things like the actual vs.
nominal sampling rate of the device into account.
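A minimal sketch of that translation step, assuming the device ID
and the anchor host time are supplied by the host application
(error handling abbreviated):

#include <CoreAudio/CoreAudio.h>

/* Translate an anchor expressed in host time into the matching
 * position on the device's sample clock. */
static Float64 SampleTimeForHostTime(AudioDeviceID device,
                                     UInt64 anchorHostTime)
{
    AudioTimeStamp in  = { 0 };
    AudioTimeStamp out = { 0 };

    in.mHostTime = anchorHostTime;
    in.mFlags    = kAudioTimeStampHostTimeValid;

    out.mFlags   = kAudioTimeStampSampleTimeValid; /* request sample time */

    if (AudioDeviceTranslateTime(device, &in, &out) != noErr)
        return -1.0; /* real code would propagate the error */

    return out.mSampleTime;
}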
However, I was wondering if it is customary for hosts to use the
current rateScalar during playback as well (e.g. by always using a
Varispeed unit in the render path). Let’s say my host is scheduling
playback of a clip that is 4 hours long. Without compensating for
the rateScalar, the clip would end about 0.65 seconds late on a
device running at an average of 44098 Hz instead of 44100 Hz.
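(For reference, the arithmetic behind that figure, using the rates
quoted above:

    4 hrs = 14400 s; 14400 s * 44100 samples/s = 635,040,000 samples
    635,040,000 samples / 44098 samples/s = ~14400.65 s

i.e. roughly 0.65 seconds of drift over the whole clip.)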
In your experience, would you say that using a Varispeed by default
in order to compensate for the actual vs. nominal clock could pose
any problems regarding audio quality?
As an engineer (and consumer), I would recommend that you play the
audio without varispeed and sync the video to the audio. This would
require using the rateScalar to convert audio sample time to offset
time, and then you should be able to keep the video frames in sync
with the audio.
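A minimal sketch of that idea, assuming the current device time
stamp comes from your IOProc (or AudioDeviceGetCurrentTime), and
that startSampleTime and nominalRate are tracked by the host:

#include <CoreAudio/CoreAudio.h>

/* Derive the video presentation clock from the (unaltered) audio
 * device clock. */
static Float64 MovieSecondsElapsed(const AudioTimeStamp *now,
                                   Float64 startSampleTime,
                                   Float64 nominalRate)
{
    Float64 samplesElapsed = now->mSampleTime - startSampleTime;

    /* mRateScalar is the ratio of actual to nominal host ticks per
     * sample frame, so it stretches nominal sample durations onto
     * real elapsed time. */
    return (samplesElapsed / nominalRate) * now->mRateScalar;
}

Presenting each video frame at the offset returned here keeps the
picture locked to the audio clock, while the audio itself stays
untouched.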
Moving a few video frames by 0.65 seconds towards the end of a 4-hour movie
is going to cause far less distortion of the movie than altering the
audio, and probably nobody will ever notice that the 4-hour movie
ends 0.65 seconds later than it should have.
Besides, avoiding varispeed will use less CPU, not to mention that
it avoids altering the audio quality.
Brian Willoughby
Sound Consulting