Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- Subject: Re: Preroll semantics & clarification of kAudioUnitProperty_OfflineRender
- From: Brian Willoughby <email@hidden>
- Date: Fri, 25 Nov 2011 13:05:32 -0800
One thing you may be overlooking is that the CoreAudio clock is very
accurate. If you can ensure that your video rendering is tied to the
high-accuracy CoreAudio clock, then there will be no disadvantage to
processing audio and video in separate paths. CoreAudio time is
expressed in host time, but at a much finer resolution than typical
OS timing facilities. It's actually quite easy to make sure that your
audio is precisely timed.
I think that a much harder problem will be to get the video timing up
to par, especially since it seems like you might not have access to
the video code.
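For reference, here is a minimal sketch of reading that shared host clock
using the HostTime helpers from <CoreAudio/HostTime.h>; the render-callback
remark in the comments describes the usual AudioTimeStamp usage, and the
rest is purely illustrative:

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    // Host time ticks at a hardware-defined rate; the conversion below
    // expresses it in nanoseconds.
    UInt64 hostTicks = AudioGetCurrentHostTime();
    UInt64 hostNanos = AudioConvertHostTimeToNanos(hostTicks);

    printf("host time: %llu ticks (%llu ns)\n",
           (unsigned long long)hostTicks,
           (unsigned long long)hostNanos);

    // In a render callback, the same clock appears as
    // inTimeStamp->mHostTime (when kAudioTimeStampHostTimeValid is set),
    // so a video pipeline can be scheduled against the identical timebase.
    return 0;
}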
Brian Willoughby
Sound Consulting
On Nov 25, 2011, at 07:33, Heinrich Fink wrote:
As a sidenote: My job is to add an audio engine to a TV
broadcasting application with an existing video processing pipeline.
My concern is basically to decide between the following two
approaches:
A] Couple audio processing and output with the existing video
processing pipeline.
B] Use a separate path for audio processing and output.
Both scenarios should use audio unit graphs as the underlying audio
rendering path.
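In either approach, the graph itself can start out very small. As a rough
sketch (not from the original thread; the default-output component choice
and the helper name are illustrative), an AUGraph feeding the default
output unit looks like this:

#include <AudioToolbox/AudioToolbox.h>

// Minimal AUGraph: a single default-output node. A real pipeline would
// add mixer/effect nodes and a render callback upstream of the output.
static AUGraph MakeOutputGraph(void)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription outDesc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
        .componentFlags        = 0,
        .componentFlagsMask    = 0
    };

    AUNode outputNode;
    AUGraphAddNode(graph, &outDesc, &outputNode);

    AUGraphOpen(graph);        // instantiate the AudioUnits
    AUGraphInitialize(graph);  // allocate render resources
    return graph;
}

// The caller would then AUGraphStart(graph) to begin rendering, and
// AUGraphStop/AUGraphUninitialize/DisposeAUGraph on teardown.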