Re: Offline vs. realtime processing?
- Subject: Re: Offline vs. realtime processing?
- From: Brian Willoughby <email@hidden>
- Date: Thu, 21 Nov 2002 00:32:22 -0800
[ Is it possible for an audiounit to determine if it is being used
[ for realtime rendering vs. offline (i.e. mixdown to file)? This
[ can be useful if for example you don't have enough processing
[ resources to do all your fancy DSP in realtime (but you have
[ enough to make it sound "pretty close"). In this case you could
[ do your "better" processing if you're mixing down to a file where
[ the realtime constraint is removed. This would probably be a user
[ pref. Anyway, VST has a way to determine this; I was wondering
[ if there's something similar for AUs.
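One way such a hint can be expressed is a host-settable global property; the
CoreAudio headers define kAudioUnitProperty_OfflineRender for this purpose,
though whether a given host actually sets it before a bounce is up to that
host. A minimal host-side sketch (the unit's SetProperty handler would latch
the flag and its render code would pick the cheap or expensive DSP path from
it):

/*
 * Sketch only: the host tells a unit it is about to render offline by
 * setting the global kAudioUnitProperty_OfflineRender flag (a UInt32).
 */
#include <AudioUnit/AudioUnit.h>

static OSStatus SetOfflineRenderHint(AudioUnit unit, Boolean offline)
{
    UInt32 flag = offline ? 1 : 0;
    return AudioUnitSetProperty(unit,
                                kAudioUnitProperty_OfflineRender,
                                kAudioUnitScope_Global,
                                0,               /* element */
                                &flag,
                                sizeof(flag));
}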
This is an interesting idea, but I see two angles to it.
On the one hand, why limit the quality of real-time processing when faster
processors are coming out all the time? It seems difficult to decide what
cannot be done in realtime when some processors are faster than others, and
some users will have more AUs in their graph than others. You might as well
offer a suite of AUs along a continuum, each trading more CPU for higher
quality.
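The same continuum could also be collapsed into a single unit that exposes a
quality control. A hypothetical sketch, where the "Quality" parameter ID and
its 0.0-1.0 range are invented for illustration and not anything the API
defines:

/*
 * Hypothetical: one AU exposes a global "Quality" parameter so the user
 * (or host) can trade CPU for fidelity instead of swapping units.
 */
#include <AudioUnit/AudioUnit.h>

enum { kMyAU_Param_Quality = 0 };        /* hypothetical parameter ID */

static OSStatus SetQuality(AudioUnit unit, Float32 quality /* 0.0 .. 1.0 */)
{
    return AudioUnitSetParameter(unit,
                                 kMyAU_Param_Quality,
                                 kAudioUnitScope_Global,
                                 0,          /* element */
                                 quality,
                                 0);         /* buffer offset in frames */
}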
On the other hand, it sure would be nice if the hosting app could indicate a
non-real-time bounce to a file and essentially swap in a higher-quality AU
with the exact same settings as the real-time one. Requiring separate graphs
for real-time and non-real-time use is not user friendly, but I cannot think
of any way to guarantee that faster CPUs are taken full advantage of.
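If a host did want to attempt the swap, the settings could at least be carried
across by hand, assuming the real-time and offline variants publish the same
parameter IDs (that assumption is mine; the API promises nothing of the sort).
A rough sketch:

/*
 * Sketch: copy every global parameter value from one unit to another,
 * assuming both units use the same parameter IDs with the same meaning.
 */
#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

static OSStatus CopyMatchingParameters(AudioUnit src, AudioUnit dst)
{
    UInt32 size = 0;
    OSStatus err = AudioUnitGetPropertyInfo(src, kAudioUnitProperty_ParameterList,
                                            kAudioUnitScope_Global, 0, &size, NULL);
    if (err != noErr || size == 0)
        return err;

    UInt32 count = size / sizeof(AudioUnitParameterID);
    AudioUnitParameterID *ids = (AudioUnitParameterID *)malloc(size);
    if (ids == NULL)
        return -1;                       /* arbitrary failure code for this sketch */

    err = AudioUnitGetProperty(src, kAudioUnitProperty_ParameterList,
                               kAudioUnitScope_Global, 0, ids, &size);
    for (UInt32 i = 0; err == noErr && i < count; i++) {
        Float32 value = 0.0f;
        err = AudioUnitGetParameter(src, ids[i], kAudioUnitScope_Global, 0, &value);
        if (err == noErr)
            err = AudioUnitSetParameter(dst, ids[i], kAudioUnitScope_Global, 0, value, 0);
    }
    free(ids);
    return err;
}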
Brian Willoughby
Sound Consulting