Re: Playback timing
- Subject: Re: Playback timing
- From: William Stewart <email@hidden>
- Date: Mon, 19 Dec 2005 16:59:22 -0800
On 19/12/2005, at 4:37 PM, Craig Hopson wrote:
Our application has some custom generators. At playback time we
build a MusicSequence with several custom tracks into which we put
MIDIMetaEvents (could easily be MusicEventUserData, of course). We
then want to play the sequence with frame- or sample-accurate
playback, meaning that sometimes an event's output must start in
the middle of a render buffer. Some of the generators take time to
process input samples into output samples, and the number of output
samples is not always equal to the number of input samples. This
processing time varies from unit to unit and from event to event, so
overall latency is also an issue.
Further, we need to update the UI during playback so the user knows
what is happening. UI updating must take lower priority than
playback, of course.
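For concreteness, a minimal sketch of building such a track. The
4-byte generator ID payload and the BuildCustomSequence name are
illustrative assumptions, not from the original post:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    // Build a sequence with one custom track whose user events carry a
    // (hypothetical) 4-byte generator ID as their payload.
    MusicSequence BuildCustomSequence(void)
    {
        MusicSequence sequence;
        NewMusicSequence(&sequence);

        MusicTrack track;
        MusicSequenceNewTrack(sequence, &track);

        // MusicEventUserData is variable-length: `length` payload bytes.
        UInt32 generatorID = 1; // hypothetical payload
        MusicEventUserData *event =
            malloc(offsetof(MusicEventUserData, data) + sizeof(generatorID));
        event->length = sizeof(generatorID);
        memcpy(event->data, &generatorID, sizeof(generatorID));

        MusicTrackNewUserEvent(track, 4.0 /* beats */, event); // copied
        free(event);
        return sequence;
    }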
It seems there are two ways to handle this playback: (Tell me if I
have anything wrong or have missed some critical bit...)
1. Use a MusicPlayer
- create the AUGraph
- create the MusicSequence
- install a callback via MusicSequenceSetUserCallback
- the callback will dispatch events to the proper generator
- the callback will notify the UI to update if appropriate
- create a MusicPlayer and install the sequence
- pre-roll the MusicPlayer
- start the MusicPlayer
- adjust the headphones
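A minimal sketch of that setup, assuming the graph and sequence
already exist. DispatchToGenerator and NotifyUIAsync are hypothetical
helpers; note that MusicSequenceSetUserCallback fires for
MusicEventUserData events, and the UI notification must not block
(hand off to a lower-priority thread or run loop):

    #include <AudioToolbox/AudioToolbox.h>

    extern void DispatchToGenerator(void *clientData, MusicTrack track,
                                    MusicTimeStamp when,
                                    const MusicEventUserData *data);
    extern void NotifyUIAsync(MusicTimeStamp when);

    // Called as each user event comes up for scheduling.
    static void UserEventCallback(void *inClientData,
                                  MusicSequence inSequence,
                                  MusicTrack inTrack,
                                  MusicTimeStamp inEventTime,
                                  const MusicEventUserData *inEventData,
                                  MusicTimeStamp inStartSliceBeat,
                                  MusicTimeStamp inEndSliceBeat)
    {
        DispatchToGenerator(inClientData, inTrack, inEventTime, inEventData);
        NotifyUIAsync(inEventTime); // must not block the scheduling thread
    }

    MusicPlayer SetUpPlayer(AUGraph graph, MusicSequence sequence)
    {
        MusicSequenceSetAUGraph(sequence, graph);
        MusicSequenceSetUserCallback(sequence, UserEventCallback, NULL);

        MusicPlayer player;
        NewMusicPlayer(&player);
        MusicPlayerSetSequence(player, sequence);
        return player; // preroll and start are shown further below
    }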
** How is latency handled in this model?
You need to know where the collection points are - for instance,
multiple inputs feeding a mixer, then the mixer's output: those inputs
are a collection point. Then you have to understand the latency of
each path arriving at a collection point, and add compensating latency
to the faster paths. You have to have a good notion of your signal
flow and how latency accumulates through each stage of that flow, and
then be able to adjust for it. In Tiger we ship an AUSampleDelay that
is specifically designed for making these kinds of compensations.
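A minimal sketch of that compensation, assuming two chains meeting at
a mixer and an AUSampleDelay inserted on the lower-latency input;
summing over every unit in a chain is left to the caller:

    #include <AudioToolbox/AudioToolbox.h>

    // Reported latency, in seconds, of one AU. Sum this over each unit
    // in a chain to get that chain's total latency at the mixer input.
    static Float64 UnitLatencySeconds(AudioUnit unit)
    {
        Float64 seconds = 0;
        UInt32 size = sizeof(seconds);
        AudioUnitGetProperty(unit, kAudioUnitProperty_Latency,
                             kAudioUnitScope_Global, 0, &seconds, &size);
        return seconds;
    }

    // Delay a faster chain so that all inputs line up at the mixer.
    static void CompensateInput(AudioUnit sampleDelay, Float64 chainSeconds,
                                Float64 worstSeconds, Float64 sampleRate)
    {
        Float32 frames = (Float32)((worstSeconds - chainSeconds) * sampleRate);
        AudioUnitSetParameter(sampleDelay, kSampleDelayParam_DelayFrames,
                              kAudioUnitScope_Global, 0, frames, 0);
    }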
How do we support the notion of preroll in our units?
I'm not sure what you mean by this - but if you are talking about,
say, preloading media samples, you have to do that yourself when you
get to the preroll time of the players.
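For instance, a sketch of that ordering; PreloadGeneratorMedia is a
hypothetical hook for whatever preloading your generators need:

    #include <AudioToolbox/AudioToolbox.h>

    extern void PreloadGeneratorMedia(void); // hypothetical

    void PrerollAndStart(MusicPlayer player)
    {
        MusicPlayerPreroll(player);   // graph resources get prepared here
        PreloadGeneratorMedia();      // your own step; the API won't do it
        MusicPlayerStart(player);
    }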
Do all of the generators need to support some basic AU interface?
That's a question you need to answer yourself - it depends on how you
are going to integrate this into the rest of the processing you are
doing.
How is inter-buffer timing handled? (An event begins in the
middle of one render buffer and ends in another.)
The first part of that first buffer is silent.
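In a render callback that means zeroing the frames before the event's
start sample. A sketch, assuming non-interleaved Float32 buffers and a
hypothetical NextGeneratorSample that produces the event's audio:

    #include <AudioToolbox/AudioToolbox.h>

    typedef struct {
        Float64 eventStartSample; // event start, output sample timeline
    } GeneratorState;

    extern float NextGeneratorSample(GeneratorState *state); // hypothetical

    static OSStatus GeneratorRender(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
    {
        GeneratorState *state = (GeneratorState *)inRefCon;
        Float64 lead = state->eventStartSample - inTimeStamp->mSampleTime;
        UInt32 offset = 0;
        if (lead > 0)
            offset = lead >= inNumberFrames ? inNumberFrames : (UInt32)lead;

        for (UInt32 f = 0; f < inNumberFrames; ++f) {
            // Frames before the event start are silent; the rest are audio.
            float sample = (f < offset) ? 0.0f : NextGeneratorSample(state);
            for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b)
                ((float *)ioData->mBuffers[b].mData)[f] = sample;
        }
        return noErr;
    }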
2. Handle playback ourselves
- create the AUGraph
- In the graph, install AUs which receive events based on the current
  MusicSequence time stamp.
- These AUs will have a (threaded) ring buffer (or similar mechanism)
  with which they can manage output of samples. They will hold ring
  data until the output time stamp matches the time stamp associated
  with their current event, providing sample-accurate timing.
  Otherwise they will fill the render buffer(s) with zeros.
If you do this, you should set the "is silent" render flag, so that
AUs that look at that flag can ignore your silence...
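A sketch of that, with RingHasDataForSlice and RingCopySlice standing
in for whatever ring-buffer queries your engine provides:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdbool.h>
    #include <string.h>

    extern bool RingHasDataForSlice(void *ring, const AudioTimeStamp *ts,
                                    UInt32 frames);
    extern void RingCopySlice(void *ring, const AudioTimeStamp *ts,
                              AudioBufferList *ioData, UInt32 frames);

    static OSStatus RingRender(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
    {
        if (!RingHasDataForSlice(inRefCon, inTimeStamp, inNumberFrames)) {
            // Nothing due yet: output zeros and say so, letting
            // downstream AUs skip processing the silence.
            for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b)
                memset(ioData->mBuffers[b].mData, 0,
                       ioData->mBuffers[b].mDataByteSize);
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
            return noErr;
        }
        RingCopySlice(inRefCon, inTimeStamp, ioData, inNumberFrames);
        return noErr;
    }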
Bill
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________