Re: AU MIDI events scheduled beyond next rendering slice
- Subject: Re: AU MIDI events scheduled beyond next rendering slice
- From: Bill Stewart <email@hidden>
- Date: Thu, 7 Aug 2003 11:10:05 -0700
Urs
On Thursday, August 7, 2003, at 01:13 AM, Urs Heckmann wrote:
On Thursday, 07.08.03, at 02:11 (Europe/Berlin), Jeremy Sagan wrote:
Urs,
I think you are misunderstanding me. I will try to clarify.
Yeah. Sorry.
My personal opinion, from a host perspective, is that I agree. This
should not happen. It is the host's job to do scheduling.
Hosts can't queue into a running render process...
<snip>
I am not sure what you are responding to here. I was just trying to
write what Bill eventually wrote more eloquently "But (and the
documentation is I think clear on this), the frame offsets are always
described as being offsets into the NEXT slice of audio that is to be
rendered (and for ramped parameters, they are rescheduled for each
slice),..." Thus the AU only needs to keep track of one buffer full
of data for MIDI events.
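(To make that contract concrete, here is a rough host-side sketch, not taken from the thread: MusicDeviceMIDIEvent and AudioUnitRender are the actual CoreAudio calls, while PendingNote, RenderNextSlice and the surrounding loop are invented purely for illustration.)

#include <AudioToolbox/AudioToolbox.h>
#include <vector>

// A sequencer event on the host's absolute sample timeline (invented type).
struct PendingNote {
    SInt64 sampleTime;
    UInt8  status, data1, data2;
};

// Deliver only the events that fall inside the NEXT slice, expressed as
// offsets into that slice (0 .. frames-1), then render the slice.
OSStatus RenderNextSlice(MusicDeviceComponent synth,
                         const AudioTimeStamp &ts,   // ts.mSampleTime = start of next slice
                         UInt32 frames,
                         AudioBufferList *abl,
                         const std::vector<PendingNote> &pending)
{
    for (const PendingNote &n : pending) {
        SInt64 offset = n.sampleTime - (SInt64)ts.mSampleTime;
        if (offset >= 0 && offset < (SInt64)frames)
            MusicDeviceMIDIEvent(synth, n.status, n.data1, n.data2, (UInt32)offset);
        // Events further in the future are held back until their own slice comes up.
    }
    AudioUnitRenderActionFlags flags = 0;
    return AudioUnitRender(synth, &flags, &ts, 0 /*output bus*/, frames, abl);
}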
Oops, oh well, second-language problem, I guess. I had already read Marc's
"scheduling of midi" as roughly the stuff I was working on during the
last 3 days. - Of course, the host has to do scheduling properly.
However - and here I'm responding to/disagreeing with the consensus about
the "next buffer only" thing - I think some future knowledge would be fine
as well. If there were a limit on the offset, like 2x ProcessBufferSize,
some things could be done more easily.
Imagine I wanted NoteOns at least 128 samples in advance, for some
future vision. Those 128 samples could be used to steal voices more
economically without clicks, to prepare an "inner queue" more easily,
and to do a whole lot of other nice stuff.
For that, I would usually set the plug's latency to 128 samples, so I
simply add 128 to each timestamp and process without latency. That
would hopefully do the trick, but it would screw up timing for live
play.
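(A toy standalone sketch of that workaround, again not from the thread: none of this is AU SDK code, and the names plus the 128-sample constant simply follow the description above. kAudioUnitProperty_Latency itself is reported in seconds, hence the division by the sample rate.)

#include <cstdint>
#include <cstdio>
#include <deque>

struct ShiftedEvent { uint8_t status, data1, data2; uint32_t frameOffset; };

class LookaheadModel {
public:
    LookaheadModel(double sampleRate, uint32_t lookaheadFrames)
        : mRate(sampleRate), mLookahead(lookaheadFrames) {}

    // What the plug-in would report through kAudioUnitProperty_Latency (seconds).
    double reportedLatencySeconds() const { return mLookahead / mRate; }

    // Every incoming event is pushed mLookahead frames into the future, so the
    // voice allocator always knows about a note 128 samples before it sounds.
    void scheduleEvent(uint8_t status, uint8_t d1, uint8_t d2, uint32_t offsetInSlice) {
        mQueue.push_back({status, d1, d2, offsetInSlice + mLookahead});
    }

    // Events whose shifted offset falls inside the current slice are consumed;
    // the remainder stays queued for the following slice.
    std::deque<ShiftedEvent> takeEventsDueWithin(uint32_t frames) {
        std::deque<ShiftedEvent> due;
        for (auto it = mQueue.begin(); it != mQueue.end();) {
            if (it->frameOffset < frames) { due.push_back(*it); it = mQueue.erase(it); }
            else { it->frameOffset -= frames; ++it; }
        }
        return due;
    }

private:
    double mRate;
    uint32_t mLookahead;
    std::deque<ShiftedEvent> mQueue;
};

int main() {
    LookaheadModel synth(44100.0, 128);
    std::printf("latency to report: %.6f s\n", synth.reportedLatencySeconds());
    synth.scheduleEvent(0x90, 60, 100, 0);        // note-on at the start of a slice
    auto due = synth.takeEventsDueWithin(512);    // comes due 128 samples later
    std::printf("events due this slice: %zu\n", due.size());
}

The catch is exactly the one described above: during playback the host compensates for the reported latency, but a note played live still sounds 128 samples late.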
I would feel perfectly fine if we had an
EventsCome_PreferredlyWithFutureVisionIntoNextBuffer property that could
be set to a preferred look-ahead. Timestamps could still be zero for live
play, but for playback they'd always come some samples ahead of time,
i.e. with timestamps t in the range n <= t < (BufferSize + n). That could
save a lot of CPU cycles for Music Devices.
I think you are trying to solve a hard problem, but in the wrong place.
We do *not* expect that AUs are given scheduling information beyond
the next buffer.
There are various speculative approaches, however, that an AU could take
(particularly a synth) to try to even out its workload.
One that we've toyed with is having a synth take on a "controllable"
minimal CPU usage - so that when rendering any particular slice, once it
is done with its current buffer and has time left, it could go and do
some work to calculate notes for the next buffer (staying within this
lower-bound limit, of course). In many cases this would give a synth a
much more even CPU usage, and in many cases this additional work would
be used and not discarded. You could even think of algorithms that look
at the parameter changes coming in for the current buffer, figure out
which notes are being tweaked, and then presume that those notes will be
tweaked again, etc.
This doesn't violate the real-time usage construct of the AU-host
relationship, and if this idea were interesting (I can certainly see in a
performance situation where this has some advantages), this model could
be formally published with a property to allow it to be controlled by the
host (say kAudioUnitProperty_MinimumCPULoad).
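(To make the idea concrete, a rough standalone sketch with everything invented for illustration: the property named above is hypothetical, and a real synth would work from the host's timing constraints rather than a wall clock as done here.)

#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;

struct SpeculativeSynth {
    double sampleRate     = 44100.0;
    double minimumCPULoad = 0.5;   // fraction of real time we are allowed to consume

    void renderSlice(float *out, uint32_t frames) {
        auto start  = Clock::now();
        auto budget = std::chrono::duration<double>(minimumCPULoad * frames / sampleRate);

        renderDueVoices(out, frames);   // the work that must be done for this slice

        // If the mandatory work finished early, precompute voices we already know
        // will sound in the next slice, but never exceed the agreed budget.
        while (Clock::now() - start < budget && haveSpeculativeWork())
            precomputeOneVoiceForNextSlice();
    }

    // Placeholders; a real synth would fill these in.
    void renderDueVoices(float *, uint32_t) {}
    bool haveSpeculativeWork() const { return false; }
    void precomputeOneVoiceForNextSlice() {}
};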
A while ago, someone else was talking about threading their rendering
to solve this problem as well using similar types of speculative
algorithms.
In a sequencer context, hosts actually do their work ahead of the time
that their data is going to be played. So I presume that the host is
able to absorb some of these spikes of usage anyway, as long as it gets
its data done in time for playback. If this isn't tweakable in the apps,
to allow for spiky sequences, maybe it should be?
In this case (which is fairly normal actually), because the host is
rendering ahead of time, the strategy you are proposing doesn't really
buy you anything, as this is certainly something that a host can deal
with.
This could get really tweaky! (For example, the host could use the CPU
Load property to tell you from buffer to buffer what your constraints
are...)
However, by saying, ok I'll deal with 2 buffers ahead, you really
aren't solving anything - what if the big crunch is 3 buffers ahead?
The host can see this, can react to this, but you can't.
Ultimately I think the host is in a far better position to do this
scheduling - The host knows this, and can adjust your scheduling
appropriately (say it sees lots of events coming up, so it goes and
schedules your work earlier, so you have a chance to do it "in time")...
From my point of view, by preserving the real-time context of an AU, we
provide a clear semantic of how a host and an AU interact. It's a clear
and precise contract. As only the host knows the full context of an
AU's usage, I think it has to be the responsibility of the host to
deal intelligently and flexibly with how it makes use of that knowledge.
And if you are in a real-time performance situation, well, you only
ever know what you know now...
Bill
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.