Re: AudioUnitScheduleParameters
- Subject: Re: AudioUnitScheduleParameters
- From: William Stewart <email@hidden>
- Date: Mon, 11 Apr 2005 12:35:01 -0700
On 11/04/2005, at 12:08 PM, Ev wrote:
On Apr 11, 2005, at 1:41 PM, William Stewart wrote:
Nope - in fact you may lose scheduling events for ramping if you
schedule stuff in Post Render...
OK, I looked into the RenderNotify call, and it seems as though it
will be called before the *audio unit* renders, which is presumably
after the *callback render*, so it might be a no-brainer - my 'play
location' will already be current.
Nope - it looks like this:
Caller calls AudioUnitRender:
AU - calls Render Notify - pre flag set
AU - goes into render logic
- AU "get input" - calls any RenderInput callbacks (or pulls
on its connections to upstream AUs)
- AU processes input
AU - calls Render Notify - post flag set
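So if you want events to land in the current cycle, schedule them from
the pre-render notification. Roughly, a notify proc that tells the two
phases apart looks like this (just a sketch; myUnit and myRefCon are
placeholders):

    #include <AudioUnit/AudioUnit.h>

    // One render-notify proc sees both phases of the same render cycle.
    static OSStatus MyRenderNotify(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
            // Before the AU pulls input and renders - parameter events
            // scheduled here take effect in this render cycle.
        }
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // After the AU has rendered - too late for this cycle, which
            // is why scheduling ramps here can lose events.
        }
        return noErr;
    }

    // Attach it with:
    // AudioUnitAddRenderNotify(myUnit, MyRenderNotify, myRefCon);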
The MusicSequence APIs also support scheduling ramps - all you need
is a track set up with MusicTrackSetProperty, passing
kSequenceTrackProperty_AutomatedParameters and a UInt32 set to 1
(this tells the sequence engine to treat the track as automated).
Then it looks at parameter events in that track as pairs:
The first param event - start of the first ramp
The second param event - end of the first ramp
The third param event - start of the second ramp
The fourth param event - end of the second ramp, etc...
The two events of a pair must of course have the same paramID - if
you need overlapping ramps for different params, then you will need
to put them in different tracks.
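In code, setting up such a track looks roughly like this (a sketch
only - the track, beat times, and parameter ID are placeholders, and
error handling is minimal):

    #include <AudioToolbox/AudioToolbox.h>

    // Mark a track as automated, then schedule one ramp as a
    // start/end pair of parameter events.
    static OSStatus ScheduleOneRamp(MusicTrack track)
    {
        UInt32 automated = 1;
        OSStatus err = MusicTrackSetProperty(track,
                            kSequenceTrackProperty_AutomatedParameters,
                            &automated, sizeof(automated));
        if (err) return err;

        ParameterEvent start = { 0 };
        start.parameterID = 0;              // placeholder param ID
        start.scope       = kAudioUnitScope_Global;
        start.element     = 0;
        start.value       = 0.0;            // value at start of the ramp
        err = MusicTrackNewParameterEvent(track, 0.0, &start);  // beat 0
        if (err) return err;

        ParameterEvent end = start;         // same paramID, scope, element
        end.value = 1.0;                    // value at end of the ramp
        return MusicTrackNewParameterEvent(track, 4.0, &end);   // beat 4
    }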
And I'm assuming these ramps are linear, is that the case?
Yes, the ramp information sent to the AU is linear... However, an AU
can decide to interpret that ramp in any way it finds appropriate
(and might even provide UI to apply curves, etc.).
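Conceptually, the per-frame evaluation of the linear case is just
this (my notation, not from any header - a curve-capable AU would
replace the t term with a shaping function):

    // Evaluate a linear ramp at frame n of the current buffer, given
    // the ramp fields of a scheduled parameter event.
    static Float32 RampValueAtFrame(SInt32 startBufferOffset,
                                    UInt32 durationInFrames,
                                    Float32 startValue,
                                    Float32 endValue,
                                    SInt32 n)
    {
        Float32 t = (Float32)(n - startBufferOffset)
                  / (Float32)durationInFrames;
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return startValue + t * (endValue - startValue);
    }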
That's the tough issue with curves: how much resolution between
steps or ramps do you give it before it becomes unwieldy in terms of
event count vs. sonic stair-stepping? Ramping at least makes that
decision a lot easier, and one graph buffer size is actually a pretty
good window in terms of timing. I'd imagine that if you wrote two
MusicSequence track parameter events as a ramp (as you demonstrate
above), they eventually get broken down into buffer-sized ramps
anyway - I'm probably just coding the same thing that the
MusicSequence API is calculating.
Thanks for your help, Bill - you've given me a lot to go on. This
should work fantastically.
You can apply ramps across buffers - there's no time limitation on
the API.
What happens is that the client reschedules the ramp, with a progress
indicator, for each buffer the AU is rendering; the progress
indicator is the startBufferOffset in the AudioUnitParameterEvent
structure you pass to the API. Aside from allowing you to schedule
ramps across multiple buffers, this also lets you "start" a ramp
part-way through (for instance, if you seek the timeline to a point
that intersects a ramp). The Sequence APIs take care of both of these
situations for you as part of their sequencing implementation.
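If you do the rescheduling yourself (say, from a pre-render notify
rather than via the sequence), it comes out roughly like this - a
sketch, with made-up helper and argument names:

    #include <AudioUnit/AudioUnit.h>

    // Reschedule an in-progress linear ramp at the top of each buffer.
    // A negative startBufferOffset tells the AU the ramp began that
    // many frames before the current buffer.
    static OSStatus RescheduleRamp(AudioUnit unit,
                                   AudioUnitParameterID paramID,
                                   Float32 startValue, Float32 endValue,
                                   UInt32 rampFrames,     // total ramp length
                                   UInt32 framesElapsed)  // progress so far
    {
        AudioUnitParameterEvent ev = { 0 };
        ev.scope     = kAudioUnitScope_Global;
        ev.element   = 0;
        ev.parameter = paramID;
        ev.eventType = kParameterEvent_Ramped;
        ev.eventValues.ramp.startBufferOffset = -(SInt32)framesElapsed;
        ev.eventValues.ramp.durationInFrames  = rampFrames;
        ev.eventValues.ramp.startValue        = startValue;
        ev.eventValues.ramp.endValue          = endValue;
        return AudioUnitScheduleParameters(unit, &ev, 1);
    }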
Bill
Ev
Technical Knowledge Officer
Head Programmer/Designer
Audiofile Engineering
http://www.audiofile-engineering.com/
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________