Re: Changing the latency from within the render callback.
- Subject: Re: Changing the latency from within the render callback.
- From: Philippe Wicker <email@hidden>
- Date: Fri, 27 Feb 2004 22:35:34 +0100
Yes, it is possible.
I had intended to use the buffer size because I thought that this value was not likely to change (unless the user changes the HW IO buffer size) and because it would give the lowest latency. There may, however, be a better solution if I can split the requested buffer into smaller parts. For instance, if my render callback is asked to render 512 frames, I could call the render of the connected AU twice with 256 frames each time. That would allow me to set my latency to a predefined value of 256 frames.
Do you know - or does anyone else - whether it is legal to ask for several smaller buffers on input, as long as the requested number of frames is returned on output?
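To make the idea concrete, here is a rough sketch of the splitting loop I have in mind. The helper name is mine; it assumes a single non-interleaved Float32 buffer and a slice size that is a multiple of 256 frames, and error handling is omitted:

#include <AudioUnit/AudioUnit.h>

enum { kFixedChunk = 256 };

/* Pull audio from the connected AU in fixed 256-frame chunks,
   whatever slice size we were asked to render ourselves. */
static OSStatus PullInFixedChunks(AudioUnit                    inSourceUnit,
                                  AudioUnitRenderActionFlags  *ioActionFlags,
                                  const AudioTimeStamp        *inTimeStamp,
                                  UInt32                       inBusNumber,
                                  UInt32                       inNumberFrames,
                                  AudioBufferList             *ioData)
{
    AudioTimeStamp ts = *inTimeStamp;

    for (UInt32 done = 0; done < inNumberFrames; done += kFixedChunk) {
        /* Shallow copy is only valid for a single-buffer list. */
        AudioBufferList chunk = *ioData;
        chunk.mBuffers[0].mData =
            (Float32 *)ioData->mBuffers[0].mData + done;
        chunk.mBuffers[0].mDataByteSize = kFixedChunk * sizeof(Float32);

        OSStatus err = AudioUnitRender(inSourceUnit, ioActionFlags, &ts,
                                       inBusNumber, kFixedChunk, &chunk);
        if (err != noErr)
            return err;

        ts.mSampleTime += kFixedChunk;  /* advance the timestamp per chunk */
    }
    return noErr;
}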
Cheers
Philippe

On Feb 27, 2004, at 5:06 PM, Marc Poirier wrote:
Would it be possible to do your own buffering based on the MaxFramesPerSlice value, so that your latency is always fixed at that and not dependent on the size of each slice? This may result in higher latency in some cases, but I would be inclined to think that higher but constant latency is better than lower but variable latency...
Marc
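For what it's worth, the fixed-latency buffering you describe could be as simple as a delay line sized to MaxFramesPerSlice. A minimal mono sketch - the names are made up, and a real plug-in would query kAudioUnitProperty_MaximumFramesPerSlice at initialize time instead of hard-coding the size:

#include <string.h>

#define kMaxFrames 4096  /* stand-in for the queried MaximumFramesPerSlice */

typedef struct {
    float data[2 * kMaxFrames];  /* delay line, pre-filled with silence */
    int   writePos;
    int   readPos;
} FixedLatencyFifo;

static void FifoInit(FixedLatencyFifo *f)
{
    memset(f, 0, sizeof(*f));
    f->writePos = kMaxFrames;    /* reader lags writer by exactly kMaxFrames */
}

/* Constant kMaxFrames latency regardless of how many frames each slice asks for. */
static void FifoProcess(FixedLatencyFifo *f,
                        const float *in, float *out, int nFrames)
{
    for (int i = 0; i < nFrames; i++) {
        f->data[f->writePos] = in[i];
        f->writePos = (f->writePos + 1) % (2 * kMaxFrames);
        out[i] = f->data[f->readPos];
        f->readPos = (f->readPos + 1) % (2 * kMaxFrames);
    }
}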
On Fri, 27 Feb 2004, Philippe Wicker wrote:
Some time ago I published on my web site the sources of a set of tools enabling audio communication between processes. I share my spare time between this project and another, more "usual" one for which there is no such latency problem. Recently there was a thread on this list called "multithreaded mixer" which showed me that the synchronization method I'm currently using between a source audio thread in one process and a destination audio thread in another process is not compliant with real-time thread scheduling constraints (the destination thread is blocked waiting for the source thread to complete its rendering, which is the "number one bad thing to do" according to Jeff Moore's answers). So I have to change my synchronization technique.

In some configurations (e.g. when an AU subgraph belonging to process A is inserted into an AU chain in a host running in process B, the audio is sent out through a "send" plug and received back via a "return" plug), I need the audio to be rendered by the part of the AU chain B located ahead of the "send" port before I can send it to the external AU subgraph A and get it back via the "return" port to finish the render. Because I cannot block audio thread B, one solution is to use a ping-pong buffer shared between both threads. Hence the latency of one buffer.
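To sketch what I mean by the ping-pong buffer: two buffers in shared memory plus a "which one is readable" flag that the threads exchange without blocking. The names are made up, and I'm using C11 atomics purely to illustrate the memory-ordering requirement; the real code would use whatever primitive the shared-memory layer provides. The sketch also assumes both sides run at the same cycle rate:

#include <stdatomic.h>
#include <string.h>

#define kPingPongFrames 512

typedef struct {
    float      buf[2][kPingPongFrames];  /* lives in shared memory */
    atomic_int readable;                 /* index of the buffer safe to read */
} PingPong;

/* Source thread (process A): fill the buffer the reader is NOT using,
   then publish it. Never blocks. */
static void SourceCycle(PingPong *pp, const float *rendered)
{
    int r = atomic_load_explicit(&pp->readable, memory_order_acquire);
    int w = 1 - r;
    memcpy(pp->buf[w], rendered, sizeof(pp->buf[w]));
    atomic_store_explicit(&pp->readable, w, memory_order_release);
}

/* Destination thread (process B): take whatever was last published.
   Never waits; the price is the one-buffer latency mentioned above. */
static void DestinationCycle(PingPong *pp, float *out)
{
    int r = atomic_load_explicit(&pp->readable, memory_order_acquire);
    memcpy(out, pp->buf[r], sizeof(pp->buf[r]));
}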
Philippe Wicker
email@hidden
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.