Re: achieving very low latency
- Subject: Re: achieving very low latency
- From: William Stewart <email@hidden>
- Date: Tue, 10 Jul 2012 12:08:23 -0700
On Jul 10, 2012, at 7:03 AM, AI Developer wrote:
Please Correct me if I'm totally mistaken here, but isn't this
fundamentally the same as what a lot of people want to do when using
a MIDI Keyboard with a Software Synthesizer?
Yes.
A software synth sits in the render chain (the IOProc) so that it can respond to MIDI events. Every time it is asked to produce data (AudioUnitRender), it either provides more rendered audio in response to the MIDI events that have been sent to it, or outputs silence (memsets the buffer), which it has to do when it has no data to render. By being in the render chain, it has to produce valid data, of course.
Bill
Thanks.
Devendra.
On 7/4/12 7:19 PM, Daphne Ippolito wrote:
Hello,
I am writing software for a psychology lab that needs to play
sounds with sub-millisecond latency for one of its experiments.
The original software the lab used runs on OS 8/9 and makes use
of the old SndManager system. It loads all sounds into memory at
the beginning of the experiment, and is later able to start the
sounds with sub-millisecond latency. I have not been able to
replicate with Core Audio the low-latency levels achieved with
the old system. Is it possible to play sounds with
sub-millisecond or even <5-10 ms latency in OS X? Is the
"load all sounds into memory" approach still a good one?
Thank you,
Daphne Ippolito
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden