Re: latency of MusicDeviceMIDIEvent and sampler device on iOS
- Subject: Re: latency of MusicDeviceMIDIEvent and sampler device on iOS
- From: Ross Bencina <email@hidden>
- Date: Wed, 19 Mar 2014 12:27:40 +1100
Hi Hamish,
On 18/03/2014 11:46 AM, Hamish Moffatt wrote:
> We're not trying to sync to audio though, just get MIDI events out
> accurately. In my example I'm trying to play a note every 120ms (eighth
> notes at 240 beats per minute). Pretty often the audio is ~20ms late.
> Looking into it a bit further I suspect this is one whole 1024 sample
> buffer (23ms at 44.1kHz). It catches up for the next event.
>
> How would I use the audio clock as reference anyway? Using
> AudioUnitAddRenderNotify?
If you're scheduling MIDI from outside the callback, there is always
the risk that you will "miss the boat" on the next callback if you try
to schedule an event for the current time X.
For what it's worth, here's how I would do it:
Set up a lock-free queue that is written to by your scheduler and
drained in the audio callback. The queue contains timestamped MIDI
events (probably timestamped with the system clock, since that's the
easiest clock to access outside the audio callback).
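For concreteness, here is a minimal sketch of that queue (the names
ScheduledMidiEvent and MidiEventFifo are mine, not anything in Core Audio).
A fixed-capacity single-producer/single-consumer ring buffer is enough,
because there is exactly one writer (your scheduler thread) and one reader
(the render callback), and it never locks or allocates:

#include <atomic>
#include <cstddef>
#include <cstdint>

struct ScheduledMidiEvent {
    uint64_t hostTime;  // system clock timestamp, e.g. from mach_absolute_time()
    uint8_t  status;    // MIDI status byte, e.g. 0x90 = note-on, channel 1
    uint8_t  data1;     // e.g. note number
    uint8_t  data2;     // e.g. velocity
};

// SPSC lock-free FIFO: push() from the scheduler thread, pop() from the
// audio callback. One slot is left empty to distinguish full from empty.
template <size_t Capacity>
class MidiEventFifo {
public:
    bool push(const ScheduledMidiEvent &e) {
        const size_t w = write_.load(std::memory_order_relaxed);
        const size_t next = (w + 1) % Capacity;
        if (next == read_.load(std::memory_order_acquire))
            return false;                       // full -- caller retries later
        slots_[w] = e;
        write_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(ScheduledMidiEvent &out) {
        const size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                       // empty
        out = slots_[r];
        read_.store((r + 1) % Capacity, std::memory_order_release);
        return true;
    }
private:
    ScheduledMidiEvent slots_[Capacity];
    std::atomic<size_t> write_{0};
    std::atomic<size_t> read_{0};
};

MidiEventFifo<1024> gFifo;   // shared by the scheduler thread and the callback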
You will need to schedule events in advance. If you want sample-accurate
scheduling, you'll need to schedule at least one buffer period in
advance (probably one buffer period plus some margin). Equivalently, you
can introduce at least one buffer period's worth of delay into the
timestamps the audio callback uses when scheduling events.
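Continuing the sketch above, the scheduler side might look like this.
The lead time is the important part: it has to cover at least one buffer
period (23 ms for 1024 frames at 44.1 kHz) plus some margin.
NanosToHostTicks and ScheduleNoteOn are my own names, not system calls:

#include <mach/mach_time.h>

static uint64_t NanosToHostTicks(uint64_t nanos) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    return nanos * tb.denom / tb.numer;   // ticks = ns * denom / numer
}

// Called from your (non-audio) scheduler thread. leadTimeMs must be at
// least one buffer period plus margin, or the event can miss its callback.
void ScheduleNoteOn(uint8_t note, uint8_t velocity, double leadTimeMs) {
    ScheduledMidiEvent e;
    e.hostTime = mach_absolute_time() +
                 NanosToHostTicks((uint64_t)(leadTimeMs * 1.0e6));
    e.status = 0x90;   // note-on, channel 1
    e.data1  = note;
    e.data2  = velocity;
    gFifo.push(e);     // handling of a full queue omitted for brevity
}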
At the start of each audio callback, dequeue all events and put them
into a priority queue ordered by timestamp (the priority queue is used
only by the audio callback). Then process all events in the priority
queue that fall within the current buffer, by translating their
timestamps to sample offsets within the buffer and calling
MusicDeviceMIDIEvent. This will give you sample-accurate scheduling. You
have to give some thought to timestamp offsets to get this to work
correctly without adding latency or introducing the kind of jitter
you're seeing right now.
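Again just a sketch, continuing from the FIFO above: gSamplerUnit stands
in for your AUSampler instance, and DispatchPendingMidi would be called at
the top of each render cycle. Note that std::priority_queue allocates as
it grows; in real code you'd pre-reserve or use a fixed-capacity heap so
the audio thread never allocates:

#include <AudioToolbox/AudioToolbox.h>
#include <mach/mach_time.h>
#include <queue>
#include <vector>

extern AudioUnit gSamplerUnit;   // your AUSampler instance, created elsewhere

static double HostTicksToSeconds(uint64_t ticks) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    return (double)ticks * tb.numer / tb.denom * 1.0e-9;
}

struct EarliestFirst {
    bool operator()(const ScheduledMidiEvent &a, const ScheduledMidiEvent &b) const {
        return a.hostTime > b.hostTime;   // min-heap ordered by timestamp
    }
};

// Touched only by the audio thread, so no locking is required.
static std::priority_queue<ScheduledMidiEvent,
                           std::vector<ScheduledMidiEvent>,
                           EarliestFirst> gPending;

void DispatchPendingMidi(const AudioTimeStamp *inTimeStamp,
                         UInt32 inNumberFrames, double sampleRate)
{
    // 1. Drain everything the scheduler has queued into the priority queue.
    ScheduledMidiEvent e;
    while (gFifo.pop(e))
        gPending.push(e);

    // 2. Dispatch every event whose timestamp falls within this buffer.
    const uint64_t bufferStart = inTimeStamp->mHostTime;  // output time of frame 0
    const double bufferSeconds = inNumberFrames / sampleRate;

    while (!gPending.empty()) {
        const ScheduledMidiEvent &next = gPending.top();
        UInt32 offsetFrames = 0;                     // late events play immediately
        if (next.hostTime > bufferStart) {
            double secs = HostTicksToSeconds(next.hostTime - bufferStart);
            if (secs >= bufferSeconds)
                break;                               // belongs to a later buffer
            offsetFrames = (UInt32)(secs * sampleRate);
        }
        MusicDeviceMIDIEvent(gSamplerUnit, next.status, next.data1, next.data2,
                             offsetFrames);
        gPending.pop();
    }
}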
Core Audio has routines to give you the output time of the current
callback buffer in terms of system timestamps. You can use these, in
combination with the sample rate, to convert system timestamps into
sample timestamps. Alternatively, since the periodicity of CA callbacks
is quite good, you might be able to get away with just taking a system
timestamp at the start of each audio callback and using that as your
time reference.
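To answer the AudioUnitAddRenderNotify question: yes, that is one
reasonable place to hook this in (my assumption, not the only way).
Register a render notify callback and run the dispatch in the pre-render
phase, using the callback's own AudioTimeStamp as the time reference:

#include <AudioToolbox/AudioToolbox.h>

void DispatchPendingMidi(const AudioTimeStamp *inTimeStamp,
                         UInt32 inNumberFrames, double sampleRate);

static OSStatus MidiDispatchNotify(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        // 44100.0 is a stand-in; use the sample rate you configured the unit with.
        DispatchPendingMidi(inTimeStamp, inNumberFrames, 44100.0);
    }
    return noErr;
}

// Installed once, after the graph is set up:
//     AudioUnitAddRenderNotify(gSamplerUnit, MidiDispatchNotify, NULL);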
The above method is the basis for ensuring jitter-free MIDI playout for
any scheduling performed outside the audio callback, including, for
example, playing incoming real-time MIDI through a synth in an audio
callback. It will work irrespective of the audio buffer period.
As a side note: if you can do all your scheduling inside the callback,
using the accumulated audio sample count as your time reference, then
life is much simpler. I appreciate that that isn't always possible.
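For example, a trivial step sequencer that lives entirely inside the
render callback (again reusing the hypothetical gSamplerUnit) needs
nothing more than a running frame counter:

#include <AudioToolbox/AudioToolbox.h>

extern AudioUnit gSamplerUnit;

static UInt64 gSampleCounter = 0;    // frames rendered so far
static UInt64 gNextNoteSample = 0;   // absolute frame time of the next note

void RenderStepSequencer(UInt32 inNumberFrames, double sampleRate)
{
    const UInt64 samplesPerStep = (UInt64)(0.120 * sampleRate);  // 120 ms steps
    const UInt64 bufferEnd = gSampleCounter + inNumberFrames;

    while (gNextNoteSample < bufferEnd) {
        UInt32 offset = (UInt32)(gNextNoteSample - gSampleCounter);
        MusicDeviceMIDIEvent(gSamplerUnit, 0x90, 60, 100, offset);  // middle C on
        gNextNoteSample += samplesPerStep;
    }
    gSampleCounter = bufferEnd;
}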
Cheers,
Ross.