Re: Has nobody used CoreAudio Clock?
- Subject: Re: Has nobody used CoreAudio Clock?
- From: Brian Willoughby <email@hidden>
- Date: Tue, 03 May 2011 15:22:49 -0700
On May 3, 2011, at 14:49, Paul Davis wrote:
On Tue, May 3, 2011 at 5:32 PM, Brian Willoughby <email@hidden> wrote:

What is particularly important to take note of is that some MIDI interfaces have their own clocking. CoreMIDI is able, via the proper CoreMIDI driver, to synchronize with the MIDI hardware clock to deliver MIDI data in advance of when it should be transmitted.
In my experience, these designs are really awesome when used with a
conventional timeline sequencer, and much less awesome to work with if
you are implementing a pattern sequencer, where the MIDI to be
delivered in the next N msec may have been altered in the last N msec.
Given the level of interest in pattern sequencers these days, this is
something to be considered. It's not a clear choice - it's possible to
make either MIDI "queuing" design work with either kind of "sequencer",
but it can make life a bit more difficult if the two are particularly
mismatched for each other.
Quite true. There is a significant challenge here, especially if you
approach both timeline and pattern sequencing in the same way.
In fact, the same challenges appear when dealing with pure audio
"just in time" looping arrangements as opposed to tape-like timeline
recording.
The solution to both is to avoid random jitter or latency, and
instead build in a fixed latency in the reaction time. The human
brain can adjust to a constant, accurate latency, and the musician
will unconsciously alter the performance so that there is no problem.
Where you fail is by trying to react "instantly" to new inputs. If
your software is coded to immediately generate new audio or MIDI,
then the system will stagger and stumble, and basically not be very
useful.
Instead, establish a highly accurate time stamp on the incoming
events. For incoming MIDI, CoreMIDI will already have provided
this. For other types of input devices, you'll have to do the best
you can, but at least CoreAudio offers a highly accurate time base
that you can use. Once you have established the accurate time stamp
on the input event, add a constant latency to that value and schedule
your output event based on that fixed offset.
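
A rough sketch in C of that timestamp-plus-offset step, inside a CoreMIDI
read callback. The 100 ms constant and the "hand it to a scheduling queue"
step are only illustrative assumptions, not anything required by the APIs:

#include <CoreMIDI/CoreMIDI.h>
#include <CoreAudio/HostTime.h>

/* Illustrative fixed system latency: 100 ms, expressed in nanoseconds. */
static const UInt64 kFixedLatencyNanos = 100ULL * 1000000ULL;

/* CoreMIDI input callback: each incoming packet already carries an
   accurate host-time stamp in pkt->timeStamp. */
static void MyMIDIReadProc(const MIDIPacketList *pktlist,
                           void *readProcRefCon, void *srcConnRefCon)
{
    const MIDIPacket *pkt = &pktlist->packet[0];
    for (UInt32 i = 0; i < pktlist->numPackets; i++) {
        /* Fall back to "now" only if the driver supplied no stamp. */
        MIDITimeStamp when = pkt->timeStamp ? pkt->timeStamp
                                            : AudioGetCurrentHostTime();

        /* Schedule any reaction at a constant offset from the input
           stamp, never "as soon as possible". */
        MIDITimeStamp outputTime =
            when + AudioConvertNanosToHostTime(kFixedLatencyNanos);

        /* Hand (outputTime, pkt->data, pkt->length) to the application's
           own scheduling queue here. */
        (void)outputTime;

        pkt = MIDIPacketNext(pkt);
    }
}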
For example, if you set 100 ms as your system latency, and there is MIDI
data coming in for your pattern sequencer, then be sure to alter the
output data 100 ms later, and not any earlier. If you try to react
immediately, your software will end up with random delays depending upon
when the event occurs in relation to the various loops, buffers,
interrupts, or other scheduling delays in your overall OS X system. So,
in this example, any output data between "now" and 100 ms in the future
will still be sent based upon "old" information, but at a particular
offset the new information will take effect.
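
To make the 100 ms example concrete, here is a sketch of stamping an
outgoing three-byte message at that fixed offset and letting CoreMIDI
transmit it at the right moment. The gOutputPort and gDestination names
are assumptions - they would have been created elsewhere with
MIDIOutputPortCreate and MIDIGetDestination:

#include <CoreMIDI/CoreMIDI.h>
#include <CoreAudio/HostTime.h>

/* Assumed to exist: an output port and destination created at startup. */
extern MIDIPortRef     gOutputPort;
extern MIDIEndpointRef gDestination;

/* Send a three-byte MIDI message stamped a fixed 100 ms after the
   input event that triggered it, rather than "immediately". */
static void SendWithFixedLatency(MIDITimeStamp inputHostTime,
                                 Byte status, Byte data1, Byte data2)
{
    const UInt64 kLatencyNanos = 100ULL * 1000000ULL;  /* 100 ms */
    MIDITimeStamp when =
        inputHostTime + AudioConvertNanosToHostTime(kLatencyNanos);

    Byte buffer[64];
    MIDIPacketList *pktlist = (MIDIPacketList *)buffer;
    MIDIPacket *pkt = MIDIPacketListInit(pktlist);

    Byte msg[3] = { status, data1, data2 };
    pkt = MIDIPacketListAdd(pktlist, sizeof(buffer), pkt, when,
                            sizeof(msg), msg);

    /* CoreMIDI (and a capable driver) holds the packet and transmits
       it at 'when', so jitter in this code path does not matter. */
    if (pkt != NULL)
        MIDISend(gOutputPort, gDestination, pktlist);
}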
A big challenge here is that if you're combining MIDI, audio, and
other physical devices in the same setup, then you might have to set
a system latency that is a bit longer than the longest device delay
you have. For example, if you are using CoreMIDI by scheduling 10 ms
into the future, then you'll need a system latency of perhaps 12 ms to
20 ms in order to give your software some decision time before it has
to commit to the output data. In other words, 10 ms before output,
CoreMIDI is already committed to the data, so your software needs to
change its mind sooner than that.
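
Using only the illustrative numbers above (a 10 ms CoreMIDI lead and,
say, a 15 ms system latency), the decision deadline works out as a
simple subtraction; the helper names here are hypothetical:

#include <CoreAudio/HostTime.h>
#include <stdbool.h>

/* Illustrative numbers: packets are handed to CoreMIDI 10 ms ahead of
   their timestamps; with a 15 ms system latency, that leaves roughly
   5 ms of decision time per event. */
static const UInt64 kDeviceLeadNanos = 10ULL * 1000000ULL;   /* 10 ms */

/* Latest host time at which the data for 'outputHostTime' may still
   change; after this, the packet must already be with CoreMIDI. */
static UInt64 CommitDeadline(UInt64 outputHostTime)
{
    return outputHostTime - AudioConvertNanosToHostTime(kDeviceLeadNanos);
}

static bool CanStillChangeMind(UInt64 outputHostTime)
{
    return AudioGetCurrentHostTime() < CommitDeadline(outputHostTime);
}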
Brian Willoughby
Sound Consulting