Re: Stuck MIDI notes
- Subject: Re: Stuck MIDI notes
- From: Philippe Wicker <email@hidden>
- Date: Thu, 31 Oct 2002 00:32:02 +0100
On Wednesday, October 30, 2002, at 10:00 PM, Robert Grant wrote:
Can I call MusicDeviceMIDIEvent() from the MIDI receive proc thread even though the MusicDevice was created in the main UI thread?
The thread the MusicDevice was created on should not matter. In my opinion the
real question is how the MusicDevice handles the MIDI event. BTW, which
music device(s) are you using? Did you measure the time spent in
a call to MusicDeviceMIDIEvent() by reading the host time before and
after the call with AudioGetCurrentHostTime()? That would be very
informative data.
You say that some note-ons are not paired with note-offs. Did you lose
those MIDI events at the input of your MIDI read proc (packet lists
missing), or were they lost in the communication with the AU? I don't
believe that the packet list passed to the MIDI read proc may be
modified by the MIDI Server while the read proc is still working on it;
that would be a very poor design. I would say that a whole packet list
may be lost (i.e. not passed by the MIDI Server) should an overrun
condition occur.
In my experiments, I observed that even under heavy load
(nearly 100% CPU), MIDI events were correctly delivered to
an endpoint (though sometimes with a high latency). Under the same
conditions, I observed that a commercial soft synth (no, I won't
tell you which one :-) ) was losing notes from time to time, while a
simple endpoint running in parallel and receiving the same events did
not lose even one (I recorded every received event in a buffer,
kept a counter per note on each channel, incremented on note-on and
decremented on note-off, and finally checked that all counters were 0
at the end). I am quite confident in the reliability of the MIDI Server.
But here we have to deal with an application that mixes MIDI and audio.
MIDI events are acquired in the MIDI read proc, which runs in the
context of a dedicated thread. Audio buffers are handled in the context
of another dedicated thread and consumed through calls to an
IOProc callback. The audio model is a "pull" model; calling
MusicDeviceMIDIEvent(), on the other hand, looks more like a "push"
model. Moreover, events on the MIDI side and the audio side are totally
asynchronous. The MusicDevice should be designed to reconcile these
asynchronous and somewhat contradictory working modes: the time spent
in a MusicDeviceMIDIEvent() call should be as short as possible, and
the MIDI and audio "events" should be decoupled.
One last potential source of problems (this is more a question to
qualified guys than an assertion) is that both the MIDI and audio threads
are high-priority threads. So if one of them is running, it will keep
the CPU until it willingly yields, or is preempted by a
thread of higher (possibly equal, depending on the scheduling policy)
priority. So if both the MIDI and audio threads are eligible at the same
time and have the same priority, they will get the CPU in turns,
in periods of 10 milliseconds (the assumed system tick). This would
induce a 10 ms worst-case latency penalty on the MIDI side, but more
seriously some glitches on the audio side if the audio buffer is
small (a 128-frame buffer at 44.1 kHz is equivalent to a 2.9 ms time
slice).
Maybe Apple specialists could enlighten us on these points?
Best regards,
Philippe Wicker
email@hidden
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.