Re: CoreMIDI question
- Subject: Re: CoreMIDI question
- From: Kurt Revis <email@hidden>
- Date: Mon, 29 Apr 2002 19:30:27 -0700
On Monday, April 29, 2002, at 06:29 PM, Doug Wyatt wrote:
Well, virtual sources behave exactly like driver-owned sources from the
perspective of MIDIReceived's implementation, which means that packets
get passed on to the client immediately, and it's the client's job to
interpret the timestamps correctly.
Right, that's how I understood it. The problems with the current setup
are:
1) It's difficult for the application with the virtual source to send
its events (via MIDIReceived()) right on time, due to the usual
preemptive scheduling issues (although using a time-constraint thread
may help a lot).
CoreMIDI already has a very good scheduler to do this, and it seems a
shame to have to reimplement it.
2) Generally, clients haven't had to do much interpretation of
timestamps of input events. They can assume that when an event comes in,
it has already happened, and the timestamp indicates when it happened
(or as close to that as possible given hardware limitations).
(It sounds like this will change in Jaguar for virtual destinations, for
apps which set a schedule-ahead property on the virtual destination, but
that's OK since the app will then be expecting to get events with
timestamps in the future.)
However, it seems unrealistic to me to expect *all* client apps to look
at the timestamps of input events, notice the ones which are in the
future, and stick the future events in a queue to be handled later.
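To make concrete what that would ask of every client, here's a rough sketch in plain C (the structures and names are mine, not CoreMIDI's) of the bookkeeping each app would need if future-stamped events were delivered immediately: handle anything stamped at or before "now" right away, and park the rest in a queue to be replayed later.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch, not the CoreMIDI API: the per-client bookkeeping
 * needed if a virtual source's packets arrive with future timestamps. */

typedef struct {
    uint64_t timestamp;   /* host time of the event */
    uint8_t  status;      /* e.g. 0xF8 for a MIDI clock tick */
} Event;

#define MAX_DEFERRED 64

typedef struct {
    Event deferred[MAX_DEFERRED];
    int   count;
} FutureQueue;

/* Returns 1 if the event was handled now, 0 if it was deferred. */
int dispatch_event(const Event *ev, uint64_t now, FutureQueue *q)
{
    if (ev->timestamp <= now) {
        /* Already happened (or is due): deliver to the app right away. */
        printf("handling event 0x%02X stamped %llu\n",
               ev->status, (unsigned long long)ev->timestamp);
        return 1;
    }
    if (q->count < MAX_DEFERRED)
        q->deferred[q->count++] = *ev;  /* replay once now >= timestamp */
    return 0;
}

/* Called periodically: hand over any deferred events whose time arrived. */
int flush_due(FutureQueue *q, uint64_t now)
{
    int handled = 0;
    for (int i = 0; i < q->count; ) {
        if (q->deferred[i].timestamp <= now) {
            dispatch_event(&q->deferred[i], now, q);
            q->deferred[i] = q->deferred[--q->count];  /* swap-remove */
            handled++;
        } else {
            i++;
        }
    }
    return handled;
}
```

Every client would also need some timer or polling mechanism just to call the flush routine on time, which is exactly the scheduling problem all over again.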
It would be bad to defer delivery of packets with future timestamps and
assume that the client won't care to receive them until the timestamped
time.
Can you explain more why this would be bad? That sounds exactly like
what the CoreMIDI scheduler does for outgoing events sent by
MIDISend(). (Taking into account any schedule-ahead amounts, of course.)
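For reference, the behavior I mean is roughly the following, a portable sketch in plain C (the function names are mine, and I'm using C11 wall-clock nanoseconds in place of host time): hold each event until its timestamp, then emit it, which is what an app with a virtual source has to reimplement by hand today before calling MIDIReceived().

```c
#include <stdint.h>
#include <time.h>

/* Current time in nanoseconds (stand-in for CoreMIDI host time). */
static uint64_t now_ns(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Block until `when` (ns), then return the actual emission time --
 * standing in for the moment the app would finally call MIDIReceived().
 * A real app would run this on a time-constraint thread to keep the
 * wakeup from being preempted. */
uint64_t send_at(uint64_t when)
{
    uint64_t t = now_ns();
    if (when > t) {
        uint64_t delta = when - t;
        struct timespec req = {
            .tv_sec  = (time_t)(delta / 1000000000ull),
            .tv_nsec = (long)(delta % 1000000000ull),
        };
        nanosleep(&req, NULL);  /* sleeps at least `delta` ns */
    }
    return now_ns();
}
```

This is precisely the loop I'd rather not write: CoreMIDI's scheduler already does it, better, for anything that goes through MIDISend().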
If you could please give an example of how you'd like to use this, it
would help me think about how to address the issue.
Basically, it comes down to a UI issue. Let's say I have an app which
generates a MIDI clock, for the use of other applications. There are two
possibilities for hooking together the two apps.
1) The Clock app just runs by itself, with no UI for setting where the
clock events go. Other apps can select it as a source, as desired.
2) The Clock app provides a list of destinations to send the clock to.
No UI is required in the other apps.
Obviously in case 1 the Clock app is using a virtual source, and in case
2 it's using the usual output port / MIDISend setup.
The problem I have, right now, is that I am not really sure which of
these UIs is preferred. (I could imagine either case being better,
depending on the exact application.) Apple hasn't really given any
guidance in this regard yet, as far as I know.
So in the absence of any prevailing direction, I'd like to make my app (or
at least its underlying architecture) as flexible as possible. However,
because the lack of scheduling makes it harder to implement a virtual
source, this bubbles up to the UI level, and it seems that case 2 is
preferred. Is that the message I should be taking away, reading between
the lines?
Philosophically it would be nice to have a symmetrical system, in which
it is just as easy to implement a virtual source as a virtual
destination. I am still not sure if you guys are making the system
asymmetrical intentionally, or if it's just falling out that way because
of other design decisions. I can live with it either way -- I'm sure you
have good reasons for how you implement it -- but it would really be
helpful if you could explicitly say one way or the other.
If I'm not making any sense here, or no one cares about this but me,
feel free to say so!
--
Kurt Revis
email@hidden
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.