
Re: app structure issues


  • Subject: Re: app structure issues
  • From: Philippe Wicker <email@hidden>
  • Date: Sun, 6 Oct 2002 18:40:37 +0200

On Friday, October 4, 2002, at 06:33 PM, Will Benton wrote:

Folks--

I'm getting used to the callback-based model for audio processing and was hoping to implement a simple pattern-based softsynth as a proof of concept. However, I'm having trouble coming up with a good way to structure my application in order to allow for control changes in an efficient way, without putting too much logic in the callback function.


A MusicDevice AU has to manage two kinds of "external events": first, calls to the IOProc callback, and second, calls to any of the functions of the MIDI API. Both categories of events are asynchronous. The IOProc is called back within a high-priority thread, at a rate that depends on the audio buffer size and the sample rate (for instance, 512-frame buffers at 44.1 kHz mean roughly 86 callbacks per second, one every ~11.6 ms). MIDI API functions are called by external code using your MusicDevice AU; in a real situation this code will most of the time execute as the consequence of a MIDIProc callback, and the MIDIProc, too, is called from a high-priority thread. The IOProc and MIDIProc threads are distinct: when a note-on, a pitch bend, a volume change, or any other MIDI command is passed to the MusicDevice AU, it happens within the context of a different thread than the one computing the audio buffers in the IOProc. You therefore have to do some synchronization work between the two threads.
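To make the shape of the two entry points concrete, here is a minimal sketch. The names (MyMIDIProc, MyRenderProc, SynthShared, PushEvent/PopEvent) and the simplified signatures are mine, not the actual CoreMIDI/CoreAudio ones; a non-blocking FIFO that could back PushEvent/PopEvent is sketched further down.

#include <cstdint>

// Sketch only: simplified stand-ins for the real MIDIProc and IOProc
// callbacks. Each runs on its own high-priority thread, so the only
// state they share is a non-blocking event queue.
struct MidiEvent { uint8_t status, data1, data2; };

struct SynthShared;                        // owns the event FIFO
bool PushEvent(SynthShared*, MidiEvent);   // never blocks
bool PopEvent(SynthShared*, MidiEvent*);   // never blocks

// MIDI thread: package each incoming command and hand it off; never
// touch the voice data that the audio thread owns.
void MyMIDIProc(const MidiEvent* evts, int n, SynthShared* s) {
    for (int i = 0; i < n; ++i)
        PushEvent(s, evts[i]);             // on overflow: drop, never wait
}

// Audio thread: drain pending events, then render one buffer. Called
// roughly sampleRate / frameCount times per second.
void MyRenderProc(float* out, int frames, SynthShared* s) {
    MidiEvent e;
    while (PopEvent(s, &e)) { /* update the voice list */ }
    for (int i = 0; i < frames; ++i) out[i] = 0.0f;   // ...then mix voices
}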

I agree with Urs Heckmann when he recommends that you not dynamically allocate objects (memory, generally speaking) within the context of high-priority code. One reason is that the algorithms used for dynamic allocation are not deterministic (at least on a general-purpose, non-real-time OS); in other words, the time it takes to allocate a chunk of memory cannot be bounded. It may take a couple of microseconds or a couple of milliseconds depending on how fragmented the memory is. Another reason is that the allocation process works on a resource, the memory, which is necessarily shared: access to this resource has to be locked for some amount of time while an allocation is running. You may then encounter situations where a low-priority thread executing a "malloc" blocks a higher-priority thread, a classic priority inversion. (BTW, by "allocation process" I mean "execution of the allocation code", not a UNIX process.)
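One common way around this (my sketch, not something from the original discussion) is to allocate everything at startup and recycle it afterwards: for example, a fixed pool of voice structures threaded onto a free list, so voices can be grabbed and returned in constant time with no malloc on the real-time path.

#include <cstddef>

// Sketch: a voice pool allocated once at startup. acquire()/release()
// are O(1) pointer moves, so no malloc/free ever runs on the audio
// thread. Assumes a single thread (the audio thread) uses the pool.
struct Voice {
    bool   active = false;
    double phase = 0.0, increment = 0.0;
    Voice* next = nullptr;                 // free-list link
};

class VoicePool {
public:
    explicit VoicePool(size_t n) : storage(new Voice[n]) {  // startup only
        for (size_t i = 0; i + 1 < n; ++i)
            storage[i].next = &storage[i + 1];
        freeList = storage;
    }
    ~VoicePool() { delete[] storage; }

    Voice* acquire() {                     // safe in real-time code
        Voice* v = freeList;
        if (v) { freeList = v->next; v->active = true; }
        return v;                          // nullptr when all voices are busy
    }
    void release(Voice* v) {               // safe in real-time code
        v->active = false;
        v->next   = freeList;
        freeList  = v;
    }
private:
    Voice* storage;
    Voice* freeList = nullptr;
};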

That being said, I suggest you distinguish three families of MIDI events (a short classification sketch in code follows the list):

1. "Hard real-time events": events to which you must respond as quickly as possible. In this category I would put note-on and note-off. You have to handle these events as fast as you can, because they affect latency in a way that is very noticeable to a musician.

2. "Not-so-hard real-time events": I am thinking of pitch bend, volume, or pan commands. A MIDI keyboard samples these controls about 50 to 100 times per second (every 10 to 20 milliseconds), so they tolerate slightly more delay than notes do.

3. "Soft real-time events": I am primarily thinking of program changes, bank loads, and commands like that. Executing a bank load may imply disk access if not all samples are preloaded in memory (preloading should be done whenever possible). Such a low-priority task should definitely **NOT** be executed in the context of either of the two high-priority threads, which implies a third, dedicated thread for lower-priority tasks.
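One way to encode that classification (my sketch; the status bytes are standard MIDI, the three-way grouping is simply the families described above):

#include <cstdint>

enum class EventClass { HardRealTime, NotSoHard, Soft };

// Sketch: route a MIDI status byte into one of the three families.
// Note on/off must reach the audio thread immediately; controllers
// tolerate a little more delay; program changes can go to the
// background thread.
EventClass Classify(uint8_t status) {
    switch (status & 0xF0) {               // high nibble = command type
        case 0x80:                         // note off
        case 0x90: return EventClass::HardRealTime;   // note on
        case 0xB0:                         // control change (volume, pan, ...)
        case 0xE0: return EventClass::NotSoHard;      // pitch bend
        case 0xC0: return EventClass::Soft;           // program change
        default:   return EventClass::NotSoHard;      // conservative default
    }
}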

So we come to a three-thread architecture, with synchronization needed between the threads:

1. A "MIDI" thread: the work is done in the MIDIProc callback. An optional job here is to filter MIDI events (for instance, ignore commands on MIDI channel 2, ignore some kinds of controllers, ...). The mandatory job is to pass hard real-time MIDI events to the IOProc thread, using a non-blocking messaging algorithm (Kurt Revis, very active on this list, has posted sources that include a non-blocking FIFO; look in the archives for PlayBufferedSoundFile or find his web site). A minimal sketch of such a FIFO follows this item.
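For reference, a minimal single-producer/single-consumer ring buffer in that spirit (my own sketch built on C++ atomics, not Kurt Revis's code): the MIDI thread pushes, the audio thread pops, and neither side ever blocks or allocates.

#include <atomic>
#include <cstddef>

// Sketch: lock-free SPSC FIFO. push() is called only by the producer
// (MIDI thread), pop() only by the consumer (audio thread). Capacity
// must be a power of two so the indices can wrap with a mask.
template <typename T, size_t N>
class SpscFifo {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    bool push(const T& item) {             // producer side
        size_t w = writeIdx.load(std::memory_order_relaxed);
        size_t r = readIdx.load(std::memory_order_acquire);
        if (w - r == N) return false;      // full: drop, never wait
        buffer[w & (N - 1)] = item;
        writeIdx.store(w + 1, std::memory_order_release);
        return true;
    }
    bool pop(T* out) {                     // consumer side
        size_t r = readIdx.load(std::memory_order_relaxed);
        size_t w = writeIdx.load(std::memory_order_acquire);
        if (r == w) return false;          // empty
        *out = buffer[r & (N - 1)];
        readIdx.store(r + 1, std::memory_order_release);
        return true;
    }
private:
    T buffer[N];
    std::atomic<size_t> writeIdx{0}, readIdx{0};
};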

2. An "AUDIO" thread: the work is done in the IOProc callback. When your code executes, you should pop the events passed by the MIDI thread from the FIFO and process them. For instance, assume that your synth can play at most 32 notes (a polyphony of 32). If a note-on occurs on a particular channel (that is, in practice, on a particular (multi-)sample of a sound), you may record that a new sample is active by appending a pointer to some useful information (e.g. original pitch, length, loop locators, ...) to a linked list. This list may contain more than 32 pointers, but only the first 32 will be processed to generate actual sound. When a note-off occurs, the corresponding pointer has to be removed from the list. When this preliminary job is done, you have to compute the audio buffer(s) requested by the IOProc. This is done by walking through the list and computing whatever is necessary (pitch shifting, envelope, etc.). In this architecture, the DSP work is done within the context of the IOProc thread, so it has to be carefully designed to minimize CPU cost. There may be some possibility of doing such work in a lower-priority thread, but I cannot figure out how a good latency could be reached then (not to speak of glitches and "holes" in the resulting sound). If someone has an idea about that, I'd be glad to read it. (A sketch of this voice-list bookkeeping follows this item.)
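Here is that render-side bookkeeping in miniature (simplified on purpose: one sine oscillator per voice, the 32-voice cap from the example above, and an unlinked voice would go back to a preallocated pool):

#include <cmath>
#include <cstdint>

// Sketch: active voices sit in an intrusive linked list; note-on links
// a voice in, note-off unlinks it, and render() mixes only the first
// 32 entries into the output buffer.
struct ActiveVoice {
    uint8_t      note = 0;
    double       phase = 0.0, increment = 0.0;  // 2*pi*freq/sampleRate
    ActiveVoice* next = nullptr;
};

struct VoiceList {
    ActiveVoice* head = nullptr;

    void noteOn(ActiveVoice* v) { v->next = head; head = v; }

    void noteOff(uint8_t note) {
        for (ActiveVoice** p = &head; *p; p = &(*p)->next)
            if ((*p)->note == note) {
                *p = (*p)->next;           // unlink; return it to the pool here
                return;
            }
    }

    void render(float* out, int frames) {
        for (int i = 0; i < frames; ++i) out[i] = 0.0f;
        int n = 0;
        for (ActiveVoice* v = head; v && n < 32; v = v->next, ++n)
            for (int i = 0; i < frames; ++i) {
                out[i] += 0.1f * static_cast<float>(std::sin(v->phase));
                v->phase += v->increment;
            }
    }
};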

3. A "background" thread: probably the main thread of your application. It carries out all the other tasks, e.g. loading banks from disk, responding to user actions, and so on.

One thing I am not clear about is the interaction between GUI items and the jobs that have to be done in either of the two high-priority threads. GUI events are Carbon events; as far as I have understood, they are managed within an event loop that executes in the main (normal-priority) thread of the application, so there is a synchronization issue here too. If an action on a GUI item implies that a bunch of data be updated atomically, that data should be made available to the "client" threads only once the whole bunch has been updated. (Maybe a Kurt Revis FIFO could be used here also; an alternative is sketched below.)
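One lock-free option for that atomic handoff (again my sketch, not something proposed in the thread): build the complete parameter set on the main thread, then publish it with a single atomic pointer swap, so the audio thread only ever reads whole, consistent sets.

#include <atomic>

// Sketch: the GUI thread fills in a complete ParamSet, then publishes
// it with one atomic store; the audio thread sees either the old set
// or the new one, never a half-updated mixture. Reclaiming the old set
// safely is deliberately left out of this sketch.
struct ParamSet {
    float volume = 1.0f;
    float pan    = 0.0f;
    float cutoff = 1000.0f;
};

class ParamMailbox {
public:
    void publish(ParamSet* fresh) {        // main (GUI) thread
        current.store(fresh, std::memory_order_release);
    }
    const ParamSet* read() const {         // audio thread, never blocks
        return current.load(std::memory_order_acquire);
    }
private:
    std::atomic<ParamSet*> current{nullptr};
};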

Regards.


Philippe Wicker
email@hidden
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.

References: 
  • app structure issues (From: Will Benton <email@hidden>)
