Thank you, all, for the advice, though I admit I don't understand much of it.
I thought I'd update the list as a courtesy, and also to summarize my own thoughts:
The short story is that I have decided to move back to C-based callbacks, using the old API, for anything that is time sensitive.
It was actually even easier than I had expected to set up and write my class using the new API in Swift (it took an afternoon). This is the way I proceeded:
1) Create and configure (attachNode, connect) a simple graph of AVAudioEngine, AVAudioPlayerNode, and AVAudioMixerNode, then start the player node with .play().
2) Set up a block with dispatch_async so that we don't block the main thread...
3) Start iterating over my audio buffers (floats), repeatedly creating an AVAudioPCMBuffer, copying samples into it via buffer.floatChannelData.memory, and scheduling them to play with player.scheduleBuffer (passing nil for atTime). A sketch of the whole flow follows this list.
The result was initially encouraging: playback started reasonably quickly and was not choppy.
So why am I abandoning this?
Because, as I am guessing some of you were trying to warn me, I can't think of a simple way to control the exact time at which playback starts. I obviously can't be doing this work on the main thread, so I'm stuck with dispatch_async, and the system may not want to execute my blocks as quickly as I'd like.
While the lag before playback begins may be acceptable for the task at hand (< 1 second), I don't want to continue down a road where the lag varies outside my control. The only information I can find on developer.apple.com about real-time scheduling is in their Kernel Programming Guide. Assuming anything in that guide even works with the current version of Swift, it strikes me as obvious that it will be easier to give up, use C, and let the system handle the scheduling for me via callbacks. A sketch of that approach follows.
Maybe this will be useful for someone else. I'm eager to hear further comments if anyone has any.