Thanks Brian. Lots of good advice there!
> You mention Swift, and I'd say that is the language you should avoid when writing an AudioUnit.
Nope, I have no plans on using Swift for that side of things. By coincidence, I believe it was Paul who pointed this out to me on the list last year! So I know now not to do anything mission-critical in Swift. It works great for just about everything else, though, basically anything outside the audio render path.
> That's because some of your variables might be shared across all channels
Thanks for the tip. I think what I need to do will be a good deal simpler. I'm hoping I can send all the settings I need (static information about pan and volume, pointers to my buffers, and information about where the buffers start and end) bundled together in a struct, the way one can with a render callback.
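Roughly what I have in mind is something like this (untested C++ sketch; the struct name and fields are just stand-ins for my own settings):

#include <AudioToolbox/AudioToolbox.h>

// Everything the callback needs, bundled up front so nothing has to be
// looked up or allocated on the render thread.
struct ChannelSettings {
    float        pan;        // static pan position
    float        volume;     // static gain
    const float *samples;    // pointer to pre-loaded sample data
    UInt32       startFrame; // where playback begins in the buffer
    UInt32       endFrame;   // where playback ends
    UInt32       cursor;     // current read position, advanced each render
};

static OSStatus RenderChannel(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    ChannelSettings *ch = static_cast<ChannelSettings *>(inRefCon);
    float *out = static_cast<float *>(ioData->mBuffers[0].mData); // mono here

    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        UInt32 pos = ch->cursor + i;
        out[i] = (pos >= ch->startFrame && pos < ch->endFrame)
               ? ch->samples[pos] * ch->volume   // pan handling omitted
               : 0.0f;
    }
    ch->cursor += inNumberFrames;
    return noErr;
}

// Attached to one input bus of the mixer:
//   AURenderCallbackStruct cb = { RenderChannel, &mySettings };
//   AudioUnitSetProperty(mixer, kAudioUnitProperty_SetRenderCallback,
//                        kAudioUnitScope_Input, bus, &cb, sizeof(cb));

Each bus would get its own struct instance, so nothing is shared across channels.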
> theoretically possible to write an AudioUnit in Objective C, but you'd need to be an expert on the language to avoid non-real-time aspects
Hrm, the Obj-C route appeals to me quite a bit, even if I end up porting it back to C++. For starters, it's a good way to get familiar with Apple's AUBase.cpp class. It also means I can get started straight away, rather than take a day or two to figure out how to wrap C++ classes properly for Swift. Of course, if I can't keep it running long enough to even test, there's no point.
> I suggest that you look for more AudioUnit hosting examples
The basic hosting side isn't as wretched as I had expected. Yesterday I ported Apple's "CocoaAUHost" sample code to Swift. But so far everything I've done streams live to the mixer unit. I'd like to find a simple overview of how to render 3+ discrete channels of audio offline, but I'm not sure there's much sample code out there that deals with something that specific.
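From what I've gathered so far (and I haven't tried this yet, so take it as a sketch), the shape of it is: put a kAudioUnitSubType_GenericOutput unit at the head of the graph instead of the default output, then pull it yourself with AudioUnitRender and a hand-advanced timestamp:

#include <AudioToolbox/AudioToolbox.h>
#include <vector>

// Assumes an already-initialized graph whose head is a GenericOutput unit
// (outputUnit), with the source buses connected upstream of the mixer.
void RenderOffline(AudioUnit outputUnit, UInt32 totalFrames, UInt32 framesPerSlice)
{
    AudioTimeStamp ts = {};
    ts.mFlags      = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;

    // Scratch buffer; the channel count and layout have to match whatever
    // stream format is set on the output unit's bus 0.
    const UInt32 channels = 2;
    std::vector<float> scratch(framesPerSlice * channels);

    AudioBufferList bufList = {};
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = channels;
    bufList.mBuffers[0].mData = scratch.data();

    for (UInt32 done = 0; done < totalFrames; done += framesPerSlice) {
        AudioUnitRenderActionFlags flags = 0;
        bufList.mBuffers[0].mDataByteSize =
            framesPerSlice * channels * sizeof(float);

        // Pull a slice out of the graph instead of waiting for the
        // hardware to ask for one.
        OSStatus err = AudioUnitRender(outputUnit, &flags, &ts,
                                       0, framesPerSlice, &bufList);
        if (err != noErr) break;

        // Hand bufList to ExtAudioFileWrite (or whatever) here.

        ts.mSampleTime += framesPerSlice;
    }
}

Whether that also covers pulling 3+ discrete (un-mixed) channels, or whether each channel needs to be pulled separately, is exactly the part I haven't figured out yet.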