I'm fairly certain that you need an AUGraph set up for offline rendering. Then, users can select a range of audio to apply an effect. As soon as they hit "go" - or whatever UI element is supposed to start the effect rendering - your AUGraph can pull the selected audio through the effect or effects and save the resulting audio samples into your app's audio data.
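To make that concrete, here is a minimal sketch of the offline case in Swift, assuming the AudioToolbox AUGraph API. Apple's GenericOutput unit is the no-hardware output used for offline rendering, and Apple's delay stands in for whatever third-party effect the user picked:

    import AudioToolbox

    var graph: AUGraph?
    NewAUGraph(&graph)

    // GenericOutput: an output unit with no hardware attached, used for offline pulls.
    var outputDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_GenericOutput,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)

    // Stand-in effect; a third-party AU would supply its own description.
    var effectDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: kAudioUnitSubType_Delay,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)

    var outputNode = AUNode()
    var effectNode = AUNode()
    AUGraphAddNode(graph!, &outputDesc, &outputNode)
    AUGraphAddNode(graph!, &effectDesc, &effectNode)

    // effect -> output
    AUGraphConnectNodeInput(graph!, effectNode, 0, outputNode, 0)
    AUGraphOpen(graph!)
    AUGraphInitialize(graph!)

    // Offline, your code drives the pull: fetch the output AudioUnit with
    // AUGraphNodeInfo, then call AudioUnitRender on it in a loop with your own
    // AudioTimeStamp and AudioBufferList, advancing mSampleTime by one buffer
    // each pass and copying the rendered samples back into your app's data.

(Error checking omitted; every one of those calls returns an OSStatus worth inspecting.)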
Nice! I didn't even know there was such a thing as "AUGraph set up for offline rendering." I will be reading up on that, for sure :)
However, you'll also need an AUGraph set up for real-time rendering if you want to allow the user to tweak the third-party effect and hear the results. The output of that path will probably just be sent to the speakers and not stored, at least while the user is only previewing the effect and doesn't intend to save anything until they hear something they like.
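The real-time preview graph would be almost identical to the sketch above; roughly, the only differences are the output unit's subtype and who drives the pull:

    // Same graph, but the output node talks to the hardware:
    //   componentSubType: kAudioUnitSubType_DefaultOutput   // (kAudioUnitSubType_RemoteIO on iOS)
    // Instead of calling AudioUnitRender yourself, you start the graph and the
    // audio hardware pulls samples from it at its own pace:
    AUGraphStart(graph!)
    // ... user auditions the tweaked effect ...
    AUGraphStop(graph!)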
I imagine I can do this if I set up a mixer to pan and attenuate each channel properly. I should probably read up on what all the different settings in that AudioChannelLayout struct mean, unless there's some other way to configure a mixer node.
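For what it's worth, with Apple's MultiChannelMixer you may not need AudioChannelLayout at all for a simple stereo preview; per-channel pan and attenuation are just input-scope parameters. A sketch, assuming mixerUnit is the AudioUnit behind a kAudioUnitSubType_MultiChannelMixer node with one input bus per source track (the pan/gain numbers here are made up):

    import AudioToolbox

    // Example values for 5 source tracks spread across the stereo field.
    let pans:  [AudioUnitParameterValue] = [-1.0, -0.5, 0.0, 0.5, 1.0]  // -1 = hard left, +1 = hard right
    let gains: [AudioUnitParameterValue] = [ 1.0,  1.0, 1.0, 1.0, 1.0]  // linear gain, 0...1

    for bus in 0..<pans.count {
        AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Pan,
                              kAudioUnitScope_Input, UInt32(bus), pans[bus], 0)
        AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Input, UInt32(bus), gains[bus], 0)
    }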
One important concept is that CoreAudio is a pull model. When playing to the speakers for auditioning the tweaks, the audio output hardware controls the timing and requests samples from your application, which uses the AUGraph to provide those samples.
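Concretely, the pull shows up as a render callback with this shape; CoreAudio calls it whenever the next buffer is needed, and you fill ioData with exactly inNumberFrames frames:

    let renderCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp,
                                              inBusNumber, inNumberFrames, ioData in
        // Fill ioData with inNumberFrames frames starting at inTimeStamp.pointee.mSampleTime.
        return noErr
    }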
I should be okay there. Sounds like the same model as the render callbacks for the Output unit.
You are correct that Apple provides a Generator AU that can read from a file,
Ah, sorry if I absent-mindedly used the word "softfile"; that's just the name I use for the class that stores my audio data. It's all in memory, not on disk.
I hope the above distinction between real-time rendering to live audio output hardware versus offline rendering to memory or a file is clear enough for you to start your research.
Thanks, actually that part I've known from the start. What I've been trying to figure out is how to take the bits and pieces of audio from the buffers in my "softfile" class and send them to the 3rd-party AudioUnits as normal buffers (i.e. as equal-length buffers in an ABL, and "just in time" using the pull model, rather than rendering the whole duration at once).
You've mentioned having 3 or more channels on multiple tracks. I'm going to assume that users are listening to a mono or stereo mix down of these tracks while tweaking the effects.
Exactly, the preview audio needs to mix all the channels to stereo.
Are you modifying the original tracks and replacing their audio with processed audio?
Yes. If the original selection is 5 tracks, the preview should be stereo, but the result of the effect should return 5 discrete channels of processed audio.
I used an AUGraph to process a hexaphonic guitar input, keeping 6 channels completely separate while applying the same effect in parallel to each track.
That's basically what I need to do. I might make an exception if the user only selects two tracks, since they'll probably expect a proper stereo effect in that case. Or I might add a "treat files as stereo" tick box. Anyways...
That particular app had no mix down. At the other end of the spectrum, I've also worked on systems that had 8 microphone channels which were all mixed down to 1 mono output for listening, and I used an AUGraph to compress the 8-track recordings while keeping each channel separate.
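A sketch of that "same effect in parallel, channels kept separate" topology, in the style of the earlier graph code (channelCount, mixerNode, and effectDesc here are assumptions carried over from above): one effect instance per channel rather than a single multichannel effect, so nothing gets summed inside the effect itself:

    var effectNodes = [AUNode](repeating: AUNode(), count: channelCount)
    for ch in 0..<channelCount {
        AUGraphAddNode(graph!, &effectDesc, &effectNodes[ch])
        // Preview wiring: each channel's effect lands on its own mixer input bus,
        // which the mixer then pans/attenuates down to stereo. For the offline
        // "print" pass you would pull each effect chain separately instead, so
        // the processed channels stay discrete.
        AUGraphConnectNodeInput(graph!, effectNodes[ch], 0, mixerNode, UInt32(ch))
    }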
Neat, that would be handy for getting ducking/breathing effects, etc. Maybe I should make that tickbox "treat files as discrete" instead of limiting it to stereo.
In your case, you'll need to manage an audio selection. That means some user interface, which might be in Swift,
I just finished that last week. Multiple selections, too, like SublimeText. It took me a whole month though :(
and some sample arrays that hold the audio along with some structures that point to the start and duration or end of the current selection. When the user wants to "listen" or "print" the effect, the AUGraph will have to be able to find the audio data from your app's arrays. You probably don't need to write an AU for this. Either the AUGenerator can handle it, or you can probably just hook in a render callback that will grab the correct audio samples from the selected sample arrays as needed.
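That render-callback route might look roughly like the sketch below, under a couple of assumptions: a hypothetical SelectionSource class holding the selected samples as one Float32 array per channel (the canonical non-interleaved format) plus a read cursor, and the graph/effectNode from the earlier sketch as the destination. The callback copies the next inNumberFrames frames per channel into ioData and pads past the end of the selection with silence:

    import AudioToolbox

    // Hypothetical holder for the selected audio, filled from the "softfile" selection.
    final class SelectionSource {
        var channels: [[Float32]] = []   // one array per channel
        var readIndex: Int = 0           // advances as the graph pulls
    }

    let sourceCallback: AURenderCallback = { inRefCon, _, _, _, inNumberFrames, ioData in
        let source = Unmanaged<SelectionSource>.fromOpaque(inRefCon).takeUnretainedValue()
        guard let abl = ioData else { return noErr }
        let buffers = UnsafeMutableAudioBufferListPointer(abl)
        let frames = Int(inNumberFrames)

        for (ch, buffer) in buffers.enumerated() {
            guard let data = buffer.mData else { continue }
            let out = data.assumingMemoryBound(to: Float32.self)
            let samples = ch < source.channels.count ? source.channels[ch] : []
            for i in 0..<frames {
                let idx = source.readIndex + i
                out[i] = idx < samples.count ? samples[idx] : 0   // silence past the selection
            }
        }
        source.readIndex += frames
        return noErr
    }

    // Attach the callback to the input of the first node in the chain.
    // Keep a strong reference to `source` for as long as the graph can pull.
    let source = SelectionSource()
    var callbackStruct = AURenderCallbackStruct(
        inputProc: sourceCallback,
        inputProcRefCon: Unmanaged.passUnretained(source).toOpaque())
    AUGraphSetNodeInputCallback(graph!, effectNode, 0, &callbackStruct)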
That's the part I'm foggiest on how to do. But you have been very helpful, and I have some ideas now of what to read up on next :)
As for displaying the third party AudioUnit window, you'll only really need that while the user is changing the AU parameters.
I'm good there. That's covered by the Apple "CocoaAUHost" sample code I ported to Swift yesterday. It was relatively painless (by CoreAudio standards).
I think you might be on the right path, so try coding up individual pieces and get them working. Then you can combine all the necessary parts into your application.
Thanks again! CoreAudio list is amazing.