Re: C AudioUnit
- Subject: Re: C AudioUnit
- From: Brian Willoughby <email@hidden>
- Date: Sun, 31 Jan 2016 20:18:17 -0800
On Jan 31, 2016, at 6:44 PM, Charles Constant <email@hidden> wrote:
>> However, you'll also need an AUGraph set up for real-time rendering if you want to allow the user to tweak the third-party effect and hear the results. The results of this will probably just be sent to the speakers and not stored, at least not if the user is just previewing the effect and not intending to save the results before they hear something they like.
>
> I imagine I can do this if I set up a mixer to pan/attenuate each channel properly. I should probably read up on what all the different settings in that AudioChannelLayout struct mean, unless there's some other way to configure a mixer node
I believe it's called AUMixer, and there's also an AU3DMixer. The first one does standard mixing, while the second can also handle time delays and other psychoacoustic processing to simulate 3D audio. If I recall correctly, it's rather confusing the first time, because you have a matrix of volume levels for every possible input-channel-to-output-channel combination. In other words, I think you don't have pan so much as a level for channel 1 to left and channel 1 to right, then channel 2 to left and channel 2 to right, etc. There might be a stereo mixer that handles traditional pan and/or balance, but it's been a while since I looked at the list of mixer AudioUnits available.
The nice thing is that your UI can give the user simple pan controls, while your code handles any necessary translations from a single pan knob to individual channel levels.
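As a rough sketch of that translation (this assumes the matrix mixer and an equal-power pan law; the crosspoint addressing follows Apple's MatrixMixerTest sample as far as I recall, so double-check it against the headers for whichever mixer you actually instantiate):

#include <AudioUnit/AudioUnit.h>
#include <math.h>

// Map one pan knob (-1 = hard left, +1 = hard right) to the two crosspoint
// gains for a single input channel of a matrix mixer, using an equal-power
// pan law.  Crosspoints are addressed in the global scope with the element
// encoding (inputChannel << 16) | outputChannel.
static OSStatus SetPanCrosspoints(AudioUnit matrixMixer, UInt32 inputChannel, Float32 pan)
{
    Float32 angle     = (pan + 1.0f) * (Float32)M_PI_4;   // 0 .. pi/2
    Float32 leftGain  = cosf(angle);
    Float32 rightGain = sinf(angle);

    UInt32 toLeft  = (inputChannel << 16) | 0;             // input -> output channel 0
    UInt32 toRight = (inputChannel << 16) | 1;             // input -> output channel 1

    OSStatus err = AudioUnitSetParameter(matrixMixer, kMatrixMixerParam_Volume,
                                         kAudioUnitScope_Global, toLeft, leftGain, 0);
    if (err == noErr)
        err = AudioUnitSetParameter(matrixMixer, kMatrixMixerParam_Volume,
                                    kAudioUnitScope_Global, toRight, rightGain, 0);
    return err;
}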
>> I hope the above distinction between real-time rendering to live audio output hardware versus offline rendering to memory or a file is clear enough for you to start your research.
>
> Thanks, actually that part I've known from the start. What I've been trying to figure out is how to take the bits and pieces of audio from the buffers in my "softfile" class, and send them to the 3rd party AudioUnits as normal buffers (i.e.: as equal length buffers in an ABL, and "just in time" using the pull model, rather than rendering the whole duration at once)
I'd keep the audio in objects (whether true ObjC objects or plain C structures) and then have "state" variables for the current position. For each buffer, pull operations would take the requested number of samples, starting from the last known position, then update the position by adding the number of samples used. If your position hits the end of the audio buffer, just return silence for the samples that aren't available, and keep the current position at the end of the buffer so you know to continue returning silence in the next render. This technique allows you to have tracks of different lengths and still mix them together into a single timeline.
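Something along these lines might work as a starting point (the TrackState struct and its field names are just placeholders; it assumes a mono, non-interleaved Float32 buffer per track):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Illustrative state for one track: a plain C buffer plus a read position.
typedef struct {
    Float32 *samples;      // the track's audio (mono for simplicity)
    UInt32   sampleCount;  // total samples in the track
    UInt32   position;     // current read offset, advanced each render
} TrackState;

// AURenderCallback that copies the requested samples starting at the current
// position and pads with silence once the track runs out.
static OSStatus TrackRenderCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    TrackState *track = (TrackState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    UInt32 remaining = (track->position < track->sampleCount)
                           ? track->sampleCount - track->position : 0;
    UInt32 toCopy = (remaining < inNumberFrames) ? remaining : inNumberFrames;

    if (toCopy > 0)
        memcpy(out, track->samples + track->position, toCopy * sizeof(Float32));
    if (toCopy < inNumberFrames)
        memset(out + toCopy, 0, (inNumberFrames - toCopy) * sizeof(Float32));

    track->position += toCopy;   // stays clamped at the end of the buffer
    return noErr;
}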
>> You've mentioned having 3 or more channels on multiple tracks. I'm going to assume that users are listening to a mono or stereo mix down of these tracks while tweaking the effects.
>
> Exactly, the preview audio needs to mix all the channels to stereo.
Should be easy enough with AUMixer, as mentioned above.
>> Are you modifying the original tracks and replacing their audio with processed audio?
>
> Yes. If the original selection is 5 tracks, the preview should be stereo, but the result of the effect should return 5 discrete channels of processed audio.
In this case, you have a couple of options. You could simply have a mono AUGraph and then run the offline process 5 times in a row, once per track. The other option is a 5-channel graph without an AUMixer (since you aren't combining channels), where you pull 5 channels of output through effects set up with 5 channels, taken from the original 5 tracks of data. This is a lot like the hexaphonic guitar effects AUGraph that I mentioned.
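For the first option, the offline pass is basically a loop that pulls the processed audio out of the graph yourself. A hedged sketch (it assumes a graph built around a GenericOutput unit, a mono Float32 stream, and a 512-frame slice size; adjust to your actual setup):

#include <AudioToolbox/AudioToolbox.h>

// Pull 'totalFrames' of processed audio from the graph's output unit into a
// caller-supplied buffer by calling AudioUnitRender in a loop with a manually
// advanced timestamp.
static OSStatus RenderTrackOffline(AudioUnit outputUnit,
                                   UInt32 totalFrames,
                                   Float32 *destination)
{
    const UInt32 kSliceFrames = 512;

    AudioTimeStamp ts = {0};
    ts.mFlags = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;

    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = 1;

    for (UInt32 rendered = 0; rendered < totalFrames; ) {
        UInt32 frames = totalFrames - rendered;
        if (frames > kSliceFrames) frames = kSliceFrames;

        abl.mBuffers[0].mDataByteSize = frames * sizeof(Float32);
        abl.mBuffers[0].mData = destination + rendered;

        AudioUnitRenderActionFlags flags = 0;
        OSStatus err = AudioUnitRender(outputUnit, &flags, &ts, 0, frames, &abl);
        if (err != noErr) return err;

        rendered       += frames;
        ts.mSampleTime += frames;
    }
    return noErr;
}

You'd then repeat that once per track, swapping in each track's source data before the pass.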
>> and some sample arrays that hold the audio along with some structures that point to the start and duration or end of the current selection. When the user wants to "listen" or "print" the effect, the AUGraph will have to be able to find the audio data from your app's arrays. You probably don't need to write an AU for this. Either the AUGenerator can handle it, or you can probably just hook in a render callback that will grab the correct audio samples from the selected sample arrays as needed.
>
> That's the part I'm foggiest on how to do. But you have been very helpful, and I have some ideas now of what to read up on next :)
As suggested above, keeping a sample offset position as "current state" for each track should allow you to fill the buffers as requested by the AUGraph pull. You might need some fancy memory copies to handle interleaving multiple tracks into one multi-channel buffer. I seem to recall that AudioUnits went the way of handling everything as one-channel buffers, deprecating the multi-channel options, but I forget. You might even be able to use the AudioConverterFillComplexBuffer() function for this.
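If you end up doing the interleaving by hand rather than through AudioConverterFillComplexBuffer(), the memory copy itself is straightforward. A minimal sketch, assuming separate mono Float32 buffers of equal frame count:

#include <CoreAudio/CoreAudioTypes.h>

// Interleave N one-channel Float32 buffers into one multi-channel buffer,
// frame by frame.  'interleaved' must hold frameCount * channelCount samples.
static void InterleaveTracks(const Float32 *const *trackBuffers,  // channelCount mono buffers
                             UInt32 channelCount,
                             UInt32 frameCount,
                             Float32 *interleaved)
{
    for (UInt32 frame = 0; frame < frameCount; frame++) {
        for (UInt32 ch = 0; ch < channelCount; ch++) {
            interleaved[frame * channelCount + ch] = trackBuffers[ch][frame];
        }
    }
}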
Good luck!
Brian