I’ve been reworking an existing project that uses Core Audio to use AVAudioEngine instead, and I am in need of some assistance.
I am currently forcing AVAudioEngine to render offline, using AudioUnitRender on the output node – the setup is non-standard, but it works.
I am using Swift on macOS, not iOS. It’s been a bit of a roller coaster, but I’m making (slow) progress.
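For context, here is a stripped-down sketch of the offline setup using the stock manual rendering mode (macOS 10.13+); my real code drives AudioUnitRender on the output node directly, so treat this as an approximation:

import AVFoundation

// Sketch: standard offline (manual) rendering; my actual project calls
// AudioUnitRender on the output node instead.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
engine.connect(player, to: engine.mainMixerNode, format: format)

try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 512)
try engine.start()
player.play()   // (a file or buffer needs to be scheduled on the player first)

let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                              frameCapacity: engine.manualRenderingMaximumFrameCount)!
var framesToRender: AVAudioFrameCount = 44100 * 4   // e.g. four seconds of output
while framesToRender > 0 {
    let slice = min(framesToRender, engine.manualRenderingMaximumFrameCount)
    if try engine.renderOffline(slice, to: buffer) != .success { break }
    framesToRender -= slice
    // consume the buffer here (write to a file, inspect samples, ...)
}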
I am seeking direction on how to drive audio source (1) using audio source (2). My specific need is to perform a panning effect that sweeps back and forth at a specific frequency.
I believe I could also perform other audio magic, such as fade-in and fade-out, using the same basic idea. It would be so much better if there were a high-level way to manage this; however, I will go low-level if needed.
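What I’m hoping exists is something at roughly this level. A sketch of the idea, reusing the engine, player, and buffer from the sketch above and assuming the manual-rendering loop is the right place to do it (update the AVAudioMixing pan between small render slices):

import Foundation

// Sketch: sinusoidal auto-pan at panFrequency Hz by nudging the
// AVAudioMixing.pan property between small offline render slices.
// Assumes engine, player, and buffer from the sketch above;
// totalFrames is a placeholder for the real output length.
let renderSampleRate = 44100.0
let panFrequency = 0.5   // one full left-right-left sweep every two seconds
let totalFrames: AVAudioFramePosition = 44100 * 4
var framesRendered: AVAudioFramePosition = 0

while framesRendered < totalFrames {
    let t = Double(framesRendered) / renderSampleRate
    player.pan = Float(sin(2.0 * .pi * panFrequency * t))   // -1 ... +1
    // player.volume could be ramped the same way for fade-in/fade-out
    if try engine.renderOffline(512, to: buffer) != .success { break }
    framesRendered += AVAudioFramePosition(buffer.frameLength)
}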
Here are the ideas that I’ve investigated or attempted:
1. Add an audio unit effect.
2. Add a tap and use its output as a source to make adjustments to the other source.
3. Add a render callback to an existing mixer node.
4. Add an AVAudioIONode?
5. Use an AUAudioUnit to access the buffer list data (the latest idea being investigated).
I looked at Apple’s AudioUnitV3Example and was completely out of my depth; I’m considering writing an audio effect.
Unfortunately, I can’t use a tap to monitor and drive the source: when working offline, the tap doesn’t receive data – I tried. Even if it did, I would need to decrease the buffer size significantly for granularity’s sake.
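Roughly what I tried (a sketch; source stands in for whichever node I tap):

// Sketch of the tap I tried: the block never fires when rendering offline,
// and even online the tap buffers would be too coarse for smooth panning.
let source = engine.mainMixerNode
source.installTap(onBus: 0, bufferSize: 512, format: nil) { buffer, when in
    // measure the level here and adjust the other source's pan/volume
    let firstSample = buffer.floatChannelData?[0][0] ?? 0
    print("tap fired, first sample: \(firstSample)")
}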
I’ve been attempting to access the underlying AudioUnit of one of the mixer nodes, or to see whether it is possible to add or access a render callback – no success yet.
It took me a couple of weeks to get a render callback working on the input node, so I thought I could do the same on a mixer node – that battle is still being fought.
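The most promising handle I’ve found so far is the auAudioUnit property that every AVAudioNode exposes (macOS 10.13+). A sketch of the inspection I’ve been doing; I haven’t found a render-callback hook through it yet:

// Sketch: every AVAudioNode wraps an AUAudioUnit (macOS 10.13+).
// I can reach the mixer's unit this way, but so far no callback hook.
let mixerAU: AUAudioUnit = engine.mainMixerNode.auAudioUnit
print("inputs: \(mixerAU.inputBusses.count), outputs: \(mixerAU.outputBusses.count)")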
In researching AVAudioIONode, I haven’t determined how to get access to the input stream.
Here is the latest idea being investigated: AUAudioUnit.
I looked into AUAudioUnit and have been unable to determine how to create a node that accepts input.
I’m not getting the input, but do I need it, since I have access to the output? Is access to the input provider necessary?
Q1: Should the componentType be kAudioUnitType_Mixer to get input and output?
Q2: Can this be non-interleaved on macOS?
Q3: How do you get the input provider working? Is it necessary, given that I have access to the output provider?
Q4: How do you create a component and insert it between nodes – say, between a mixer node and a distortion node, not just at the end?
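To illustrate Q4, this is the kind of splice I mean (a sketch; mixer, distortion, and format are placeholders for existing nodes and the graph format in my project):

// Sketch of what "between" means in Q4: wrap the component in an AVAudioUnit
// and splice it into the chain. mixer, distortion, and format are placeholders.
AVAudioUnit.instantiate(with: audioComponentDescription, options: []) { avUnit, error in
    guard let avUnit = avUnit else {
        print("instantiate failed: \(String(describing: error))")
        return
    }
    engine.attach(avUnit)
    engine.disconnectNodeOutput(mixer)
    engine.connect(mixer, to: avUnit, format: format)
    engine.connect(avUnit, to: distortion, format: format)
}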
Thank you,
W.
do {
    let audioComponentDescription = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        // Q1: should this be kAudioUnitType_Mixer to get input and output?
        componentSubType: kAudioUnitSubType_HALOutput,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    if auAudioUnit == nil {
        auAudioUnit = try AUAudioUnit(componentDescription: audioComponentDescription)

        let upstreamBus = auAudioUnit.inputBusses[0]
        let audioFormat = AVAudioFormat(
            commonFormat: .pcmFormatFloat32,
            sampleRate: Double(sampleRate),
            channels: AVAudioChannelCount(2),
            interleaved: false)
        // Q2: can this be non-interleaved on macOS?
        if let audioFormat = audioFormat {
            try upstreamBus.setFormat(audioFormat)   // apply the format to the upstream bus
        }

        auAudioUnit.isInputEnabled = true
        auAudioUnit.isOutputEnabled = true

        auAudioUnit.inputHandler = { actionFlags, timestamp, frameCount, inputBusNumber in
            print("handling input, calling method to fill audioBufferList")
            // Q3: not working, but is this necessary, since I have access to the output?
        }

        auAudioUnit.outputProvider = { (actionFlags, timestamp, frameCount, inputBusNumber, inputData) -> AUAudioUnitStatus in
            print("handling output, calling method to fill audioBufferList")
            return noErr
        }
    }
} catch {
    print("AUAudioUnit setup failed: \(error)")
}
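And in case it clarifies Q2/Q3, this is how I picture filling the buffer list inside the output provider (a sketch; assumes non-interleaved Float32 stereo):

// Sketch: filling the pulled buffer list, assuming non-interleaved Float32.
auAudioUnit.outputProvider = { (actionFlags, timestamp, frameCount, busNumber, rawBufferList) -> AUAudioUnitStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(rawBufferList)
    for buffer in buffers {
        guard let data = buffer.mData?.assumingMemoryBound(to: Float.self) else { continue }
        for frame in 0..<Int(frameCount) {
            data[frame] = 0   // real sample data goes here
        }
    }
    return noErr
}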