Re: How to access the output samples of a multichannel mixer unit
- Subject: Re: How to access the output samples of a multichannel mixer unit
- From: Richard Dobson <email@hidden>
- Date: Wed, 29 Jun 2011 10:24:19 +0100
On 29/06/2011 08:13, Daniel Lam wrote:
Hi Brennon,
I've got an AUGraph set up with a mixer unit and a RemoteIO unit. I have render
callback functions attached to the input buses of the mixer, running fine and
happy, except that there is distortion when multiple input buses supply sound
at the same time.
Yes, I agree that doing things like "saw generator -> low-pass filter ->
envelope generator -> reverb -> remoteIO (output)" in a render callback
shouldn't be too difficult (though not super easy for me :). However, I'm
thinking about more sophisticated soft-synth scenarios, where multiple
sounds need to be created and mixed, each with a different effect applied.
I was hoping that AUGraph might help define a framework for that, but I'm
not so sure any more, now that I know custom audio units are not possible.
What do you think?
Daniel
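
(For reference, a minimal sketch of the setup described above: attaching a
render callback to one input bus of a MultiChannelMixer, and pulling the
per-bus gain down so that several buses sounding at once cannot sum past full
scale, which is one common cause of the distortion mentioned. The names graph,
mixerNode, mixerUnit, busCount, MyRenderCallback and attachBus are illustrative,
not from the original post.)

#include <AudioToolbox/AudioToolbox.h>

static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // Fill ioData with inNumberFrames of audio for this bus here.
    return noErr;
}

static void attachBus(AUGraph graph, AUNode mixerNode, AudioUnit mixerUnit,
                      UInt32 bus, UInt32 busCount, void *userData)
{
    AURenderCallbackStruct cb = { MyRenderCallback, userData };
    AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);

    // Headroom: if every bus can hit full scale, their sum will clip.
    // Scaling each bus by 1/busCount keeps the mixed peak in range.
    AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                          kAudioUnitScope_Input, bus,
                          1.0f / busCount, 0);
}
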
To jump in on this thread: this is surely a wrong use of the CoreAudio
graph paradigm; it was not really designed for that level of complexity
or granularity, any more than you would want to create and drive a
polyphonic synthesiser in Logic by defining one track per voice. And as
you see, for iOS the choice is extremely limited anyway. The idiomatic
solution is indeed to create a custom synthesis engine and drive the
whole thing in the callback. There is a lot of code "out there" that
could be used as a starting point. PD has, IIRC, been ported to iOS, and
the STK toolkit (free to use in commercial apps) is as easy to port as
compiling a static C++ library (leaving out the bits you don't need for
iOS, such as the general cross-platform audio I/O stuff). There will be
a small amount of fiddling if you want to rely on its "RAWWAVES"
mechanism for using sample waveforms stored externally. While nothing
really trumps knowing how to design your own DSP and synth engine, the
STK is pretty comprehensive (it includes modules to handle polyphony via
MIDI, etc.), and much of the time all you have to do is chain calls to
the modules you choose to use.
https://ccrma.stanford.edu/software/stk/
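
(To make the "chain calls to the modules" idea concrete, here is a sketch of
a per-voice chain along the lines Daniel mentioned: saw -> low-pass ->
envelope, with a shared reverb over the mix. BlitSaw, OnePole, ADSR and NRev
are real STK classes; the Voice struct, the voices container and fillBuffer()
are illustrative assumptions. STK defaults to 44100 Hz; use
Stk::setSampleRate() for other rates.)

#include "Stk.h"
#include "BlitSaw.h"
#include "OnePole.h"
#include "ADSR.h"
#include "NRev.h"
#include <vector>

using namespace stk;

struct Voice {
    BlitSaw osc;      // band-limited sawtooth oscillator
    OnePole lowpass;  // simple one-pole low-pass filter
    ADSR    env;      // amplitude envelope

    explicit Voice(StkFloat freq) {
        osc.setFrequency(freq);
        lowpass.setPole(0.9);  // pole closer to 1.0 = darker sound
        env.keyOn();
    }
    StkFloat tick() { return lowpass.tick(osc.tick()) * env.tick(); }
};

static NRev reverb;                 // shared reverb applied to the mix
static std::vector<Voice> voices;   // polyphony: one entry per active note

// Called from the render callback to produce one block of mono floats.
void fillBuffer(float *out, unsigned int nFrames)
{
    for (unsigned int i = 0; i < nFrames; ++i) {
        StkFloat mix = 0.0;
        for (auto &v : voices) mix += v.tick();   // per-voice chains
        out[i] = (float)reverb.tick(mix);         // shared effect
    }
}
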
You have to accept that, natively, the STK uses floats for everything;
just convert the final stream to 16-bit int (or whatever) in the usual way.
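
(The "usual way", spelled out as a sketch; the function name is illustrative.
Clamp first so out-of-range samples clip instead of wrapping around.)

#include <cstdint>

void floatToInt16(const float *in, int16_t *out, unsigned int n)
{
    for (unsigned int i = 0; i < n; ++i) {
        float s = in[i];
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;
        out[i] = (int16_t)(s * 32767.0f);
    }
}
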
Richard Dobson