Re: How to access the output samples of a multichannel mixer unit
- Subject: Re: How to access the output samples of a multichannel mixer unit
- From: Charlie Roberts <email@hidden>
- Date: Wed, 29 Jun 2011 10:29:03 -0700
Hi Richard,
Thanks for verifying that CoreAudio graphs aren't meant to be used this way. I think when new users hear the words "audio graph" we picture models like CLAM, ChucK or Jamoma, which are intended for exactly the level of granularity I described. The AUGraph, at least on iOS, sits at a much higher level than those models. Which is too bad! I would love for a thread-safe way to connect and disconnect custom-written unit generators to be built into iOS.
Along the lines of C++ toolkits for DSP, I'm personally very fond of Gamma (which, like the STK, does not come with a graph). Just disable Gamma's audioIO class (which relies on PortAudio) and its soundfile class (which uses libsndfile) and it works great on iOS.
- Charlie
To jump in on this thread: this is surely the wrong use of the CoreAudio graph paradigm; it was not designed for that level of complexity or granularity, any more than you would want to create and drive a polyphonic synthesiser in Logic by defining one track per voice. And as you have seen, on iOS the choice of units is limited anyway. The idiomatic solution is indeed to create a custom synthesis engine and drive the whole thing from the render callback. There is a lot of code "out there" that could serve as a starting point: PD has, IIRC, been ported to iOS, and the STK toolkit (free to use in commercial apps) is as easy to port as compiling a static C++ library (leaving out the bits you don't need on iOS, such as the general cross-platform audio I/O classes). There will be a small amount of fiddling if you want to rely on its "RAWWAVES" mechanism for sample waveforms stored externally. While nothing really trumps knowing how to design your own DSP and synth engine, the STK is pretty comprehensive (it includes modules to handle polyphony via MIDI, etc.), and much of the time all you have to do is chain calls to the modules you choose to use.
https://ccrma.stanford.edu/software/stk/
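To make the "custom engine driven from the render callback" idea concrete, here is a minimal sketch. SineVoice is a hypothetical unit generator standing in for whatever STK (or hand-rolled) modules you use; on iOS the fill() body is what you would put inside your AURenderCallback, writing into the ioData buffers instead of a plain float array. None of the names below come from CoreAudio or the STK.

```cpp
#include <cmath>
#include <vector>
#include <cstddef>

// Hypothetical unit generator: a naive sine oscillator.
struct SineVoice {
    float phase = 0.0f;
    float freq;
    float sampleRate;
    SineVoice(float f, float sr) : freq(f), sampleRate(sr) {}
    float tick() {
        const float kTwoPi = 6.28318530718f;
        float out = std::sin(phase);
        phase += kTwoPi * freq / sampleRate;
        if (phase > kTwoPi) phase -= kTwoPi;
        return out;
    }
};

// What the body of a render callback would do: for each frame,
// sum every active voice into the output buffer. In a real
// AURenderCallback, `out` would point into ioData->mBuffers.
void fill(std::vector<SineVoice>& voices, float* out, std::size_t frames) {
    for (std::size_t i = 0; i < frames; ++i) {
        float mix = 0.0f;
        for (auto& v : voices) mix += v.tick();
        out[i] = mix / float(voices.size());   // naive normalisation
    }
}
```

The key design point is that all voice allocation and mixing happens in one place, on the audio thread, so there is nothing to connect or disconnect while the graph is running.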
You have to accept that the STK natively uses floats for everything; just convert the final stream to 16-bit ints (or whatever your stream format requires) in the usual way.
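The "usual way" mentioned above is roughly this: clamp each float sample to [-1, 1] and scale it into the 16-bit range. A minimal sketch (the function name is mine, not from the STK):

```cpp
#include <cstdint>
#include <cmath>
#include <algorithm>

// Convert one float sample in [-1, 1] to a 16-bit signed int,
// hard-clipping anything out of range first.
static int16_t floatToInt16(float s) {
    s = std::max(-1.0f, std::min(1.0f, s));  // hard clip
    return (int16_t)lrintf(s * 32767.0f);    // round, don't truncate
}
```

You would apply this per sample when copying the STK's float output into the (typically 16-bit interleaved) AudioBufferList on iOS.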
Richard Dobson
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden