Re: Simple, Intuitive CoreAudio Framework
- Subject: Re: Simple, Intuitive CoreAudio Framework
- From: Tim Hanson <email@hidden>
- Date: Tue, 3 Sep 2002 09:56:02 -0400
Hi
I'm working on pretty much exactly that: a signal-flow-style (and Python-scriptable) approach to sound synthesis. If all goes well, I will be done at the end of this semester, at which point the source will be GPLed and we will sell custom boxes with knobs etc. for intuitively manipulating sound, to offset the development costs. If all does not go well, my employer (Cornell University) will want to charge money for the software and the boxes.
So... just wait... It's going to kick a** and be free, even if I have to leak a few copies... some way of giving back to the community.
And it presently runs (makes noise and does not crash) on Linux, MacOSX,
and Windows.
Good luck in your endeavors; I dig your ideas.
Tim
On Monday, September 2, 2002, at 05:49 PM, Daniel Staudigel wrote:
There is an extreme shortage of cheap, powerful audio software for MacOSX (for all computers?), which causes a lack of home-made songs and pressure to buy the big commercial packages, which I cannot afford. I think that a free, simple, and intuitive audio framework that is a superset of CA would solve this problem.
If anybody wants to help design/implement such a framework, the more
help the better.
My proposed model is an object-oriented "network of nodes", where nodes can be "Inputs", "Outputs", or "Filters". Inputs are audio sources: MIDI, microphones, or sound files. Outputs are speakers, MIDI files, or sound files (QTSS?). Filters are nodes that convert one format to another (e.g. a MIDI stream to a normal stream) and/or transform the MIDI/normal stream. (By "normal" I mean AIFF-type streams.) All streams would be mono-channel and unidirectional, so to make a simple player you would simply instantiate an AIFFFile class and a Speaker class, then connect left to left and right to right. All nodes have an arbitrary number of inputs/outputs. You might connect them up in an app like InterfaceBuilder (LabView?), or it could be all programmatic.
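To make the idea concrete, here is a minimal sketch of that node-network model in Python. All class and method names (Node, connect, pull, SineInput, Gain, Collector) are hypothetical, invented for illustration; a real CoreAudio-backed implementation would render into hardware buffers rather than Python lists.

```python
# Hypothetical sketch of the proposed "network of nodes" model.
# Each node carries mono, unidirectional streams; downstream nodes
# pull sample blocks from their upstream connections on demand.
import math

class Node:
    """Base class: a node with an arbitrary number of inputs."""
    def __init__(self):
        self.inputs = []            # upstream nodes feeding this one

    def connect(self, upstream):
        self.inputs.append(upstream)

    def pull(self, nframes):
        raise NotImplementedError

class SineInput(Node):
    """An "Input" node: a pure audio source (here, a sine oscillator)."""
    def __init__(self, freq, rate=44100.0):
        super().__init__()
        self.freq, self.rate, self.phase = freq, rate, 0.0

    def pull(self, nframes):
        out = []
        for _ in range(nframes):
            out.append(math.sin(self.phase))
            self.phase += 2.0 * math.pi * self.freq / self.rate
        return out

class Gain(Node):
    """A "Filter" node: transforms one mono stream into another."""
    def __init__(self, gain):
        super().__init__()
        self.gain = gain

    def pull(self, nframes):
        return [s * self.gain for s in self.inputs[0].pull(nframes)]

class Collector(Node):
    """Stand-in "Output" node (a speaker or file would go here)."""
    def pull(self, nframes):
        return self.inputs[0].pull(nframes)

# Wire up the graph programmatically: sine -> gain -> output.
src = SineInput(440.0)
amp = Gain(0.5)
out = Collector()
amp.connect(src)
out.connect(amp)
block = out.pull(4)             # pull one 4-frame block through the graph
```

A stereo player would just be two such mono chains, one per channel, exactly as in the left-to-left, right-to-right example above.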
This would be a simple superset, and I'm sure not very hard. I've tried simple implementations, but they all ended up failing because of the complexity of the Carbon/Java calls. As I understand it, this is pretty similar to the way CoreAudio is organized, but I found it extremely difficult to implement anything more than a sine-wave generator, random noise, and a simple (and crashy) link from mic to speaker.
Any ideas?
Daniel
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.