Hi,
The process of building tools for music is about designing compelling sounds and intuitive interfaces to control them. The first step on your path as a developer is understanding how a computer generates and controls sound, and that's the field of synthesisers.
Now, if you have the mathematical function for a sine wave, it takes roughly 5-20 times more lines of code to generate a sound out of it with Core Audio than to generate a drawing out of it with Core Graphics. This is because you have to do an awful lot of setup, configuration, and fine-tuning before you can feed that sine wave to the speaker in real time. (I still don't understand why Apple inflicts that on audio devs. Isn't it ironic that no one in Cupertino hears our pain?)
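Just to make that asymmetry concrete: the musical part really is tiny. Something along these lines (a rough sketch; the frequency, sample rate, and buffer length are arbitrary) already gives you all the sine samples you could want, and the other few hundred lines are spent convincing Core Audio to play them:

    #include <cmath>
    #include <vector>

    // Fill a buffer with samples of y(t) = sin(2*pi*f*t) at a given sample rate.
    std::vector<float> makeSineBuffer(double frequency, double sampleRate, int numFrames)
    {
        const double twoPi = 6.283185307179586;
        std::vector<float> buffer(numFrames);
        for (int i = 0; i < numFrames; ++i)
            buffer[i] = static_cast<float>(std::sin(twoPi * frequency * i / sampleRate));
        return buffer;
    }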
So here's something I wish someone had told me as I was getting started: writing Core Audio setup code should always be a very small part of the process of building tools for music, because it's dry, wearisome, and uninspiring. You'll quickly find that three weeks and 500 lines of code have gone by, and still no sound. So use pre-existing code.
Here's a very nice tutorial that shows you how to generate and control a sine wave sound with Core Audio:
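While you work through it, here's roughly the shape of the render callback that sits at the heart of that kind of code. This is only a sketch, and it assumes you've configured the output unit for mono, non-interleaved Float32 samples; everything around it (the audio session, the RemoteIO unit, the stream format) is where the setup lines go.

    #include <AudioUnit/AudioUnit.h>
    #include <cmath>

    struct SineState {
        double phase;       // current phase in radians
        double frequency;   // Hz
        double sampleRate;  // Hz
    };

    static OSStatus renderSine(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
    {
        SineState *state = static_cast<SineState *>(inRefCon);
        const double twoPi = 6.283185307179586;
        const double phaseIncrement = twoPi * state->frequency / state->sampleRate;

        // Assumes mono, non-interleaved Float32 output in buffer 0.
        Float32 *out = static_cast<Float32 *>(ioData->mBuffers[0].mData);
        for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
            out[frame] = static_cast<Float32>(std::sin(state->phase));
            state->phase += phaseIncrement;
            if (state->phase > twoPi) state->phase -= twoPi;  // keep the phase bounded
        }
        return noErr;
    }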
Once you feel comfortable with the generation and control of sine waves, take a look at audio synthesis techniques. Musicdsp.org has a great source code archive for that:
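To give you an idea of what lives in that archive, here's the flavour of snippet you'll find there (written out by me rather than copied from it): a naive sawtooth oscillator run through a one-pole low-pass filter. The naive saw aliases, which is exactly the kind of problem the band-limited versions in the archive address.

    #include <vector>

    // Naive sawtooth (ramps -1..1 once per period) through a one-pole low-pass:
    //   y[n] = y[n-1] + c * (x[n] - y[n-1]),  0 < c <= 1
    std::vector<float> renderFilteredSaw(double frequency, double sampleRate,
                                         int numFrames, float cutoffCoeff)
    {
        std::vector<float> out(numFrames);
        double phase = 0.0;
        const double increment = frequency / sampleRate;  // phase runs 0..1
        float lastOutput = 0.0f;

        for (int i = 0; i < numFrames; ++i) {
            const float saw = static_cast<float>(2.0 * phase - 1.0);
            phase += increment;
            if (phase >= 1.0) phase -= 1.0;

            lastOutput += cutoffCoeff * (saw - lastOutput);  // smooth out the edges
            out[i] = lastOutput;
        }
        return out;
    }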
A few years ago, the makers of Ocarina ported an audio tools library to iOS. It's called the Synthesis Toolkit, and it's full of great ready-made synthesisers:
Here's a tutorial on how to use one of their physical modelling synthesisers:
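To give you a feel for how little code a physical model needs once the library is in place, here's a sketch using STK's plucked-string model. The class and method names are from my memory of the STK headers, and older builds of the toolkit don't wrap everything in the stk namespace, so check the version that ships with the iOS port.

    #include "Plucked.h"  // STK's Karplus-Strong plucked-string model
    #include "Stk.h"
    #include <vector>

    std::vector<float> renderPluck(double frequency, int numFrames)
    {
        stk::Stk::setSampleRate(44100.0);

        stk::Plucked string(80.0);      // lowest frequency the model needs to support
        string.noteOn(frequency, 0.8);  // pluck it: frequency in Hz, amplitude 0..1

        std::vector<float> out(numFrames);
        for (int i = 0; i < numFrames; ++i)
            out[i] = static_cast<float>(string.tick());  // one output sample per tick()
        return out;
    }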
Another great audio library for iOS is Maximilian:
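Same idea here: Maximilian's oscillators are a single call per sample. This sketch uses maxiOsc::sinewave as I remember it from the headers, with one oscillator as the audible tone and a second, slow one as an LFO for vibrato; double-check the names against the current version.

    #include "maximilian.h"
    #include <vector>

    std::vector<double> renderVibratoTone(int numFrames)
    {
        maxiOsc carrier, lfo;
        std::vector<double> out(numFrames);

        for (int i = 0; i < numFrames; ++i) {
            const double vibrato = lfo.sinewave(5.0) * 8.0;     // wobble of +/- 8 Hz, five times a second
            out[i] = carrier.sinewave(440.0 + vibrato) * 0.5;   // A440 with vibrato, at half volume
        }
        return out;
    }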
If synthetic sounds are not your thing and you want instruments that make sample-based sounds, look at this tutorial:
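The core of a sample-based instrument is just a read-head moving through a pre-loaded buffer: vary the rate and you vary the pitch. Here's a bare-bones sketch of that idea, with linear interpolation between samples and no library dependencies, to show there's no magic in it.

    #include <utility>
    #include <vector>

    // One-shot sample player: rate 1.0 plays at the original pitch, 2.0 an octave up.
    class SamplePlayer {
    public:
        explicit SamplePlayer(std::vector<float> sample)
            : sample_(std::move(sample)), position_(0.0) {}

        float tick(double rate)
        {
            if (sample_.empty() || position_ >= sample_.size() - 1)
                return 0.0f;  // reached the end of the sample

            const int index = static_cast<int>(position_);
            const float fraction = static_cast<float>(position_ - index);
            const float out = sample_[index]
                            + fraction * (sample_[index + 1] - sample_[index]);

            position_ += rate;  // move the read-head; fractional positions are fine
            return out;
        }

    private:
        std::vector<float> sample_;
        double position_;
    };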
Violins are made of wood, but you don't need to be an expert in botany to make good violins. If you're building tools for music, then you need to concern yourself with what's important to your users: beautiful and unique sounds, and an intuitive interface to control them. In other words, an engaging audio-based user experience. Getting your Core Audio setup right is just a small part of it. Focus on what makes a good concept for a musical instrument. Focus on making it look and sound natural and organic. Focus on giving it a personality. Think about making clever use of the device's sensors, such as the built-in mic or the accelerometer. Spend time refining your synths.
Subject: aspiring CoreAudio devs
Date: 3 August 2012 5:10:34 pm GMT+01:00
In searching the archives, I see that this comes up fairly frequently, but since the resources constantly evolve and the only thing that seems to stay the same is 'constant change', it seems reasonable to post it again.
I'm a musician and fairly sophisticated computer user. To deepen my appreciation of both computers and music, I've decided to learn to program for the Mac. I'm on an Audio "track", but the path to enlightenment in this area is a little hard to see. I've bought Programming in Objective-C 2.0, by Steve Kochan, and Cocoa Programming for Mac OS X by Hillegass and Preble. Both books have been great, and very informative. They've built tons on top of my basic C/procedural understanding and given me deep insight into Mac OS X. Both books, however, leave me feeling a little underprepared for audio programming, and I'm wondering what's next.
My question for the group would be: what would you prescribe as an ideal path for learning, for an audio-minded dev in August of 2012? What is there, no matter how small or trivial-sounding, that you wish someone had told you as you were getting started?
I'm not all that interested in the business of software and I don't imagine I'll be selling applications at all, but I'm really interested in the process of building tools for music.
I'd like to gather some of your input to make a "Getting started in Audio Programming" guide on the web at some point.
Thanks for reading, and in advance for any input you might have.
Joshua Case NY, NY USA