Re: iPhone AU best practices
- Subject: Re: iPhone AU best practices
- From: Admiral Quality <email@hidden>
- Date: Wed, 16 Jun 2010 19:55:17 -0400
On Wed, Jun 16, 2010 at 7:06 PM, uɐıʇəqɐz pnoqɥɒɯ <email@hidden> wrote:
> Thank you all for the responses. Think of me as a sponge, trying to soak up all this audio engineering information, and excuse me if I get a little saturated at times. I come from a networking background, dealing with issues such as keeping up with 10Gig traffic in an Ethernet driver. Callbacks, overruns, and ring buffers without disk access, all at interrupt time, are similar to AU callbacks, but have a whole other set of physics driving them.
>
Yeah. Some musical understanding really helps in designing musical
instruments. A lot of things aren't as obvious as you might think, and
a non-musician might never notice them. (Latency is a good example. A
non-musician hits a pad, a sound comes out a quarter second later, and
they're happy... but that's completely useless to a musician who's
trying to *play* it.)
We're getting kind of OT for this list which is about platform
specifics, but I'll do my best on the rest of your questions...
> Please read on....
> Mixing is easy, but I was hoping to use the mixer in order to have independent control over the volume of each note. Plus, it is a
I'm not sure how you're generating your notes, but I'd imagine you're
either playing back samples or synthesizing the note from scratch. If
you're playing back samples you need something called an envelope
generator, if for nothing else than to do the kind of release "fade"
that you're talking about here. Fading/volume-scaling samples is easy
too; it's just multiplication. And if you smoothly ramp down your
scaling factor, sample by sample, during what we call the "release
segment" of the envelope, then you'll get nice smoothly fading tails
on your notes. (Or maybe your original samples already have smoothly
fading tails to them? Like drum samples would. In that case you can
just keep playing them at full volume until the sample is over,
because the fades are built in.)
If you're synthesizing from scratch then you probably already have an
envelope generator. But as the concept seems new to you, I'm guessing
not.
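Either way, for a concrete picture: a release segment really is just a
per-sample multiply by a gain that ramps down to zero. A minimal
sketch in C (the names here are made up, not from any API):

    #include <stddef.h>

    typedef struct {
        float gain;        /* envelope level; 1.0 at note-off             */
        float releaseStep; /* per-sample decrement, e.g.
                              1.0f / (0.2f * 44100.0f) for a 200 ms tail */
    } Voice;

    static void renderRelease(Voice *v, const float *in, float *out,
                              size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            out[i] = in[i] * v->gain;   /* fading is just multiplication */
            v->gain -= v->releaseStep;  /* ramp down, sample by sample   */
            if (v->gain < 0.0f)
                v->gain = 0.0f;         /* stay silent once the tail ends */
        }
    }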
> lot easier to play each note separately in its own callback rather than combine all the playing notes together; I am also assuming that doing a memcopy of each sound through the mixer is less expensive than doing an addition of all the playing notes, and individually copying each 8.24 value to the output, but skipping the mixer. But I am not lazy; if that is the better way to do it, then I will. The questions then are:
>
So, I'm guessing you're using some high-level API to load and play the
samples? I got the impression from your original post that maybe you
were beyond that, as you mentioned you had improved what I assume is
your latency performance. (You referred to better results "when the
user quickly taps a number of notes", by which I assume you mean the
latency: the time delay between when the user hits a pad and when the
sound starts to come out. For musically playable instruments I
personally can't stand anything longer than 50 ms, and for percussive
sounds like drums and even piano I want even less.)
> 1. Is it better to skip the mixer and the memcopy and replace them with addition of a number of 8.24 values, where each of the values could also be multiplied by a fade-out value? I would no longer have to worry about mixer bus limits.
> 2. I am assuming that the mixer is currently doing some work to avoid clipping. If I add values, how do I prevent clipping?
> 3. Could I use floats and then turn them into 8.24 before I copy them to the outBuffer? If I can use floats, can I use the Accelerate framework for the fade multiplication and
> 4. If the mixer is more efficient than adding the samples myself, then how do I tell it to stop calling the callback for inactive buses?
> 5. And if the mixer is more efficient, is it also best to use its input volume control for fading, or to manually adjust the data to ramp down in amplitude?
>
Most of these are too iPhone-specific for me to answer
authoritatively; I'll leave that to someone else.
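In platform-neutral terms, though, the summing half of questions 1 and
2 generally looks like this: accumulate the voices in float, clamp (or
better, leave yourself headroom), and convert to the fixed-point
output format last. A sketch only, and the 8.24 scale factor is my
assumption from the format you described:

    #include <stddef.h>
    #include <stdint.h>

    static void mixTo824(const float *const *voices, size_t numVoices,
                         int32_t *out, size_t numFrames)
    {
        for (size_t i = 0; i < numFrames; i++) {
            float sum = 0.0f;
            for (size_t v = 0; v < numVoices; v++)
                sum += voices[v][i];        /* mixing is just addition */

            /* Crude hard clamp; a real mixer would leave headroom or
               use a gentler limiter. */
            if (sum >  1.0f) sum =  1.0f;
            if (sum < -1.0f) sum = -1.0f;

            out[i] = (int32_t)(sum * (float)(1 << 24)); /* float -> 8.24 */
        }
    }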
> I can easily add a flag to my instruments to determine if a note can be re-triggered or "piled up".
"Cycle assignment", use the commonly agreed upon terminology! :)
> I'm going to have many flags like this for other characteristics, such as instruments that can be faded to mimic a Release vs. those that have to have an actual sampled release. Also I have to worry about the location of ADSR components, mainly S, in order to loop appropriately.
Ah good, you know the concept of ADSR. That's a type of envelope. And
yes, looping will be another issue if you want to build a full
featured sample playback synth (and what a can of worms you're in for
there!)
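At its simplest, the looping part is just wrapping the read position
back to a loop start while the note is held, something like the sketch
below (made-up names, and ignoring interpolation and crossfading,
which are where the worms live):

    #include <stddef.h>
    #include <stdbool.h>

    /* Returns the next sample; while the note is held, the read
       position cycles the sustain segment [loopStart, loopEnd).
       The caller handles note-off and running past the sample end. */
    static float nextSample(const float *buf, size_t *pos,
                            size_t loopStart, size_t loopEnd, bool held)
    {
        float s = buf[*pos];
        (*pos)++;
        if (held && *pos >= loopEnd)
            *pos = loopStart;   /* keep cycling the sustain segment */
        return s;
    }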
Trying to apply a volume envelope by using a system mixer service and
automating its controls will probably result in what we call "zipper
noise": sudden steps in your scaling that are quite audible (unless
the mixer service you're using has some kind of smoothing built in; I
don't know offhand). This is why I'd recommend doing all this mixing
in your own code.
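If you do it in your own code, the usual cure is to never jump the
gain: move it a little toward its target every sample. A one-pole
smoother is one common approach (the coefficient here is only
illustrative):

    typedef struct {
        float current; /* gain actually applied to the audio     */
        float target;  /* gain the UI or envelope last asked for */
    } SmoothedGain;

    static inline float nextGain(SmoothedGain *g)
    {
        /* Move 0.1% of the remaining distance per sample; at 44.1 kHz
           that settles in a few milliseconds, with no audible steps. */
        g->current += 0.001f * (g->target - g->current);
        return g->current;
    }

    /* In the render loop:  out[i] = in[i] * nextGain(&gain);  */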
> Any links for voice assigners? My app right now will play one instrument at a time. In the future, I will allow recording of each instrument separately, and then playback all instruments at the same time.
Just google "voice assigner". And if you need a demo, try my product
(it runs in a free demo mode). At the top of Poly-Ana's interface is a
display showing which voice numbers are currently active. It can help
you visually get an idea of what's going on. Also, Poly-Ana offers all
kinds of control over the voice assigner, so you can duplicate
virtually any of the common behaviors.
http://www.admiralquality.com/products/Poly-Ana/
And if you need a host to plug it into, try Reaper
http://reaper.fm
Hooking up a MIDI keyboard controller will REALLY help. But in the
worst case you can just play Poly-Ana's GUI keyboard. (While you can
only mouse one note at a time, you'll see the sustaining notes
continuing to use voices. And there's a button to the right of the
keyboard that will force all notes to sustain.)
And you refer to "one instrument at a time" above. Synths that produce
more than one instrument sound at a time are called "multitimbral".
But playing more than one note *of the same instrument sound* is
called "polyphony". If an instrument can only play one note at a time
(like, say, a flute), it's considered "monophonic". Important concepts
here.
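To make those terms concrete, here's a toy voice assigner in C,
nothing like Poly-Ana's actual internals: polyphony as a fixed pool of
voices, stealing the oldest one when the pool is full (just one of
many possible policies):

    #include <stdint.h>

    #define NUM_VOICES 8

    typedef struct {
        int      note;      /* -1 when the voice is free    */
        uint32_t startTime; /* when the voice was triggered */
    } VoiceSlot;

    static VoiceSlot voices[NUM_VOICES];
    static uint32_t  counter;

    static void initVoices(void)
    {
        for (int i = 0; i < NUM_VOICES; i++)
            voices[i].note = -1;
    }

    /* Returns the index of the voice that should (re)trigger. */
    static int assignVoice(int note)
    {
        int pick = 0;
        for (int i = 0; i < NUM_VOICES; i++) {
            if (voices[i].note < 0) { pick = i; break; } /* free voice */
            if (voices[i].startTime < voices[pick].startTime)
                pick = i;                          /* else steal oldest */
        }
        voices[pick].note = note;
        voices[pick].startTime = counter++;
        return pick;
    }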
> I have been trying to read up on this all, but I have finite time so I am still catching up. I know there are some synths out there like FluidSynth, or other SoundFont engines, which might be a shortcut. However, the ones with incompatible licenses or expensive royalties will force me to reinvent where necessary. The economics of a <$5 app make it hard to do otherwise.
>
There's plenty of free synths out there for all platforms. And if you
want to see an iPhone synth that kicks major butt, try BeBot.
http://www.normalware.com/ (Wow!)
> As for playing a note a second time before releasing the first instance: that is currently not possible in my UI. If you are holding a note down, the next tap on the same note is ignored.
>
Like I suggested, you might do better to actually make that an off/on
event rather than just ignore it. It will feel more responsive to the
user, as it's a lot harder to play exclusively staccato (meaning
there's some space between successive note hits) than legato (meaning
note ends can overlap with note beginnings). If you force staccato
triggering on them, they'll feel it has missed hits whenever they
weren't able to get their previous finger off in time (assuming two
fingers can fit onto the same pad target area).
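In code that's just a couple of lines in the touch handler; a sketch
with made-up names (noteOn/noteOff stand in for whatever actually
triggers your voices):

    #include <stdbool.h>

    void noteOn(int pad);   /* assumed to exist elsewhere in the app */
    void noteOff(int pad);

    static bool padHeld[128];

    static void onPadDown(int pad)
    {
        if (padHeld[pad])
            noteOff(pad);   /* end the ringing instance first... */
        noteOn(pad);        /* ...then retrigger immediately     */
        padHeld[pad] = true;
    }

    static void onPadUp(int pad)
    {
        noteOff(pad);
        padHeld[pad] = false;
    }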
>>
>> OP was also wondering about worst case of "10 fingers". But isn't
>> there a limit to how many simultaneous multi touch points are
>> possible?
>
> It's 11 on the iPad (fingers and nose), it seems. And 5 on the iPhone, although I could swear I have seen more at times.
>
Well, assuming one hand is holding the phone, I think 5 is perfectly
reasonable (however, in real life I have been known to adjust synth
sliders with my nose!)
> Again, thanks! and if you have more tips on building an instrument synthesizer, toss them my way.
>
> -mahboud
>
Oh, I could go on and on about it. And so could many others. It's a
non-trivial application with LOTS of different variations on the
solution out there. But what I wanted to make you aware of is that
these are issues that for the most part were solved in the mid-70s
when the first polyphonic synthesizers started to appear. And guess
what made that possible? The computer! (You needed a computer to scan
the keyboard so that chords were possible. Before that, synths were
all monophonic -- meaning they played only one note at a time.)
Feel free to email me if you have any more general instrument design
questions that are OT for this list. Worst I can do is ignore you.
Really I just wanted to point out that these issues and the various
solutions are well understood by instrument designers and experienced
users.
Cheers,
- Mike/AQ
email@hidden