Conceptual question about a sequencer project
- Subject: Conceptual question about a sequencer project
- From: Ryan Dillon <email@hidden>
- Date: Sun, 29 Aug 2010 23:26:08 +0200
I've been watching this mailing list for a while now and have been very glad to see so many of my own questions come up recently - like many others, it seems, I'm wondering how to set up a sample-accurate sequencer/metronome on the iPhone. Slowly but surely, the pieces of information - from the documentation, this list, forums, and blogs - are beginning to come together in my brain and make sense. Since this topic seems popular right now, I thought I'd throw in a few of my own questions. I'm not even at the point yet where I have much code to show, but the mailing list seems like an OK place for conceptual questions as well, so here goes:
When it comes to playing back longer audio files, it seems to make sense that the audio buffer size can be very large, i.e., you can prepare and load many seconds' worth of samples all at once if you want to do it that way. The audio file is static, so when the system needs more samples, you just pick up where you left off and feed in the next set in the callback.
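To make my mental model concrete, here is a rough sketch of the kind of render callback I'm picturing for the static case (RemoteIO-style; the PlayerState struct and its fields are just names I made up, and I'm assuming the unit is configured for mono, non-interleaved Float32 output):

#include <AudioUnit/AudioUnit.h>

// Hypothetical state: a fully preloaded mono file and a read position.
typedef struct {
    Float32 *samples;      // entire file decoded into memory up front
    UInt32   totalFrames;  // total frames in the file
    UInt32   playhead;     // next frame to hand to the hardware
} PlayerState;

static OSStatus RenderStaticFile(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
{
    PlayerState *state = (PlayerState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // Pick up where we left off; zero-fill once the file runs out.
        out[i] = (state->playhead < state->totalFrames)
                   ? state->samples[state->playhead++]
                   : 0.0f;
    }
    return noErr;
}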
My question is, what is the optimal strategy to use when playing back audio that ISN'T static? For example, the user may, at any time, choose to change the tempo of the metronome or alter the pattern of the sequence, and I would want to reflect that change as soon as possible. Does this mean the buffer size needs to be kept small so that the callback function is called more often? Let's say your original tempo is 120 bpm, so at 44.1 kHz that's a beat starting every 22050 samples. Now, if the callback function asks you for one second of audio at a time, that's two metronome "clicks" already handed off and ready to play. In this case, if the user changes the tempo, then depending on where in the cycle the change lands, you might have to wait almost a full second before the "clicks" with the new timing reach the output. Is this a proper understanding of how this works? If so, then it seems the callback needs to occur much more often to keep the process flexible and responsive. How would you achieve this, and what is the right balance between stability and flexibility when it comes to buffer size/callback frequency?
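Again, just to illustrate what I mean, here is a sketch of a render callback that re-reads the tempo on every call, so the buffer duration becomes the worst-case response latency. MetronomeState and its fields are invented for the example, the single-sample impulse is a stand-in for a real click sound, and reading bpm across threads without any locking is a simplification:

#include <AudioUnit/AudioUnit.h>

typedef struct {
    Float64          sampleRate;     // e.g. 44100.0
    volatile Float64 bpm;            // written by the UI, read here
    UInt64           nextClickFrame; // absolute frame of the next click
    UInt64           frameCounter;   // total frames rendered so far
} MetronomeState;

static OSStatus RenderMetronome(void                       *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp       *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList            *ioData)
{
    MetronomeState *s = (MetronomeState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    // Re-read the tempo once per callback, so a tempo change takes
    // effect within at most one buffer's worth of latency.
    UInt64 framesPerBeat = (UInt64)(s->sampleRate * 60.0 / s->bpm);

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        Float32 sample = 0.0f;
        if (s->frameCounter == s->nextClickFrame) {
            sample = 1.0f;  // impulse standing in for real click audio
            s->nextClickFrame += framesPerBeat;
        }
        out[i] = sample;
        s->frameCounter++;
    }
    return noErr;
}

With this arrangement, the buffer size only bounds how quickly a tempo change is heard; the click positions themselves stay sample-accurate regardless of where the buffer boundaries fall.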
My second question also has to do with the callback function. The way I understand it, buffers in the Audio Queue Services system operate on a rotating/ring system: usually one buffer playing, one ready to play, and one filling up. From the documentation, it sounds like there is no absolutely definite timing to the process of filling the buffers in the callback; when the system needs more audio, it asks when it can. However, it seems that in the case of low-level Core Audio and RemoteIO, the callback IS called at very specific intervals. Is this true? For example, is the time between callback execution and the first sample "hitting" the speaker always the same for a given audio session (sample rate, AUGraph setup, etc.)? If so, then I can see how you could use that to time interface updates that correspond to, say, the metronome clicks - you know when you passed off the audio samples, so you can calculate when they will be heard. If the process doesn't occur at absolutely regular intervals, however, what is the best way to keep the UI in sync?
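For instance, if the mHostTime in the callback's timestamp really does tell you when the first frame of the buffer is due at the hardware, I imagine something like this sketch for scheduling a matching UI flash (hostTimeForFrame is my own invented helper, and I'm ignoring output latency here):

#include <AudioUnit/AudioUnit.h>
#include <mach/mach_time.h>

// Given the timestamp handed to the render callback, estimate the
// host time at which frame `offset` of this buffer will be heard,
// so the UI thread can schedule a flash for that moment.
static UInt64 hostTimeForFrame(const AudioTimeStamp *ts,
                               UInt32 offset,
                               Float64 sampleRate)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);  // host-tick <-> nanosecond ratio

    Float64 nanosPerFrame = 1e9 / sampleRate;
    UInt64  offsetNanos   = (UInt64)(offset * nanosPerFrame);
    UInt64  offsetTicks   = offsetNanos * tb.denom / tb.numer;

    // Assumes ts->mHostTime marks when the buffer's first frame
    // reaches the hardware; actual output latency is not included.
    return ts->mHostTime + offsetTicks;
}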
Thanks,
Ryan