Re: Conceptual question about a sequencer project
- Subject: Re: Conceptual question about a sequencer project
- From: Patrick Muringer <email@hidden>
- Date: Sun, 29 Aug 2010 23:57:29 +0200
On 29 August 2010 at 23:26, Ryan Dillon <email@hidden> wrote:
I've been watching this mailing list for awhile now and have been
very glad to see so many of my own questions come up recently - like
many others it seems, I'm wondering about how to set up a sample
accurate sequencer/metronome on the iPhone. Slowly but surely, the
pieces of information - from the documentation, this list, forums
and blogs - are beginning to come together in my brain and make
sense. Since this topic seems popular right now, I thought I'd throw
in a few of my own questions. I'm not even at the point yet where I
have much code to show, but I thought the mailing list seemed like
an OK place for conceptual questions as well, so here goes:
When it comes to playing back longer audio files, it seems to make
sense that the audio buffer size can be very large, i.e., you can
prepare and load many seconds worth of samples all at once if you
want to do it that way. The audio file is static, and when the
system needs more samples, you just pick up where you left off and
feed in the next set in the callback.
My question is, what is the optimal strategy to use when playing
back audio that ISN'T static? For example, the user may, at any
time, choose to change the tempo of the metronome, or alter the
pattern of the sequence, and I would want to be able to reflect that
change as soon as possible. Does this mean that the buffer size
needs to be kept small so that the callback function is called more
often? Let's say your original tempo is 120 bpm, so at 44.1 kHz,
that's a beat starting every 22050 samples. Now, if the callback
function asks you for one second of audio at a time, that's going to
be two metronome "clicks" already passed on and ready to play. In
this case, if the user changes the tempo, then depending on when in
the cycle they change, you might have to wait almost a full second
before the "clicks" with the new timing reach the output. Is this a
proper understanding of the way this works? If so, then it seems
that the callback does need to occur much more often in order to
keep the process flexible and responsive. How would you achieve
this, and what is the right balance between stability and
flexibility, when it comes to buffer size/callback frequency?
In my experience, a buffer of 256 or 512 samples works well; the
callback is called to fill that many samples at a time. When I change
the bpm, the change is reflected almost immediately, within
512/44100 s (about 11.6 ms).
My second question also has to do with the callback function. The
way I understand it, buffers in the Audio Queue Services system
operate on a rotating/ring system. Usually one buffer playing, one
ready to play, and one filling up. From the documentation, it sounds
like there is no absolutely definite timing to the process of
filling the buffers in the callback. If the system needs more audio
soon, it will ask when it can. However, it seems like in the case of
low-level Core Audio and RemoteIO, the callback IS called at very
specific intervals. Is this true?
Yes, this is what I see. As said before, the callback is called each
time the system needs samples. Depending on the buffer size you
choose, the callback will be called more or less often, but at regular
intervals.
Regarding the sync with the UI, I set a variable in the callback that
reflects the current time. In the main thread, I use a timer that
checks this variable (20 times/sec) in order to stay in sync with the
audio. There might be better strategies...
By the way, even though this is not about audio: for those who have
already done it, if you want a kind of matrix representing the
ticks/beats/bars on the x axis and the tracks/samples on the y axis,
in which each "square" represents a tick the user can click to modify
the sequence, what would you suggest as the best strategy? Buttons?
Coordinate calculations in the view to find where the user clicked and
which "square" of the matrix it corresponds to?
For example, is the time between callback execution and the first
sample "hitting" the speaker always the same for a given audio
session (sample rate, AUGraph setup, etc.)? If so, then I can see
how you can use that to time interface updates that correspond to,
say, the metronome clicks - you know when you handed off the audio
samples, so you can calculate when they will be heard. If the
process doesn't occur at absolutely regular intervals however, what
is the best way to keep the UI in sync?
Thanks,
Ryan
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden