Re: playing audio files separated by specified time intervals
- Subject: Re: playing audio files separated by specified time intervals
- From: Maissam Barkeshli <email@hidden>
- Date: Mon, 22 Dec 2008 00:51:37 -0800
Thanks for the help.
I'm still a little confused, though, about how this should work in practice. Say I want to do something simple: play a single audio file once per second and simultaneously update a visual counter in the GUI.
Right now I'm looking at starting a CoreAudioClock, running a loop, and continuously polling the clock to see whether a second has passed since the last time. Once at least one second has passed, I play the file and update the display. The problem is that even though CoreAudioClock is very accurate, the loop itself may take an extra fraction of a second to come around if the computer is busy with other applications.
But people still develop rock-solid metronome applications, with displays and everything. How do they do it? The only way I can think of to make this work is to use a thread with extremely high priority, so that the operating system gives this particular program preference.
Perhaps I have to settle for the display not being perfectly accurate, but with your suggestion I can at least get the audio to be accurate?
On Dec 22, 2008, at 12:22 AM, Brian Willoughby wrote:
Whether you are writing an AudioUnit, or simply generating audio in
an application and sending it to the default output, you have the
option of checking the very accurate time line provided by
CoreAudio. For each buffer requested, CoreAudio provides the time
stamp of the first sample in the buffer. If your program jots down
a reference point in time when the user presses Play, or any other
method you might use to establish a "zero" time, then you can decide
whether a given buffer should contain one of your audio files.
There are plenty of conversion APIs to change the time stamps into whichever time format makes the most sense to you. You will need to keep track of the sample offset in each file, but I think AudioFile or ExtAudioFile would help with that.
If you don't want any overlap of audio, that's all you need to do.
If you do want to mix audio - which is what you would need to do if any of those audio files would overlap - then you could build an AUGraph using one of the mixers, and provide a callback for each mixer input. They'd all share the same timeline, but each audio file would have a different starting point.
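As a hedged sketch of what each mixer input conceptually does - in plain C rather than a real AUGraph callback, with all names invented for illustration - every scheduled sound shares one timeline but starts at its own sample time, and sounds that overlap a given render buffer simply sum into it:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch (not a real AUGraph callback): mix any scheduled
 * sounds that overlap this render buffer into `out`.  Each event shares
 * the same timeline but has its own start time, so overlaps just sum. */
typedef struct {
    int64_t      start;     /* sample time of the event's first frame */
    int64_t      length;    /* frames in the sound                    */
    const float *samples;   /* decoded audio, e.g. via ExtAudioFile   */
} SoundEvent;

static void mix_events(float *out, int64_t buf_start, int64_t buf_frames,
                       const SoundEvent *events, int n_events)
{
    memset(out, 0, (size_t)buf_frames * sizeof(float));
    for (int i = 0; i < n_events; i++) {
        const SoundEvent *e = &events[i];
        /* Intersect [e->start, e->start + e->length) with the buffer. */
        int64_t lo   = e->start > buf_start ? e->start : buf_start;
        int64_t hi_e = e->start + e->length;
        int64_t hi_b = buf_start + buf_frames;
        int64_t hi   = hi_e < hi_b ? hi_e : hi_b;
        for (int64_t t = lo; t < hi; t++)
            out[t - buf_start] += e->samples[t - e->start];
    }
}
```

With an actual AUGraph you would instead give each mixer input its own render callback and let the mixer AudioUnit do the summing, but the bookkeeping - one shared timeline, per-event start times and sample offsets - is the same.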
Finally, there might be an easier way to do what you want than what
I've described above - this is just the first thing that I
considered as an option.
Brian Willoughby
Sound Consulting
On Dec 21, 2008, at 21:24, Maissam Barkeshli wrote:
Hi, I'm new to the core audio API, I wonder if someone can point me
in the right direction here.
I'm trying to create a highly customized metronome, but I'm having
trouble with timing/accuracy/stability issues. I have a bunch of
different audio files that I would like to play in succession,
separated by specified time intervals. What is the best way to do
this? Having the program wait using some kind of sleep() function
doesn't seem to be accurate enough. If the computer is remotely
busy, or if the user decides to change windows to another
application, the timing goes off.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden