Re: Input callback timestamp reset and current sample time
- Subject: Re: Input callback timestamp reset and current sample time
- From: Jeff Moore <email@hidden>
- Date: Thu, 29 Sep 2005 18:46:19 -0700
Everything you say below says to me that you don't need to do any of
this time translation stuff. You don't even need to ask about the
current time at all. It's overkill for simply playing back a stream
of data without regard to anything else. All you need to do is track
how many samples you've played so that you can schedule producing the
next block, and you do that scheduling from inside your render
callback from the output AU.
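For illustration, here's a minimal sketch of that pattern (the state
struct and the producer function are made-up names, not anything that
ships):

#include <AudioUnit/AudioUnit.h>

// Hypothetical player state: nothing but a running frame count plus
// whatever your data source needs.
typedef struct {
    Float64 framesPlayed;   // frames handed to the output AU so far
} MyPlayer;

static OSStatus MyRenderProc(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    MyPlayer *player = (MyPlayer *)inRefCon;

    // Produce the next inNumberFrames frames into ioData, starting at
    // stream position player->framesPlayed. No time stamps, no
    // translations -- just the running count.
    // FillFromMyStream(player, ioData, inNumberFrames);  // your producer

    player->framesPlayed += inNumberFrames;
    return noErr;
}

You hook it up with kAudioUnitProperty_SetRenderCallback on input bus 0
of the output unit, and the running count is the only clock you need.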
This is exactly what we do in our example tools like afplay. Heck, at
the most basic level, this is how apps like AULab and even frameworks
like our OpenAL implementation work. All they do is count samples. No
time translation or even an inkling of the current time is necessary.
Is there some reason, which you haven't mentioned yet, why you feel
you need something more advanced? Otherwise, I think you really need
to re-assess what you are doing and boil it down to the basics.
You might just want to use a generator unit like
AUScheduledSoundPlayer or AUFilePlayer in a graph with the output
unit to handle your playback. That way, all you have to do is
schedule buffers of your own making (or segments of audio files)
on your own timeline. You wouldn't have to worry about the sorts
of things you are currently hung up on and could concentrate on
generating the data.
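Roughly, the setup might look like this (a sketch only, using the
AUGraph calls with error handling omitted; the buffer list you
schedule is assumed to be one you've already filled):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Sketch: play one self-generated buffer through AUScheduledSoundPlayer.
static void PlayMyBuffer(AudioBufferList *myBufferList, UInt32 numFrames)
{
    AudioComponentDescription playerDesc = {
        kAudioUnitType_Generator, kAudioUnitSubType_ScheduledSoundPlayer,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outputDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_DefaultOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };

    AUGraph graph;
    AUNode playerNode, outputNode;
    AudioUnit playerUnit;

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &playerDesc, &playerNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);
    AUGraphConnectNodeInput(graph, playerNode, 0, outputNode, 0);
    AUGraphOpen(graph);
    AUGraphNodeInfo(graph, playerNode, NULL, &playerUnit);
    AUGraphInitialize(graph);

    // Schedule a buffer of your own making at frame 0 of your own
    // timeline. The slice must outlive playback -- the AU keeps a
    // pointer to it -- hence static here.
    static ScheduledAudioSlice slice;
    memset(&slice, 0, sizeof(slice));
    slice.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    slice.mTimeStamp.mSampleTime = 0;        // position on *your* timeline
    slice.mNumberFrames = numFrames;
    slice.mBufferList = myBufferList;
    AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduleAudioSlice,
                         kAudioUnitScope_Global, 0, &slice, sizeof(slice));

    // Tell the player to start its timeline; a sample time of -1 means
    // "start on the next render cycle".
    AudioTimeStamp startTime;
    memset(&startTime, 0, sizeof(startTime));
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));

    AUGraphStart(graph);
}

Completion of a slice is reported back through the slice's completion
proc and mFlags, so you can recycle buffers from there.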
On Sep 29, 2005, at 6:04 PM, email@hidden wrote:
>> Just curious, but why are you not acting on the discontinuities in
>> the timeline, such as when headphones are plugged in? The whole
>> reason that you see these is to let you know that samples were
>> dropped. I'd imagine that knowing this would be very important in
>> implementing synch so that you can skip media or whatever you need
>> to do in order to maintain synch.
> I'm going in the other direction: playing samples based on an
> absolute reference time. So if CoreAudio has to drop samples due to
> a device reset, that's okay, but I still need to know where the
> sample count would have been had it not dropped those samples. For
> example, with a USB audio device, even if it has to drop samples, I
> can still look at the current frame number (which keeps on
> advancing) to know how much it would have played, so I know the
> correct frame number at which to insert the next chunk of audio.
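At the HAL level, that idea amounts to just asking the device clock
where it is now; something like this sketch (the lead amount is
whatever safety margin you want):

#include <CoreAudio/CoreAudio.h>
#include <string.h>

// Ask the device for its current time stamp and derive where the next
// chunk should land. The device's sample time keeps advancing even
// when data has been dropped, which is exactly the property relied on
// above.
static Float64 NextInsertionFrame(AudioDeviceID device, Float64 leadFrames)
{
    AudioTimeStamp now;
    memset(&now, 0, sizeof(now));

    // Fails if the device isn't running, so check the result.
    if (AudioDeviceGetCurrentTime(device, &now) != noErr)
        return -1.0;

    // Insert the next chunk a safe distance ahead of the device clock.
    return now.mSampleTime + leadFrames;
}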
>> Most quality engines I've run into don't really care too much
>> about the exact values of the time stamps. Just keeping track of
>> how much has been played so far and how much needs to be generated
>> in the next cycle is usually enough to do just about anything.
> The exact values of the time stamps aren't important to me, but I
> need the relative values between time stamps because I need to know
> how far the sample time would have advanced. Even discontinuities
> in the timeline are okay, but I need to know exactly when they
> occur, in a serialized manner, so I can re-calculate my mapping
> between CoreAudio's time and absolute time without a race condition
> between the periodic time-mapping update and the input callback.
> For example, if my periodic mapping update occurs immediately
> before a device reset and the input callback is then called with
> the new timeline, my CoreAudio->absolute mapping will be stale
> until the next periodic update. If I were notified that the
> timeline had changed immediately after it changed, but before the
> next input callback was invoked, I could fix my mapping and things
> would be okay.
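FWIW, you can see the jump from inside the callback itself by
comparing the sample time you are handed with the one you expected,
and re-anchor the mapping right there, so it is never used while
stale. A sketch of the shape of it (the state struct, the
absolute-clock call, and the Float32-interleaved format are all
assumptions):

#include <CoreAudio/CoreAudio.h>

typedef struct {
    Float64 expectedSampleTime;  // where we thought the timeline would be
    Float64 sampleToAbsOffset;   // anchor of the CoreAudio -> absolute map
    int     primed;
} MyState;

static OSStatus MyIOProc(AudioDeviceID device,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    MyState *st = (MyState *)inClientData;
    Float64 t = inInputTime->mSampleTime;

    if (!st->primed || t != st->expectedSampleTime) {
        // The timeline jumped out from under us (device reset,
        // overload, etc.). Re-anchor the mapping here, inside the
        // callback, so the race with a periodic updater never arises.
        // st->sampleToAbsOffset = MyAbsoluteNow() - t;  // hypothetical clock
        st->primed = 1;
    }

    // Assumes Float32 interleaved data in buffer 0.
    UInt32 frames = inInputData->mBuffers[0].mDataByteSize /
                    (sizeof(Float32) * inInputData->mBuffers[0].mNumberChannels);
    st->expectedSampleTime = t + frames;

    // ... consume inInputData using the now-current mapping ...
    return noErr;
}

If only the callback owns the mapping, there is nothing to race with;
if other threads must read it, publish it atomically, but let the
callback be the writer.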
--
Jeff Moore
Core Audio
Apple