Re: render callback timing
- Subject: Re: render callback timing
- From: Jeff Moore <email@hidden>
- Date: Fri, 5 Mar 2004 11:30:23 -0800
On Mar 5, 2004, at 9:36 AM, Josh Anon wrote:
> (first try was posting from the wrong address--sorry if you get this
> twice)
>
>> Hopefully you are prepared to get an error when you do this since
>> some devices require buffers to be a power of 2 or some other even
>> number.
>
> interesting, I guess the default output device doesn't. I switched it
> to frames anyway, and now it is being called back when I expect.
> Thanks.
IOAudio-based drivers don't have this restriction but other kinds of
drivers, like Digidesign's current implementation, do.
>>> In the callback, I also store the next play time in nanoseconds
>>> (AudioConvertHostTimeToNanos) and, when asked, add that to device +
>>> stream latency in nanos and return it as the next play time.
>>
>> I'm not sure I follow what you are trying to accomplish here. What
>> time stamps are you using and for what purpose?
>
> The idea is that the guy using this object can ask for the next play
> time in terms of system ticks--as close as possible to when the user
> will hear the sound.
Ok. I'm with you here. You're trying to get the presentation time of
the data. It sounds like you are doing output, that is, you want to
compute when the output data will get to the speaker, right?
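
For reference, the latency figures involved come from kAudioDevicePropertyLatency on the device and kAudioStreamPropertyLatency on the output stream, both reported in frames. A rough, untested sketch of pulling them together (theDevice and theOutputStream are placeholders for IDs you already have, error handling omitted):

    #include <CoreAudio/CoreAudio.h>

    /* Sum the output-side latencies, in frames. */
    static UInt32 GetOutputLatencyFrames(AudioDeviceID theDevice,
                                         AudioStreamID theOutputStream)
    {
        UInt32 deviceLatency = 0, streamLatency = 0;
        UInt32 size = sizeof(UInt32);

        /* latency the device itself reports, in frames */
        AudioDeviceGetProperty(theDevice, 0, 0 /* isInput: output section */,
                               kAudioDevicePropertyLatency,
                               &size, &deviceLatency);

        /* additional latency of the output stream, in frames */
        size = sizeof(UInt32);
        AudioStreamGetProperty(theOutputStream, 0, kAudioStreamPropertyLatency,
                               &size, &streamLatency);

        return deviceLatency + streamLatency;
    }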
> In configure, I ask for the device and stream latencies, and I
> register an observer for if they change. At the end of the render
> callback, I do this:
>
>     _nextPlayTime = AudioConvertHostTimeToNanos(((AudioTimeStamp*)inTimeStamp)->mHostTime);
If you are doing output, you are using the wrong time stamp. You would
want to base your calculation on the output time stamp, the
inOutputTime parameter in your IOProc. inInputTime is the time stamp
for the input data.
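
Concretely, in an AudioDeviceIOProc it's the inOutputTime parameter you want to latch. A minimal, untested sketch; gNextOutputHostTime is just a stand-in for however you hand the value back to your object:

    #include <CoreAudio/CoreAudio.h>

    static UInt64 gNextOutputHostTime = 0;

    static OSStatus MyIOProc(AudioDeviceID          inDevice,
                             const AudioTimeStamp*  inNow,
                             const AudioBufferList* inInputData,
                             const AudioTimeStamp*  inInputTime,
                             AudioBufferList*       outOutputData,
                             const AudioTimeStamp*  inOutputTime,
                             void*                  inClientData)
    {
        /* inOutputTime describes when the data rendered in this call
           will be handed to the hardware; inInputTime only describes
           the input data. */
        if (inOutputTime->mFlags & kAudioTimeStampHostTimeValid)
            gNextOutputHostTime = inOutputTime->mHostTime;

        /* ... fill outOutputData here ... */
        return kAudioHardwareNoError;
    }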
> In the GetNextPlayTime method, I convert the device + stream latencies
> to nanoseconds, I get the nanoseconds per tick, I add the next play
> time + latency, and I return the ticks.
Hopefully you aren't converting kAudioDevicePropertyLatency to
nanoseconds using the nominal sample rate. This will put you off by
whatever the true sample rate of the device is. You should be using the
HAL's time conversion routines to move around in the device's time
base, or alternately using the rate scalar value in the AudioTimeStamp
passed to your IOProc.
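
The HAL routine I have in mind is AudioDeviceTranslateTime: express the latency in frames, add it to the output sample time, and let the HAL hand you back the corresponding host time. An untested sketch along those lines (latencyFrames is assumed to be the device + stream latency you already queried):

    #include <CoreAudio/CoreAudio.h>

    /* Presentation host time = output time + latency, computed in the
       device's own time base rather than with the nominal sample rate. */
    static UInt64 PresentationHostTime(AudioDeviceID         theDevice,
                                       const AudioTimeStamp* inOutputTime,
                                       UInt32                latencyFrames)
    {
        AudioTimeStamp in  = { 0 };
        AudioTimeStamp out = { 0 };

        in.mSampleTime = inOutputTime->mSampleTime + latencyFrames;
        in.mFlags  = kAudioTimeStampSampleTimeValid; /* translate from sample time */
        out.mFlags = kAudioTimeStampHostTimeValid;   /* ...into host time */

        if (AudioDeviceTranslateTime(theDevice, &in, &out) != kAudioHardwareNoError)
            return 0;

        return out.mHostTime;
    }

From there, AudioConvertHostTimeToNanos will get you to nanoseconds, the same as you're doing now.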
> The test I'm using waits until everything is finished playing, gets
> the current time (in ticks) and adds a small interval. Then, in a
> different object that can play silence, it gets the current time and
> the predicted next play time. It finds the difference between the
> two, and it inserts silence for that many ticks. Next, it gets the
> predicted next play time again and takes the difference between those
> two. On Linux, this difference is very small--< 2 audio frames. On
> the Mac, it's varying wildly from 50 - 3000 audio frames, even once
> I've confirmed that my buffer is called every 10ms. I'd be happy if
> this was just my doing something stupid, but I'm not sure what.
That difference seems to be absurdly off. My guess is that you have
some bad assumptions in your code somewhere. The HAL/IOAudio family on
OS X is unlike anything on Linux.
Also bear in mind that the HAL is tracking the true rate of the
hardware and that gets updated constantly so any prediction you are
making that doesn't take this into account (by going through the HAL's
conversion routines) is going to be off.
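
If you'd rather not call back into the HAL, the same correction can come from the rate scalar in the time stamp. An untested sketch, assuming the stamp came from your IOProc and that mRateScalar is the ratio of actual to nominal host ticks per frame:

    #include <CoreAudio/CoreAudio.h>

    /* Convert a number of frames into host ticks, scaled by the current
       rate scalar so the result tracks the true hardware rate. */
    static UInt64 FramesToHostTicks(UInt32                inFrames,
                                    Float64               inNominalSampleRate,
                                    const AudioTimeStamp* inOutputTime)
    {
        Float64 ticksPerFrame = AudioGetHostClockFrequency() / inNominalSampleRate;

        if (inOutputTime->mFlags & kAudioTimeStampRateScalarValid)
            ticksPerFrame *= inOutputTime->mRateScalar;

        return (UInt64)((Float64)inFrames * ticksPerFrame);
    }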
--
Jeff Moore
Core Audio
Apple