

Re: render callback timing


  • Subject: Re: render callback timing
  • From: Josh Anon <email@hidden>
  • Date: Fri, 05 Mar 2004 15:09:02 -0800

> Ok. I'm with you here. You're trying to get the presentation time of
> the data. It sounds like you are doing output, that is, you want to
> compute when the output data will get to the speaker, right?

Yep.

>> In configure, I ask for the device and stream latencies, and I
>> register an observer in case they change. At the end of the render
>> callback, I do this:
>>
>>     _nextPlayTime = AudioConvertHostTimeToNanos(((AudioTimeStamp*)inTimeStamp)->mHostTime);
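For reference, a minimal sketch of the configure-time latency queries and change observer described above, using the HAL property calls of that era; the property and function names are real, the variable and helper names are assumed:

    #include <CoreAudio/CoreAudio.h>

    /* Hypothetical listener that simply re-reads the latencies when they change. */
    static OSStatus MyLatencyListener(AudioDeviceID inDevice, UInt32 inChannel,
                                      Boolean isInput,
                                      AudioDevicePropertyID inPropertyID,
                                      void *inClientData)
    {
        /* re-query kAudioDevicePropertyLatency / kAudioStreamPropertyLatency here */
        return noErr;
    }

    static void QueryLatencies(AudioDeviceID deviceID, AudioStreamID streamID)
    {
        UInt32 deviceLatency = 0, streamLatency = 0;   /* both reported in frames */
        UInt32 size = sizeof(deviceLatency);

        AudioDeviceGetProperty(deviceID, 0, false, kAudioDevicePropertyLatency,
                               &size, &deviceLatency);

        size = sizeof(streamLatency);
        AudioStreamGetProperty(streamID, 0, kAudioStreamPropertyLatency,
                               &size, &streamLatency);

        /* re-fetch if the device latency ever changes */
        AudioDeviceAddPropertyListener(deviceID, 0, false,
                                       kAudioDevicePropertyLatency,
                                       MyLatencyListener, NULL);
    }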

> If you are doing output, you are using the wrong time stamp. You would
> want to base your calculation on the output time stamp, the
> inOutputTime parameter in your IOProc. inInputTime is the time stamp
> for the input data.
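For context, the IOProc Jeff is talking about carries separate time stamps for input and output. The prototype below matches CoreAudio's AudioDeviceIOProc; the body is illustrative only:

    static OSStatus MyIOProc(AudioDeviceID          inDevice,
                             const AudioTimeStamp  *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp  *inInputTime,
                             AudioBufferList       *outOutputData,
                             const AudioTimeStamp  *inOutputTime,
                             void                  *inClientData)
    {
        /* inOutputTime is the time stamp for the output data, inInputTime for
           the input data; playback timing math would start from inOutputTime. */
        UInt64 outputHostTime = inOutputTime->mHostTime;
        (void)outputHostTime;
        return noErr;
    }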
Hmm, I'm using AudioUnits with the kAudioUnitProperty_SetRenderCallback property, per a suggestion an engineer made to my colleague who did the initial CoreAudio analysis, and there's only one time stamp in that callback (the args are void*, AudioUnitRenderActionFlags*, AudioTimeStamp*, UInt32, UInt32, AudioBufferList*).
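That argument list is the AURenderCallback shape from the AudioUnit headers, which indeed exposes only the one time stamp. A sketch of the callback and of installing it via kAudioUnitProperty_SetRenderCallback (outputUnit and the refCon are assumed to exist):

    static OSStatus MyRenderCallback(void                        *inRefCon,
                                     AudioUnitRenderActionFlags  *ioActionFlags,
                                     const AudioTimeStamp        *inTimeStamp,
                                     UInt32                       inBusNumber,
                                     UInt32                       inNumberFrames,
                                     AudioBufferList             *ioData)
    {
        /* only inTimeStamp is available here; there is no separate output time stamp */
        return noErr;
    }

    /* Installing the callback on an output unit (sketch): */
    AURenderCallbackStruct cb = { MyRenderCallback, NULL };
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));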

>> In the GetNextPlayTime method, I convert the device + stream latencies
>> to nanoseconds, I get the nanoseconds per tick, I add the next play
>> time + latency, and I return the ticks.
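A sketch of what that GetNextPlayTime arithmetic might look like (all names are assumptions, AudioConvertNanosToHostTime stands in for the nanoseconds-per-tick step, and the nominal-rate conversion is exactly what Jeff cautions against next):

    #include <CoreAudio/CoreAudio.h>   /* AudioConvertNanosToHostTime lives in HostTime.h */

    /* Hypothetical: convert the summed device + stream latency (frames) to
       nanoseconds via the nominal sample rate, add the stored play time, and
       return the result as host ticks. */
    static UInt64 GetNextPlayTime(UInt64 nextPlayNanos,
                                  UInt32 totalLatencyFrames,
                                  Float64 nominalSampleRate)
    {
        Float64 latencyNanos =
            ((Float64)totalLatencyFrames / nominalSampleRate) * 1000000000.0;
        return AudioConvertNanosToHostTime(nextPlayNanos + (UInt64)latencyNanos);
    }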

> Hopefully you aren't converting kAudioDevicePropertyLatency to
> nanoseconds using the nominal sample rate. This will put you off by
> whatever the true sample rate of the device is. You should be using the
> HAL's time conversion routines to move around in the device's time
> base, or alternately using the rate scalar value in the AudioTimeStamp
> passed to your IOProc.
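One way to read the rate-scalar suggestion, treating mRateScalar in the callback's AudioTimeStamp as the ratio of actual to nominal host ticks per frame (a sketch under that assumption, names made up):

    /* Sketch: scale the nominal frames-to-nanoseconds conversion by the time
       stamp's rate scalar so it tracks the device's true rate. */
    static UInt64 LatencyFramesToNanos(UInt32 latencyFrames,
                                       Float64 nominalSampleRate,
                                       const AudioTimeStamp *inTimeStamp)
    {
        Float64 nominalNanos =
            ((Float64)latencyFrames / nominalSampleRate) * 1000000000.0;
        return (UInt64)(nominalNanos * inTimeStamp->mRateScalar);
    }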
In my config routine, I query the device and stream latencies and store them, and I register an observer to re-fetch them should they change. I was using the nominal sample rate. As for the HAL conversion routines, I see AudioDeviceTranslateTime(AudioDeviceID, const AudioTimeStamp*, AudioTimeStamp*), but in the AudioTimeStamp struct I don't see a way to deal with frames of audio (which is what kAudioDevicePropertyLatency returns, correct?). Again, though, even if I assume the latency value is 0 and just use the time from the render callback, my test still gives crazy numbers.
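For what it's worth, a sketch of feeding a frame count through AudioDeviceTranslateTime, assuming mSampleTime plus the mFlags bits are how frame and host times are expressed in an AudioTimeStamp (deviceID and someSampleTime are hypothetical):

    /* Sketch: translate a sample (frame) time in the device's time base to a
       host time. */
    AudioTimeStamp inTime  = { 0 };
    AudioTimeStamp outTime = { 0 };

    inTime.mSampleTime = someSampleTime;                 /* frames */
    inTime.mFlags      = kAudioTimeStampSampleTimeValid;
    outTime.mFlags     = kAudioTimeStampHostTimeValid;   /* ask for host time back */

    OSStatus err = AudioDeviceTranslateTime(deviceID, &inTime, &outTime);
    /* on success, outTime.mHostTime should hold the translated host time */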

> That difference seems to be absurdly off. My guess is that you have
> some bad assumptions in your code somewhere. The HAL/IOAudio family on
> OS X is unlike anything on Linux.
I agree with you, although this code is really simple with very few assumptions. It's basically: write into a ring buffer, blocking if needed, and feed Core Audio from that buffer in the callback, also blocking if needed.
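A bare-bones sketch of that ring-buffer hand-off (names, sizes, and the pthread-based blocking are assumptions; it only illustrates the shape of the code being described):

    #include <pthread.h>
    #include <stddef.h>

    #define RING_BYTES 65536

    typedef struct {
        unsigned char   data[RING_BYTES];
        size_t          head, tail, used;
        pthread_mutex_t lock;
        pthread_cond_t  notFull, notEmpty;
    } RingBuffer;

    /* Producer: block while the buffer is full, then append. */
    static void RingWrite(RingBuffer *rb, const unsigned char *src, size_t len)
    {
        pthread_mutex_lock(&rb->lock);
        for (size_t i = 0; i < len; ++i) {
            while (rb->used == RING_BYTES)
                pthread_cond_wait(&rb->notFull, &rb->lock);
            rb->data[rb->head] = src[i];
            rb->head = (rb->head + 1) % RING_BYTES;
            rb->used++;
            pthread_cond_signal(&rb->notEmpty);
        }
        pthread_mutex_unlock(&rb->lock);
    }

    /* Consumer, called from the render callback: block while empty, then drain. */
    static void RingRead(RingBuffer *rb, unsigned char *dst, size_t len)
    {
        pthread_mutex_lock(&rb->lock);
        for (size_t i = 0; i < len; ++i) {
            while (rb->used == 0)
                pthread_cond_wait(&rb->notEmpty, &rb->lock);
            dst[i] = rb->data[rb->tail];
            rb->tail = (rb->tail + 1) % RING_BYTES;
            rb->used--;
            pthread_cond_signal(&rb->notFull);
        }
        pthread_mutex_unlock(&rb->lock);
    }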

> Also bear in mind that the HAL is tracking the true rate of the
> hardware and that gets updated constantly, so any prediction you are
> making that doesn't take this into account (by going through the HAL's
> conversion routines) is going to be off.
It sounds like you're saying I should just rework everything to use the HAL directly. Is that the case, or is this type of thing do-able with AudioUnits?

Thanks,
Josh


---
Josh Anon
Studio Tools, Pixar Animation Studios
email@hidden

