
Re: render callback timing


  • Subject: Re: render callback timing
  • From: Jeff Moore <email@hidden>
  • Date: Fri, 5 Mar 2004 15:32:39 -0800

On Mar 5, 2004, at 3:09 PM, Josh Anon wrote:

In configure, I ask for the device and stream latencies, and I
register an observer in case they change. At the end of the render
callback, I do this:

_nextPlayTime =
    AudioConvertHostTimeToNanos(((AudioTimeStamp*)inTimeStamp)->mHostTime);

If you are doing output, you are using the wrong time stamp. You would
want to base your calculation on the output time stamp, the
inOutputTime parameter in your IOProc. inInputTime is the time stamp
for the input data.
Hmm, I'm using AudioUnits (per one engineer's suggestion to my colleague who did the initial CoreAudio analysis) and the kAudioUnitProperty_SetRenderCallback property, and there's only one time stamp in the callback (the args are void*, AudioUnitRenderActionFlags*, AudioTimeStamp*, UInt32, UInt32, AudioBufferList*).
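
For reference, the callback type Josh is describing is AURenderCallback from AUComponent.h, which carries a single time stamp:

    typedef OSStatus (*AURenderCallback)(void*                       inRefCon,
                                         AudioUnitRenderActionFlags* ioActionFlags,
                                         const AudioTimeStamp*       inTimeStamp,
                                         UInt32                      inBusNumber,
                                         UInt32                      inNumberFrames,
                                         AudioBufferList*            ioData);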

Never mind. I thought you were using the HAL directly where there are three time stamps passed to your IOProc, including one named inInputTime, which specifically refers to time for the input data since the HAL delivers both input and output simultaneously.
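
Those three time stamps are visible in the HAL's AudioDeviceIOProc type from AudioHardware.h:

    typedef OSStatus (*AudioDeviceIOProc)(AudioDeviceID          inDevice,
                                          const AudioTimeStamp*  inNow,
                                          const AudioBufferList* inInputData,
                                          const AudioTimeStamp*  inInputTime,
                                          AudioBufferList*       outOutputData,
                                          const AudioTimeStamp*  inOutputTime,
                                          void*                  inClientData);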

In the GetNextPlayTime method, I convert the device + stream latencies
to nanoseconds, get the nanoseconds per tick, add the latency to the
next play time, and return the ticks.

Hopefully you aren't converting kAudioDevicePropertyLatency to
nanoseconds using the nominal sample rate. This will put you off by
whatever the true sample rate of the device is. You should be using the
HAL's time conversion routines to move around in the device's time
base, or alternatively using the rate scalar value in the AudioTimeStamp
passed to your IOProc.
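
A minimal sketch of the rate-scalar approach (all names here are hypothetical; latencyFrames and nominalSampleRate are assumed to have been fetched elsewhere, e.g. via AudioDeviceGetProperty). mRateScalar is the ratio of actual to nominal host ticks per sample frame, so the true sample rate is the nominal rate divided by it:

    /* Inside an IOProc, using the output time stamp. */
    Float64 trueSampleRate = nominalSampleRate / inOutputTime->mRateScalar;
    UInt64  latencyNanos   = (UInt64)((Float64)latencyFrames / trueSampleRate * 1.0e9);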
In my config routine, I query the device and stream latencies, and I store those. I register an observer to re-get them should they change.

That's good.

I was using the nominal sample rate. For the HAL conversion routines, I see AudioDeviceTranslateTime(AudioDeviceID, const AudioTimeStamp*, AudioTimeStamp*), but in the AudioTimeStamp struct I don't see a way to deal with frames of audio (which is what kAudioDevicePropertyLatency returns, correct?). Again, though, if I assume the latency value to be 0 and just use the time from the render callback, my test still gives crazy numbers.

The mSampleTime field of the AudioTimeStamp is the sample time. AudioDeviceTranslateTime is all about taking in one representation of time and converting it to another. You set up the inTime with the sample time and set the flags of the outTime to say you want it in host time. It is vital to set the mFlags field correctly in the AudioTimeStamps.
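
A minimal sketch of that conversion, wrapped in a hypothetical helper (theDevice being the AudioDeviceID in use):

    #include <CoreAudio/CoreAudio.h>
    #include <string.h>

    /* Convert a device sample time into a host time via the HAL. */
    static UInt64 HostTimeForSampleTime(AudioDeviceID theDevice, Float64 theSampleTime)
    {
        AudioTimeStamp inTime, outTime;
        memset(&inTime, 0, sizeof(inTime));
        memset(&outTime, 0, sizeof(outTime));

        inTime.mSampleTime = theSampleTime;               /* what we have */
        inTime.mFlags = kAudioTimeStampSampleTimeValid;   /* say which field is valid */
        outTime.mFlags = kAudioTimeStampHostTimeValid;    /* say which field we want */

        if (AudioDeviceTranslateTime(theDevice, &inTime, &outTime) == noErr)
            return outTime.mHostTime;
        return 0;
    }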

That difference seems to be absurdly off. My guess is that you have
some bad assumptions in your code somewhere. The HAL/IOAudio family on
OS X is unlike anything on Linux.
I agree with you, although this code is really simple with very few assumptions. It's basically write into a ring buffer, blocking if needed, and feed Core Audio from that buffer in the callback, again blocking if needed.

But how do you run the timing of this stuff and how does that relate to what you put in the IOProc? Needless to say, the devil is in the details here.

Now that I know a little bit more about what you are trying to do, kAudioDevicePropertyLatency does not apply to the math you are doing since you are really just grabbing a point in time from the IOProc, offsetting it, and then looking for the new time in another IOProc. Adding the latency value will make you late, since presentation time isn't really what you need to work with here.
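
In other words, the offset math reduces to something like this sketch (hypothetical names; trueSampleRate derived as above, and no latency term):

    /* At the end of one render callback, predict the host time of the
       next one: the current host time plus one buffer's duration. */
    UInt64 nowNanos = AudioConvertHostTimeToNanos(inTimeStamp->mHostTime);
    _nextPlayTime = nowNanos +
        (UInt64)((Float64)inNumberFrames / trueSampleRate * 1.0e9);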

Also bear in mind that the HAL is tracking the true rate of the
hardware, and that gets updated constantly, so any prediction you are
making that doesn't take this into account (by going through the HAL's
conversion routines) is going to be off.
It sounds like you're saying I should just rework everything to use the HAL directly. Is that the case, or is this type of thing doable with AudioUnits?

No. AUHAL is a fine platform for what you are doing. I just didn't realize from what you had written that this was what you were doing.

One thing that occurred to me is that AUHAL doesn't directly pass the HAL's time stamps to its clients. Instead, it passes time stamps that have been massaged so that they get zeroed when things are started. I'm not entirely clear on what AUHAL is doing to the time stamps, so perhaps Doug could provide a clearer picture of how the time stamps you see in the render callbacks of AUHAL relate to the time stamps the HAL passes to AUHAL's IOProc.

--

Jeff Moore
Core Audio
Apple
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.


  • Follow-Ups:
    • Re: render callback timing
      • From: Doug Wyatt <email@hidden>
  • References:
    • Re: render callback timing
      • From: Josh Anon <email@hidden>
