Re: Render Buffer Sizes Usually Consistent?


  • Subject: Re: Render Buffer Sizes Usually Consistent?
  • From: James Chandler Jr <email@hidden>
  • Date: Tue, 24 Feb 2009 12:54:52 -0500

Thanks for the reply, Brian. Sorry for not replying sooner.

Will try to find time to take another look at the CoreAudio clock. I read briefly about the CoreAudio clock a while ago, but (hopefully not whining too pitifully) I have not seen extensive documentation on CoreAudio topics, and at the time I did not understand whether the CoreAudio clock would be a benefit. I'm no expert.

Some trivia in case it is interesting...

Motivations for the current (possibly strange) playback scheme:

1. Avoid forking an unmaintainably large distance from similar Windows code.
2. Use the same playback mechanism whether sending to CoreAudio MusicDevice, CoreMIDI, or both.


It seems to be working well. I have not attempted metering, but timing is good enough that one might suspect the CoreAudio render callbacks have very low timing jitter. In this playback scheme, large render-time jitter would be easily audible in the playback timing.

MIDI playback timing uses a legacy 1 ms resolution TMTask, eventually to be replaced by a thread. The TMTask calculates the current MIDI tick from UpTime() according to the tempo map, and sends MIDI events either to CoreMIDI or to a private timestamped FIFO queue destined for the CoreAudio MusicDevice. The MIDI event timestamps in the private queue are expressed as a sample-frame offset from the last AudioRenderCallback. The sample-frame offset is easy to calculate given the nanosecond time of the last AudioRenderCallback and the sample rate.
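
Roughly like this sketch (names are illustrative, not the actual app code; gLastRenderHostTime and gSampleRate are assumed to be maintained elsewhere):

    #include <CoreAudio/HostTime.h>

    static UInt64  gLastRenderHostTime; /* recorded in the render callback */
    static Float64 gSampleRate = 44100.0;

    /* Convert a MIDI event's host time into a sample-frame offset
       from the most recent render callback. */
    static UInt32 FrameOffsetForEvent(UInt64 eventHostTime)
    {
        UInt64 renderNanos = AudioConvertHostTimeToNanos(gLastRenderHostTime);
        UInt64 eventNanos  = AudioConvertHostTimeToNanos(eventHostTime);
        double deltaSecs   = (eventNanos > renderNanos)
                           ? (double)(eventNanos - renderNanos) * 1.0e-9
                           : 0.0;
        return (UInt32)(deltaSecs * gSampleRate + 0.5); /* nearest frame */
    }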

Audio starts up when the program boots and runs constantly. When the user clicks the play button, the TMTask waits until the next CoreAudio render callback before it starts MIDI playback. That way, MIDI is always one render-buffer-size 'ahead' of the audio.

The audio RenderCallback drains the private timestamped MIDI event queue and sends the events to the MusicDevice immediately before rendering it. Then it renders the tempo-aware audio tracks and mixes them with the rendered MIDI.
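
In outline, the callback does something like the following (a sketch only; DequeueMIDIEvent, RenderAudioTracks, and MixInto stand in for the app's own helpers):

    #include <AudioUnit/AudioUnit.h>
    #include <AudioToolbox/AudioToolbox.h>

    typedef struct { UInt32 status, data1, data2, offsetFrame; } QueuedMIDIEvent;
    Boolean DequeueMIDIEvent(QueuedMIDIEvent *outEvent);       /* hypothetical */
    void    RenderAudioTracks(UInt32 inFrames);                /* hypothetical */
    void    MixInto(AudioBufferList *ioData, UInt32 inFrames); /* hypothetical */

    static OSStatus MyRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
    {
        AudioUnit synth = (AudioUnit)inRefCon; /* the MusicDevice instance */
        QueuedMIDIEvent ev;

        /* 1. Drain the private queue into the MusicDevice, each event
              at its precomputed sample-frame offset within this buffer. */
        while (DequeueMIDIEvent(&ev))
            MusicDeviceMIDIEvent(synth, ev.status, ev.data1, ev.data2,
                                 ev.offsetFrame);

        /* 2. Render the MusicDevice for this buffer. */
        AudioUnitRender(synth, ioFlags, inTimeStamp, 0, inNumberFrames, ioData);

        /* 3. Render the tempo-aware audio tracks and mix them in. */
        RenderAudioTracks(inNumberFrames);
        MixInto(ioData, inNumberFrames);
        return noErr;
    }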

The 'sync failsafe' implemented last week seems OK so far. After the tempo-aware audio tracks have been rendered for a buffer, and before the RenderCallback returns, the AudioTracks' TickLocation at the END of the RenderCallback should ideally be very close to the MIDI TickLocation as measured at the BEGINNING of the RenderCallback (because of the one-buffer time offset between MIDI and audio). In other words, after the current buffer has been rendered, the audio location should have exactly 'caught up' to the MIDI TickLocation from the beginning of the RenderCallback.

So if the two TickLocations differ by more than a few ms of slop, the code gradually fine-adjusts the MIDI tick, so that the MIDI TickLocation smoothly catches up or falls back into the audio TickLocation's slop window. It seems to work.
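
Conceptually, the adjustment looks something like this sketch (the 10% correction per buffer is an arbitrary illustrative rate, not the app's actual value):

    #include <math.h>

    /* Nudge the MIDI tick a fraction of the measured error each buffer,
       rather than jumping, so the correction stays inaudible. */
    static void FineAdjustMIDITick(double *ioMidiTick, double audioTick,
                                   double slopTicks)
    {
        double error = audioTick - *ioMidiTick;
        if (fabs(error) > slopTicks)
            *ioMidiTick += error * 0.1; /* correct 10% per buffer */
    }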

To hopefully account for Doug's suggested worst-case scenario, where the user changes the sample rate, causing the HAL to resample and call the AudioRenderCallback with irregular buffer sizes, I added a SmoothedBufferSize variable that uses a first-order IIR lowpass filter to track the 'average' buffer size. Under normal conditions the buffer size will always be the same, but the IIR calculation is cheap. In cases of wobbling buffer size, sync adjustment against the long-term average buffer size may help. I have not yet tested whether this buys anything useful.
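
The filter itself is just a one-pole lowpass on each callback's frame count, something like this sketch (the 0.95 coefficient is illustrative):

    #include <MacTypes.h>

    static double gSmoothedBufferSize = 0.0;

    static void UpdateSmoothedBufferSize(UInt32 inNumberFrames)
    {
        const double kAlpha = 0.95; /* heavier weight on history */
        if (gSmoothedBufferSize == 0.0)
            gSmoothedBufferSize = inNumberFrames; /* seed on first callback */
        else
            gSmoothedBufferSize = kAlpha * gSmoothedBufferSize
                                + (1.0 - kAlpha) * inNumberFrames;
    }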

Apologies for a long trivia message.

James Chandler Jr.

On Feb 10, 2009, at 4:29 PM, Brian Willoughby wrote:

If I follow your description, you're basing your timing on offsets from audio buffers, which leaves you vulnerable to any variations in the number of buffer frames, however unlikely. Is there any way instead that you could base your timing on the CoreAudio clock, and set your sample offset when the audio buffer is rendered by comparing the clock value delivered in the render callback? The CoreAudio clock is not going to shift around, and the callbacks should provide a reference to the corresponding clock for the current buffer.

Seems like you'll still have to estimate things based on imprecise audio rates, but tying that estimate to the CoreAudio clock should be more reliable than tying it to a sample offset in a particular buffer (although it will get translated to that at the last minute).
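
(A minimal sketch of that idea, anchoring to the AudioTimeStamp the render callback already receives; the names here are illustrative, not from either app:)

    #include <CoreAudio/CoreAudioTypes.h>
    #include <CoreAudio/HostTime.h>

    static Float64 gAnchorSampleTime; /* sample time of the current buffer */
    static UInt64  gAnchorNanos;      /* host time of the same instant */

    /* Record the (sample time, host time) pair delivered to the render
       callback, to serve as the sync anchor for the current buffer. */
    static void NoteRenderTimeStamp(const AudioTimeStamp *inTimeStamp)
    {
        if ((inTimeStamp->mFlags & kAudioTimeStampSampleTimeValid) &&
            (inTimeStamp->mFlags & kAudioTimeStampHostTimeValid)) {
            gAnchorSampleTime = inTimeStamp->mSampleTime;
            gAnchorNanos = AudioConvertHostTimeToNanos(inTimeStamp->mHostTime);
        }
    }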

Brian Willoughby
Sound Consulting


On Feb 10, 2009, at 10:02, James Chandler Jr wrote:

The RenderCallBack transmits MIDI to a MusicDevice via MusicDeviceMIDIEvent(), calls AudioUnitRender(), and then generates tempo-stretched multitrack audio. Then the RenderCallBack mixes the rendered MIDI with the multitrack audio and returns. The app does not use Apple's timestretch, sample-converter, or mixer AUs.

The MIDI seems to have good relative timing. When the app transmits MIDI, it interpolates the expected time location of the MIDI packet to an estimated sample offset into the 'next buffer that will get rendered', and then places the event at the proper place in the buffer by using the OffsetSampleFrame parameter of MusicDeviceMIDIEvent(). This gives a one-buffer latency to the MIDI, but good relative timing.

To account for possible edge cases: it looks easy to micro-adjust the MIDI tempo to agree with possibly off-speed audio hardware sample rates, if the DefaultOutputUnit 'almost always' uses the same NumberOfFrames buffer size in the RenderCallBack (after it has been started and has decided whatever buffer size it wants to use).

However, if it is common for the DefaultOutputUnit to alter its render buffer size 'on the fly', so that different NumberOfFrames can get requested at unpredictable times, it would introduce a variable MIDI latency situation which would require smarter code. For instance, one render requesting 2000 frames, the next render requesting 500 frames, the next render requesting 3000 frames, whatever. My mechanism would still have good relative MIDI timing, but the latency between MIDI and audio could drift around.

Apologies for such a tedious message. Being naturally lazy, I don't want to over-design the sync mechanism.

Does anyone know if I can count on the DefaultOutputUnit USUALLY doing a RenderCallBack using the same NumberOfFrames render buffer size? Occasional different-sized buffers wouldn't be much of a problem. A 'permanent switch' from one buffer size to another wouldn't be much of a problem. Constantly varying buffer sizes would be a problem.


References:
  • Render Buffer Sizes Usually Consistent? (From: James Chandler Jr <email@hidden>)
  • Re: Render Buffer Sizes Usually Consistent? (From: Brian Willoughby <email@hidden>)
