Re: Render Buffer Sizes Usually Consistent?
- Subject: Re: Render Buffer Sizes Usually Consistent?
- From: Brian Willoughby <email@hidden>
- Date: Tue, 10 Feb 2009 13:29:59 -0800
If I follow your description, you're basing your timing on offsets
from audio buffers, which leaves you vulnerable to any variations in
the number of buffer frames, however unlikely. Is there any way
instead that you could base your timing on the CoreAudio clock, and
set your sample offset when the audio buffer is rendered by comparing
the clock value delivered in the render callback? The CoreAudio clock
is not going to shift around, and the callbacks should provide a
reference to the corresponding clock for the current buffer.
Seems like you'll still have to estimate things based on imprecise
audio rates, but tying that estimate to the CoreAudio clock should be
more reliable than tying it to a sample offset in a particular buffer
(although it will get translated to that at the last minute).
Brian Willoughby
Sound Consulting
On Feb 10, 2009, at 10:02, James Chandler Jr wrote:
The RenderCallBack transmits MIDI to a MusicDevice via
MusicDeviceMIDIEvent(), calls AudioUnitRender(), and then generates
tempo-stretched multitrack audio. Then the RenderCallBack mixes
rendered MIDI with the multitrack audio and returns. The app does not
use Apple timestretch, sample converter, or mixer AUs.
The MIDI seems to have good relative timing. When the app transmits
MIDI, it interpolates the expected time location of the MIDI packet to
an estimated sample offset into the 'next buffer that will get
rendered', and then places the event at the proper place in the buffer
by using the OffsetSampleFrame parameter of MusicDeviceMIDIEvent().
This gives the MIDI one buffer of latency, but good relative timing.
To account for possible edge cases: it looks easy to micro-adjust the
MIDI tempo to agree with a possibly off-speed audio hardware
samplerate, if the DefaultOutputUnit 'almost always' uses the same
NumberOfFrames buffer size in the RenderCallBack (after the
DefaultOutputUnit has been started and has decided whatever buffer
size it wants to use).
However, if it is common for the DefaultOutputUnit to alter its render
buffer size 'on the fly', with different NumberOfFrames requested at
unpredictable times, that would introduce a variable MIDI latency and
require smarter code. For instance, one render requesting 2000 frames,
the next render requesting 500 frames, the next render requesting 3000
frames, whatever. My mechanism would still have good relative MIDI
timing, but the latency between MIDI and audio could drift around.
Apologies for such a tedious message. Being naturally lazy, I don't
want to over-design the sync mechanism.
Does anyone know if I can count on the DefaultOutputUnit USUALLY doing
a RenderCallBack using the same NumberOfFrames render buffer size?
Occasional different-sized buffers wouldn't be much of a problem. A
'permanent switch' from one buffer size to another wouldn't be much of
a problem either. Constantly-varying buffer sizes would be a problem.
_______________________________________________
Coreaudio-api mailing list (email@hidden)