Re: FW: Audio delay when using CoreAudio outputs with QuickTime under 10.2
- Subject: Re: FW: Audio delay when using CoreAudio outputs with QuickTime under 10.2
- From: Jeff Moore <email@hidden>
- Date: Mon, 21 Apr 2003 15:29:09 -0700
Your problem is likely due to the fact that the Sound Manager doesn't
provide sample-accurate sync with compressed audio data. With MP3,
each packet of data has 1152 samples. That's the granularity with which
you can schedule the data for playback with the Sound Manager, since it
can only deal with things in terms of whole packets. QT, and all Sound
Manager playback clients, inherit this limitation.
Speaking as a DJ (I spin drum & bass), I can say that a 1152-frame
scheduling granularity makes it nigh impossible to actually do
beat matching with any degree of confidence. It basically means you can
only drop in a new track every 26.1 milliseconds (assuming a 44100
sample rate). A typical d&b track spinning at 180 BPM, where each beat
spans roughly 333 and a third milliseconds, won't line up on those
intervals too often. You're left dropping the track and then nudging
the beats into place with pitch bend pretty much every time.
Your only solution is to take more control over the playback process
and decode the data yourself, so that you can schedule everything with
sample accurate precision.
On Monday, April 21, 2003, at 02:57 PM, Dave Addey wrote:
Hi,
I've recently released a DJ application for the Mac, and I have a problem with my use of CoreAudio and QuickTime for multiple outputs.
In order for users to DJ properly, they need to be able to listen to a song (actually, an MP3) through two devices at the same time. The only way I've been able to achieve this is to create three identical MP3 audio tracks in a movie (two copies of the original), and assign a different output device to tracks 2 and 3. I set the volume of track 1 to zero.
(Aside: The reason for using 3 tracks rather than 2 is that setting the output component of track 1 of a movie sets it for all tracks, for some reason. So, I have to silence track 1, and then set the output component of tracks 2 and 3.)
I'm using the SoundComponent sdev "aliases" for CoreAudio devices (such as the built-in sound output, and a Griffin iMic) available in 10.2, and assigning these to the tracks of my QuickTime movie using MediaSetSoundOutputComponent from the QuickTime API.
However, when I play the movie, sometimes the two tracks are slightly out of sync by a small but significant fraction of a second. I thought this might be due to using two different physical devices, but it happens even when both audible tracks are set to the same device. Further, actions such as changing the rate, volume and balance can increase this delay. Stopping and starting the movie syncs things back again (with the same slight initial delay).
It seems to be track 2 that gets ahead of track 3 when I press play. But sometimes they're in sync and there's no delay. I can't find out what's causing the delay, and it's not consistent.
My question is: can I somehow increase the sample buffer size (or whatever else I could do) to reduce this slight sync problem? Or is there a different approach I should be taking? It's the only remaining known issue in my application, and it's very frustrating! Any help much appreciated.
Thanks,
Dave.
------------------------------------
Dave Addey
email@hidden
DJ-1800
Complete MP3 DJ solution for the Mac
http://www.dj1800.com/
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.
--
Jeff Moore
Core Audio
Apple