Re: Sync audio tracks
- Subject: Re: Sync audio tracks
- From: William Stewart <email@hidden>
- Date: Mon, 3 Nov 2008 12:39:45 -0800
The timing information you get from the audio queue allows you to
align the output of any given audio queue exactly, at a sample
boundary, even with compressed audio.
The audio queue does not arbitrarily shift data around.
To get you started, the best way to do this is:
Start two queues.
Schedule a buffer command on each queue for, say, a second in advance
(using the SAME time value), supplying just the host time value in the
audio time stamp you provide with the schedule-buffer command.
Once you start them together, any subsequent buffer can just be
enqueued with the simple enqueue-buffer call (as it will abut against
the previous buffer).
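A minimal sketch of that scheduled start, assuming two already-created
queues with one primed buffer each; the names (queueA, queueB, bufA,
bufB) are hypothetical, and error checking is omitted:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <mach/mach_time.h>

/* Schedule the first buffer of each of two queues to start at the
   SAME host time, about one second in the future. */
void start_queues_in_sync(AudioQueueRef queueA, AudioQueueRef queueB,
                          AudioQueueBufferRef bufA, AudioQueueBufferRef bufB)
{
    /* One shared start time, expressed purely as a host time. */
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    uint64_t oneSecondTicks = (1000000000ULL * tb.denom) / tb.numer;

    AudioTimeStamp startTime = { 0 };
    startTime.mFlags    = kAudioTimeStampHostTimeValid;
    startTime.mHostTime = mach_absolute_time() + oneSecondTicks;

    /* Enqueue the first buffer on each queue with the shared start time
       (no packet descriptions, trimming, or parameter events here). */
    AudioQueueEnqueueBufferWithParameters(queueA, bufA, 0, NULL,
                                          0, 0, 0, NULL, &startTime, NULL);
    AudioQueueEnqueueBufferWithParameters(queueB, bufB, 0, NULL,
                                          0, 0, 0, NULL, &startTime, NULL);

    /* Start both queues now; audible output on each begins at startTime. */
    AudioQueueStart(queueA, NULL);
    AudioQueueStart(queueB, NULL);

    /* From here on, plain AudioQueueEnqueueBuffer suffices for later
       buffers: each abuts the previous one, so alignment is preserved. */
}
```

This is platform-specific and needs real queues and hardware to run, so
treat it as an illustration of the call sequence rather than a drop-in
function.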
A good way to test this is to have one queue play the same file but
with the sample values inverted around zero. You should hear silence.
If you then offset the samples by a small amount, you will hear a
gradual introduction of differences.
On Nov 1, 2008, at 8:04 AM, Savvas Constantinides wrote:
I have managed to play sounds using the AudioQueue services. I can
even play two or more sounds at the same time using one or several
audio queues, but I have a problem trying to sync them together. I
need to sync the second track to the first while the first is
already playing.
Just to make this clear: I am not trying to sync different audio
files, or files with different BPM; I am actually trying to sync two
copies of the same audio track.
What I have tried so far is to get the AudioTimeStamp of the
hardware device, so I know the exact sample playing at the moment I
press play on the second one. I then fill the buffers of the second
AudioQueue and play it immediately. Unfortunately, although really
close, the two tracks are not exactly in time, and their time
difference is not constant. It varies a bit and creates a phasing
effect. I am wondering how I can overcome this problem. Is it due to
the latency of the hardware? If it were, the time difference should
be constant, so there must be something else going on. Is it maybe
because of too many calculations (I doubt that)?
Something else I tried was to use only one AudioQueue with multiple
audio files and mix the samples in the callback function. There is a
noticeable delay there, of course, because I am actually filling the
buffer that was just played, which means there will be at least two
buffer lengths of delay before the second file reaches the
speakers.
I think the first approach is better, but I am missing something.
Any ideas or clues?
Savvas
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden