Re: Latency Issue
- Subject: Re: Latency Issue
- From: <email@hidden>
- Date: Thu, 29 Jan 2004 21:26:45 -0500
Hi Jeff,
The hardware play-through was just a hard-wired connection, so it has no latency.

For the software play-through testing I have done, I used my audio driver as the input device and the built-in audio driver (system speaker) as the output device. I found that the audio coming out of the system speaker was around 300 ms behind the audio coming from the hardware play-through.

From what I understand, that 300 ms delay would come from:

Input Latency + Input Safety Offset + (2 * IO Buffer Size) + Output Safety Offset + Output Latency

So most of the delay comes from the IO buffer size, then? And is adjusting the IO buffer size the only way to minimize the latency?
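Just to make that concrete for myself, here is the sum with made-up numbers (these are not my driver's real values, only an illustration):

    #include <cstdio>

    int main()
    {
        // Hypothetical values at 44.1 kHz, only to see which term dominates;
        // the real numbers come from whatever the two drivers report.
        const double   kSampleRate    = 44100.0;
        const unsigned inputLatency   = 32;   // made up
        const unsigned inputSafety    = 64;   // made up
        const unsigned ioBufferFrames = 512;  // a common HAL IO buffer size
        const unsigned outputSafety   = 64;   // made up
        const unsigned outputLatency  = 32;   // made up

        const unsigned totalFrames = inputLatency + inputSafety
                                   + 2 * ioBufferFrames
                                   + outputSafety + outputLatency;  // 1216 frames
        printf("%u frames = %.1f ms\n", totalFrames,
               1000.0 * totalFrames / kSampleRate);                 // ~27.6 ms
        return 0;
    }

With values in that range, the 2 * IO buffer term is most of the total, which is what prompted the question above.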
Can you give me more detail on how Core Audio calculates the current engine location from the takeTimeStamp() call, the latency, and the safety offset?

Also, for the input latency specified via setInputSampleLatency(), does that mean the hardware latency through the converter, before the DMA transfer?
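For reference, here is my current (possibly wrong) mental model of that calculation, written out as a sketch; all of the names in it are made up:

    // Conceptual sketch only, not real HAL code: my understanding is that the
    // HAL keeps the (sampleTime, hostTime) pair recorded by each takeTimeStamp()
    // call and linearly extrapolates the engine position from it, then applies
    // the latency and safety offset terms when scheduling reads and writes.
    struct TimeStampAnchor {
        double sampleTime;       // sample frame at the last ring-buffer wrap
        double hostTimeSeconds;  // host clock time of that wrap, in seconds
    };

    double CurrentEngineSampleTime(const TimeStampAnchor &anchor,
                                   double nowSeconds, double sampleRate)
    {
        // Straight-line extrapolation from the last timestamp.
        return anchor.sampleTime
             + (nowSeconds - anchor.hostTimeSeconds) * sampleRate;
    }

Is that roughly the right picture?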
thanks
Eric
-----Original Message-----
From: Jeff Moore [mailto:email@hidden]
Sent: Thursday, January 29, 2004 8:37 PM
To: CoreAudio API
Subject: Re: Latency Issue
I don't know how the electrical connections in your hardware are set up, but in most devices the hardware play-through is a direct pass-through connection from the input to the output that happens before the converters, which yields essentially no latency.

In software play-through, the latency (in sample frames) is determined by the following formula (presuming the play-through is happening on the same device in the same IO proc):

Input Latency + Input Safety Offset + (2 * IO Buffer Size) + Output Safety Offset + Output Latency

Even if every factor other than the IO buffer size were 0 (which is technically impossible), the total would still be a lot larger than 0. Even if your card has some latency along its play-through path, the software path is still probably larger.
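If you want to check the actual numbers, you can add the terms up from user space with the HAL device properties. A rough sketch (error handling omitted; it assumes the same-device case described above, and streams may publish additional latency of their own):

    #include <CoreAudio/AudioHardware.h>

    // Fetches a frame-count property (latency, safety offset, or buffer size)
    // on the master channel for the given direction; returns 0 on error.
    static UInt32 GetFrames(AudioDeviceID device, Boolean isInput,
                            AudioDevicePropertyID property)
    {
        UInt32 frames = 0;
        UInt32 size = sizeof(frames);
        AudioDeviceGetProperty(device, 0, isInput, property, &size, &frames);
        return frames;
    }

    static UInt32 EstimatePlayThroughLatency(AudioDeviceID device)
    {
        UInt32 bufferFrames = GetFrames(device, true, kAudioDevicePropertyBufferFrameSize);

        return GetFrames(device, true,  kAudioDevicePropertyLatency)       // input latency
             + GetFrames(device, true,  kAudioDevicePropertySafetyOffset)  // input safety offset
             + 2 * bufferFrames                                            // two IO buffers
             + GetFrames(device, false, kAudioDevicePropertySafetyOffset)  // output safety offset
             + GetFrames(device, false, kAudioDevicePropertyLatency);      // output latency
    }

For a two-device play-through like yours, you would query the input-side terms on your device and the output-side terms on the built-in output device.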
On Jan 29, 2004, at 4:08 PM, email@hidden wrote:
> Hi all,
>
> I have a question about the latency issue. Here is the situation:
> I am developing an audio input driver for a PCI device. Due to a hardware
> limitation, the DMA transfers to the audio stream buffer 4K at a time, and
> the currentSampleFrame is updated accordingly. The ring buffer is 16K, so
> there are 4 DMA transfers before reaching the end of the ring buffer. When
> the last DMA chunk finishes, takeTimeStamp() is called from a hardware
> interrupt callback.
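> Roughly, the interrupt path looks like this (the class and handler names
> here are simplified stand-ins rather than my actual code; takeTimeStamp()
> is the real IOAudioEngine call):
>
>     #include <IOKit/audio/IOAudioEngine.h>
>
>     class MyAudioEngine : public IOAudioEngine {
>         // usual IOAudioEngine boilerplate and overrides omitted
>         UInt32 currentChunk;                       // which 4K chunk the DMA just filled
>         static const UInt32 kChunksPerBuffer = 4;  // 16K ring buffer / 4K per DMA
>     public:
>         void handleDMACompleteInterrupt();         // called from the interrupt handler
>     };
>
>     void MyAudioEngine::handleDMACompleteInterrupt()
>     {
>         currentChunk = (currentChunk + 1) % kChunksPerBuffer;
>         if (currentChunk == 0) {
>             // The DMA just wrapped back to the start of the sample buffer,
>             // so this is where the timestamp is taken.
>             takeTimeStamp();
>         }
>     }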
> When I do a software play-through test to play back the captured audio
> data, I found that there is significant latency in the audio signal compared
> with the hardware play-through output from my input device. I have looked at
> one of the archived threads mentioned here,
> http://lists.apple.com/archives/coreaudio-api/2003/Jun/17/fismixable.017.txt,
> but it didn't seem to solve the problem, and changing the setSampleLatency()
> value didn't seem to help. So what's wrong with my setup, and which area
> should I look into to fix it?
>
> I also found that if I do software play-through using the Sequence Grabber
> from QT, the playback has a much larger latency. Is there something I missed
> in the setup? How can I correct this problem?
--
Jeff Moore
Core Audio
Apple
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.