Re: Trying to port my app to Core Audio. Audio Sync


  • Subject: Re: Trying to port my app to Core Audio. Audio Sync
  • From: Jeff Moore <email@hidden>
  • Date: Mon, 2 Feb 2004 19:24:41 -0800

One knows when the data is going to hit the wire from the time stamps the HAL provides plus the latency figure. The time stamps say when the HAL/driver/DMA are done with the data, and the latency figure is a constant added on top to account for the latency in the hardware after that point. For lots of devices, the latency is in their DACs; typically, the higher the quality, the more latency.
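
For illustration, here is a minimal sketch (not from the original message) of reading that latency figure with the HAL property API of the day; kAudioDevicePropertyLatency reports the constant in frames, and error handling is elided. Later sketches below assume this same header and helper:

    #include <CoreAudio/CoreAudio.h>

    /* Read the constant hardware latency figure, in frames, for the
       output side of a device. Returns 0 if the query fails. */
    static UInt32 GetOutputLatencyFrames(AudioDeviceID device)
    {
        UInt32 latencyFrames = 0;
        UInt32 size = sizeof(latencyFrames);
        OSStatus err = AudioDeviceGetProperty(device,
                                              0,      /* master channel */
                                              false,  /* isInput: no, output */
                                              kAudioDevicePropertyLatency,
                                              &size, &latencyFrames);
        return (err == noErr) ? latencyFrames : 0;
    }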

The HAL provides routines for accessing the device's time base (i.e., AudioDeviceGetCurrentTime and AudioDeviceTranslateTime). The HAL also provides time stamps to IOProcs indicating where each particular buffer falls in the device's timeline.
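
A hedged sketch of those two calls; the flags on the output stamp tell the HAL which time representations to fill in:

    /* Get the device's current time, then translate a sample time 512
       frames in the future into a host time on the device's timeline. */
    static void ExampleTranslate(AudioDeviceID device)
    {
        AudioTimeStamp now = { 0 };
        AudioDeviceGetCurrentTime(device, &now);

        AudioTimeStamp in = { 0 }, out = { 0 };
        in.mSampleTime = now.mSampleTime + 512.0;
        in.mFlags  = kAudioTimeStampSampleTimeValid;
        out.mFlags = kAudioTimeStampHostTimeValid;   /* ask for host time back */
        AudioDeviceTranslateTime(device, &in, &out);
        /* out.mHostTime: host time at which that frame occurs on the device */
    }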

So, to get an estimate of when a given sample frame is going to hit the wire, were it to be placed in the driver's buffer immediately, you'd call AudioDeviceGetCurrentTime to get the current time and then add the hardware latency to that.
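
Putting the two together, a sketch of that estimate, reusing the hypothetical GetOutputLatencyFrames helper above:

    /* Estimated device sample time at which a frame queued "immediately"
       would hit the wire: current time plus the hardware latency. */
    static Float64 EstimateWireTime(AudioDeviceID device)
    {
        AudioTimeStamp now = { 0 };
        AudioDeviceGetCurrentTime(device, &now);
        return now.mSampleTime + (Float64)GetOutputLatencyFrames(device);
    }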

That said, it isn't possible to queue a sample frame immediately. You will be doing that through an IOProc of some kind (your own or via the HAL Output AU). The time stamps in the IOProc establish the basis for when the sample frames are delivered to the hardware (that is, the opportunities are limited but come at regular intervals), so you will have to account for its progress as well as its flight characteristics. The amount of time from when your IOProc is called to when the data will hit the wire is the output time stamp plus the hardware latency. In relative terms, this works out to the length of the IO buffer plus the safety offset plus the hardware latency.
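
In code, a minimal IOProc sketch along those lines; the latency value is assumed to have been cached and passed in as the client data, and the IO buffer length and safety offset are already folded into the output time stamp the HAL hands you:

    /* Sketch of an IOProc that computes when its buffer will hit the wire. */
    static OSStatus MyIOProc(AudioDeviceID inDevice,
                             const AudioTimeStamp *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp *inInputTime,
                             AudioBufferList *outOutputData,
                             const AudioTimeStamp *inOutputTime,
                             void *inClientData)
    {
        UInt32  latencyFrames  = *(UInt32 *)inClientData;
        Float64 wireSampleTime = inOutputTime->mSampleTime
                               + (Float64)latencyFrames;
        /* ... fill outOutputData; wireSampleTime says when its first
           frame reaches the wire ... */
        return noErr;
    }

In the API of the day you would register this with AudioDeviceAddIOProc(device, MyIOProc, &cachedLatencyFrames) and start the device with AudioDeviceStart(device, MyIOProc).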

Finally, the buffer size the HAL uses is up to the application. You can use kAudioDevicePropertyBufferFrameSizeRange to get the bounds. The default buffer size is 512 frames (~11.6 milliseconds at 44100 Hz). You change it using kAudioDevicePropertyBufferFrameSize.
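
A sketch of that (function and variable names are ours), clamping a request of ~10 ms at 44100 Hz to the device's allowed range:

    static void SetSmallBuffer(AudioDeviceID device)
    {
        /* Query the allowed range of buffer sizes, in frames. */
        AudioValueRange range = { 0, 0 };
        UInt32 size = sizeof(range);
        AudioDeviceGetProperty(device, 0, false,
                               kAudioDevicePropertyBufferFrameSizeRange,
                               &size, &range);

        /* Request ~10 ms at 44100 Hz (441 frames), clamped to the range. */
        UInt32 frames = 441;
        if (frames < (UInt32)range.mMinimum) frames = (UInt32)range.mMinimum;
        if (frames > (UInt32)range.mMaximum) frames = (UInt32)range.mMaximum;

        size = sizeof(frames);
        AudioDeviceSetProperty(device, NULL /* apply now */, 0, false,
                               kAudioDevicePropertyBufferFrameSize,
                               size, &frames);
    }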

On Feb 2, 2004, at 6:42 PM, Ralph Hill wrote:

Thanks, James. I will look further into the audio output unit; it may be easier to use than the HAL.

Based on a conversation with someone else, it occurs to me that I did not make something clear.

Latency between when sound data is presented to the audio software and when it is heard is not an issue for me. The Play method I am trying to write queues audio to be played behind other audio that has been queued by earlier calls. I don't care (much) how long it takes to get to the output jacks, but the Play method must be able to return an estimate of when the sound just queued will first reach the output jacks. I failed to make the queuing aspect of my Play method clear, and I think you may have assumed that each call to Play() is to play the associated sound as soon as possible, possibly overlapped with a previously delivered sound.
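
For concreteness, a hypothetical sketch of the estimate such a Play() would return, following the accounting in the reply above; every name here is made up:

    /* Hypothetical: device sample time at which the sound just queued
       will first reach the output jacks. */
    static Float64 EstimatedStartTime(Float64 deviceNowSampleTime,
                                      UInt32  framesAlreadyQueued,
                                      UInt32  ioBufferFrames,
                                      UInt32  safetyOffsetFrames,
                                      UInt32  hardwareLatencyFrames)
    {
        return deviceNowSampleTime
             + (Float64)framesAlreadyQueued
             + (Float64)ioBufferFrames
             + (Float64)safetyOffsetFrames
             + (Float64)hardwareLatencyFrames;
    }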

So, to make things work, I still need to know how to determine when my audio will reach the output jacks, and whether I can force the HAL to use buffers of ~10 msec (or less), or whether it does so by default. So far I have been unable to determine either from the documentation or the .h files. Could you shed any light on how to find answers? For the latter, I can always write a test program, although that seems a rather time-consuming way to find out something that should be in the documentation. Measuring the former experimentally would be a lot of work. I am hoping that you already have the necessary information from the audio device manufacturer and from the device driver.

ralph

On Feb 2, 2004, at 6:01 PM, James McCartney wrote:


On Feb 2, 2004, at 5:36 PM, Ralph Hill wrote:

I have no strong reason not to use the output audio unit, but I don't see any advantage to it either. It looks to me like I have the same three issues with the output audio unit that I have with the HAL:
1. I have to add a ring buffer to match a push-model application to a pull-model library (a sketch of such an adapter appears below).
2. I have to force the pull-model library to use small buffers (~10 msec).

10 milliseconds is not small.
Anyway, this doesn't make much sense: you can either have a push model or low latency; choose one or the other. Only the hardware knows where it is in the data, and the HAL has sophisticated clock algorithms for chasing it. If your software wants to push however much it wants whenever it wants, as you stated, then you will not do as well. Pushing is great for things like playback of pretimed or prerendered data, but if you need low latency then pushing is not the best model.

3. I have to have a way of estimating when the sound gets to the output jacks.

Is it easier to do these three things with the output audio unit than with the HAL? I have looked, and I don't see an easier way than with the HAL. Can you suggest a way to use the audio output unit to get what I need?


Yes, you still need a pull model.
The output unit automatically does a lot of other work tracking the hardware state for you, which you will have to do yourself if you use the HAL.
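
For reference, a minimal sketch (not from the thread) of the ring-buffer adapter mentioned in point 1 above: the app pushes frames in, the IOProc pulls them out. The indices are only safe for a single writer and single reader, and a production, real-time-safe version would use atomics:

    #define RING_FRAMES 8192   /* capacity; one frame is wasted to mark "full" */

    typedef struct {
        float  samples[RING_FRAMES];
        UInt32 writePos;   /* advanced only by the app (push) thread    */
        UInt32 readPos;    /* advanced only by the IOProc (pull) thread */
    } Ring;

    /* App thread: push up to n frames; returns how many were queued. */
    static UInt32 RingPush(Ring *r, const float *src, UInt32 n)
    {
        UInt32 i = 0;
        while (i < n && (r->writePos + 1) % RING_FRAMES != r->readPos) {
            r->samples[r->writePos] = src[i++];
            r->writePos = (r->writePos + 1) % RING_FRAMES;
        }
        return i;
    }

    /* IOProc thread: pull n frames, padding with silence on underrun. */
    static void RingPull(Ring *r, float *dst, UInt32 n)
    {
        UInt32 i = 0;
        while (i < n && r->readPos != r->writePos) {
            dst[i++] = r->samples[r->readPos];
            r->readPos = (r->readPos + 1) % RING_FRAMES;
        }
        while (i < n)
            dst[i++] = 0.0f;   /* underrun: silence */
    }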

-
james mccartney
apple coreaudio



--

Jeff Moore
Core Audio
Apple

