Re: Use of AudioQueue buffers in case of sound synthesis


  • Subject: Re: Use of AudioQueue buffers in case of sound synthesis
  • From: Brian Willoughby <email@hidden>
  • Date: Sun, 31 Jan 2010 15:50:14 -0800


On Jan 31, 2010, at 06:21, email@hidden wrote:
But in the case of sound synthesis, sound generation is fully delegated to a
callback render function with no file access, so there's no slow peripheral
(HD or whatever) to take into account: in that scenario, is there any risk of
hearing clicks or any other sort of audible problem if my queue has a single
buffer instead of three?
There are a couple of ways to use CoreAudio. In one approach, you could have just the AU output unit, and your application could provide the audio buffers just in time. In the queue approach, you have to send the audio buffers in advance, because the callbacks are telling you that the buffer is finished, not that it is needed right away.

The former approach requires time-sensitive code running at fixed time-constraint priority. The queue approach is much easier because you don't have to worry about precise timing so much.


The reason I'm asking is that I also need to reduce latency (the time
between key press and sound start), so I'm concerned that a three-buffer
approach may add some sort of delayed start. Please forgive me for the not
deeply technical nature of my questions; I am starting to prototype something
and just want to decide on the right approach beforehand.

I forget whether you're coding for iPhone or full OSX. I also forget whether iPhone offers all of the options described above that CoreAudio offers. But if you want the lowest latency, then you might want to try coding without the queue, if that's not too advanced for your experience level.


Brian Willoughby
Sound Consulting

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


