
Use of AudioQueue buffers in case of sound synthesis


  • Subject: Use of AudioQueue buffers in case of sound synthesis
  • From: "cparodi.ugemi" <email@hidden>
  • Date: Sat, 30 Jan 2010 15:35:36 +0100

Following advice from Jeff, I started prototyping a simple callback-based sound generation test program built on AudioQueue services, so I can avoid dealing with the lower-level APIs (I only need the default output device and I don't need AUs).

One problem I am facing concerns the three-buffer approach: do I really need to allocate three queue buffers and invoke my render callback three times in a loop? That seems prohibitively expensive from a performance point of view (I was already struggling with performance with a single callback).

This would triple the processor workload per synth voice (think of a polyphonic synth...).

Any way to avoid that?

Regards,
Charles
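
[Editor's note: for context, the sketch below shows the kind of setup the post describes: an AudioQueue output for the default device, primed with the usual three buffers and refilled by a single render callback. The sine-wave generator, SynthState struct, buffer size, and run-loop timing are illustrative assumptions, not code from the original post.]

```c
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>
#include <math.h>

#define kNumberBuffers 3   // the conventional three-buffer scheme

typedef struct {
    double phase;
    double frequency;
    double sampleRate;
} SynthState;

// Render callback: fills one buffer with a sine wave and re-enqueues it.
// The queue calls this again each time a buffer has finished playing.
static void RenderCallback(void *inUserData, AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer)
{
    SynthState *state = (SynthState *)inUserData;
    Float32 *samples = (Float32 *)inBuffer->mAudioData;
    UInt32 frameCount = inBuffer->mAudioDataBytesCapacity / sizeof(Float32);
    double phaseIncrement = 2.0 * M_PI * state->frequency / state->sampleRate;

    for (UInt32 i = 0; i < frameCount; i++) {
        samples[i] = (Float32)(0.25 * sin(state->phase));
        state->phase += phaseIncrement;
        if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
    }
    inBuffer->mAudioDataByteSize = frameCount * sizeof(Float32);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

int main(void)
{
    SynthState state = { .phase = 0.0, .frequency = 440.0, .sampleRate = 44100.0 };

    // Mono 32-bit float linear PCM.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = state.sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = sizeof(Float32);
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = sizeof(Float32);

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, RenderCallback, &state, NULL, NULL, 0, &queue);

    // Allocate and prime the three buffers: each is filled once up front,
    // then refilled by the callback as the queue drains it. This priming
    // loop is the "invoke the callback three times" step from the post.
    for (int i = 0; i < kNumberBuffers; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 512 * sizeof(Float32), &buffer);
        RenderCallback(&state, queue, buffer);   // fill and enqueue
    }

    AudioQueueStart(queue, NULL);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, false);  // play for ~10 s
    AudioQueueStop(queue, true);
    AudioQueueDispose(queue, true);
    return 0;
}
```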



