

Newbie questions: How to implement an AU rendering CPU-expensive N->M effects


  • Subject: Newbie questions: How to implement an AU rendering CPU-expensive N->M effects
  • From: Thorsten Karrer <email@hidden>
  • Date: Wed, 06 Oct 2004 23:00:49 +0200

Hello,

  I am completely new to AU implementation, so perhaps someone can give
  me a little hint on how to get started:

  I want to develop a "real-time" time-stretching AU, or at least one as
  close to real-time as possible. The DSP itself (a modified phase
  vocoder) is already running fine (but slowly) in MATLAB.
  I have read almost every bit of documentation Apple provides on
  implementing AUs and dug through the SampleAU example code, but I'm
  already at a loss...

  The first thing is that I need my AU to pull a varying number of
  samples depending on the current stretching factor. As far as I can
  tell, I can't do that within the effect kernel of the provided
  SampleAU class. I played around with the PullInput() method, but I
  don't know if that would be the best way to solve the problem.
  If someone could get me the source code of Apple's Varispeed AU or
  anything similar (they do parameter-dependent pulling, I think),
  it would help me very much!
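  [A possible approach, sketched without the AU SDK: since a time
  stretch of factor s produces s output frames per input frame, the
  render call can compute how many input frames to pull for the
  requested output count and carry the fractional remainder across
  calls so no input is dropped. The struct and names below are
  hypothetical illustration, not part of any Apple API.]

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: given the number of output frames the host
// requests and the current stretch factor (output length / input
// length), compute how many input frames to pull this render cycle.
// The fractional remainder is carried between calls so that, over
// time, exactly the right amount of input is consumed.
struct InputPullCounter {
    double remainder = 0.0;  // fractional input frames carried over

    uint32_t framesToPull(uint32_t outFrames, double stretch) {
        double needed = outFrames / stretch + remainder;
        uint32_t whole = static_cast<uint32_t>(needed);
        remainder = needed - whole;  // save the fraction for next call
        return whole;
    }
};
```

  [The returned count would then be passed to whatever input-pulling
  call the AU uses, e.g. PullInput(), instead of the fixed
  output-frame count the stock effect classes assume.]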

  The second thing is that I do not understand the threading of the whole
  rendering. Is there a preferred way to do my AU's DSP in a separate
  thread? The phase vocoder includes some fairly complex operations on
  the short-time spectrum of the input signal (plus an FFT and iFFT
  of length >= 2048), so I suspect implementing this directly inside
  the AU's effect kernel won't work.
  So, if anyone has some example code snippets of a working AU that in
  some way uses the STFT of the input signal while still running
  more or less in real time, I would be really thankful.

  Any help or comments on this are appreciated! (Except "RTFM" - which
  I already did more than once...)

  Thanks very much for your help,

Thorsten

--
                          mailto:email@hidden


 _______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden

  • Follow-Ups:
    • Re: Newbie questions: How to implement an AU rendering CPU-expensive N->M effects
      • From: William Stewart <email@hidden>