
Re: AUSampleDelay questions


  • Subject: Re: AUSampleDelay questions
  • From: William Stewart <email@hidden>
  • Date: Fri, 7 Apr 2006 11:28:35 -0700


On 07/04/2006, at 9:04 AM, Craig Hopson wrote:

In a post back on Dec 19, Bill Stewart mentioned a new-for-Tiger AU called AUSampleDelay.

1. I can't find any discussion about it specifically. I did load it into the AudioUnitHosting app. It does what the name says. I'm wondering how it works. Can I assume that it runs a ring buffer (or something) with the input side on one thread, and the Render (pull) on the CA thread? Does it return zeros in the Render buffer until delay time has passed? Can I then do my processing behind it on the backend thread? Is 1/10 sec the largest delay possible or is that only what the UI limits it to?

The maximum delay time determines how much buffering is required, so we limit the delay to that size.


It doesn't do threading; it just inserts a simple delay between input and output. If you want threading between input and output, then the 'aufc' (format converter) type AUDeferredRender is what you are after.
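Conceptually, a fixed sample delay like this can be built on a ring buffer that emits silence until the delay time has elapsed. The following is a minimal single-channel sketch of that idea, not the actual AUSampleDelay implementation (which is not public):

```cpp
#include <cstddef>
#include <vector>

// Illustrative fixed-delay line, not AUSampleDelay's real source.
// The ring buffer starts zeroed, so the output is silence until
// `delaySamples` samples have been pushed through.
class SampleDelay {
public:
    explicit SampleDelay(std::size_t delaySamples)
        : buffer(delaySamples, 0.0f), writeIndex(0) {}

    // Output the sample from `delaySamples` ago, then store the new input.
    float process(float in) {
        float out = buffer[writeIndex];   // oldest sample (0 until buffer fills)
        buffer[writeIndex] = in;          // overwrite with the newest sample
        writeIndex = (writeIndex + 1) % buffer.size();
        return out;
    }

private:
    std::vector<float> buffer;  // ring buffer sized to the delay
    std::size_t writeIndex;
};
```

With a two-sample delay, the first two outputs are zero and the input then reappears two samples late.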


2. What makes this Tiger only?

We only add new features to new OS releases; we're not generally in the practice of updating older OS releases.


Does it rely on other Tiger-only parts of CoreAudio, or is it a portable component? That is, would it be possible (and legal) to bundle it with our app

Not easily possible, and entirely not legal. Please read the license agreement that comes with your Mac - the one that you have to click through when you install.

which will run on Panther? We would load it for our app, hook it into our AUGraph, and then delete it when the app quits.

If you need the functionality, the AU is not terribly difficult to write - the SampleEffect unit in the SDK could be used as a good starting point.
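For a sense of what such an effect would look like, the SDK's effect classes call a per-channel processing routine over a block of non-interleaved samples. Here is a hedged, self-contained sketch in that style - the struct and method names are illustrative, not the SDK's actual AUKernelBase API:

```cpp
#include <cstddef>
#include <vector>

// Illustrative block-processing delay kernel, loosely in the style of the
// CoreAudio SDK's effect kernels. Names here are hypothetical stand-ins.
struct DelayKernel {
    std::vector<float> ring;     // ring buffer sized to the delay in samples
    std::size_t writeIndex = 0;

    explicit DelayKernel(std::size_t delaySamples) : ring(delaySamples, 0.0f) {}

    // Process one channel's block: each output frame is the input from
    // `ring.size()` frames earlier; silence until the buffer has filled.
    void Process(const float* inSourceP, float* inDestP,
                 std::size_t inFramesToProcess) {
        for (std::size_t i = 0; i < inFramesToProcess; ++i) {
            inDestP[i] = ring[writeIndex];    // delayed sample
            ring[writeIndex] = inSourceP[i];  // store the new sample
            writeIndex = (writeIndex + 1) % ring.size();
        }
    }
};
```

A real AU built on the SDK would wrap a kernel like this per channel and let the base class handle buffer negotiation and parameter plumbing.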


Bill


Thanks.
Craig Hopson
Red Rock Software



_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden

--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________




References:
  • AUSampleDelay questions (From: Craig Hopson <email@hidden>)
