

Re: multi-channel AUs


  • Subject: Re: multi-channel AUs
  • From: Bill Stewart <email@hidden>
  • Date: Sat, 29 Mar 2003 14:06:54 -0800

Jeff,

On Saturday, March 29, 2003, at 08:13 AM, Jeff DuMonthier wrote:
> I guess no one important makes spatializers, ambisonics, or channel-mixing reverbs. This kind of design will certainly increase the demand for third-party SDKs and abstraction layers.

Umm... well, if you aren't going to count Apple: we do some of this in the AU3DMixer unit, and we are able to do it with non-interleaved buffers...

Bill

> Maybe I am getting the wrong idea, since I am just starting to delve into the examples and documentation, but doesn't expanding the set of allowed data formats (as compared to something like VST) and expecting all hosts and AUs to fend for themselves seem like a bad plan? A real problem with VST is that the SDK and scant documentation leave a lot of things unclear or unspecified. Practices evolve by consensus, and testing with lots of plug-ins and hosts is always required.

> One of the major functions of something like the AU SDK, IMHO, should be to provide a clean interface for negotiating formats between hosts and units, along with optimized code to automatically do any necessary conversion (e.g. interleaving/deinterleaving). Yes, that is extra overhead, and efficiency would be best served if everyone simply used the same format, but it would allow almost all hosts and units to work together without requiring every developer to include code to handle exceptions to the norm. It would also provide optimized conversion for those who might want interleaved multi-channel data, which is a very efficient format for vector processing using AltiVec (and whatever it will be called on the 970).

It does - and I think you have some misunderstanding of what the base classes do for you in this area.

> Yes, that is extra overhead, and efficiency would be best served if everyone simply used the same format.

Exactly, which is why we decided to have a canonical format, and there is, I think, an entirely reasonable expectation that AUs should support it.
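[Editor's note: the canonical AU format is 32-bit native-endian float, non-interleaved. A self-contained sketch of what that amounts to follows; `StreamDesc` is a local stand-in for CoreAudio's AudioStreamBasicDescription (declared in <CoreAudio/CoreAudioTypes.h>) so the example compiles without the headers, and `MakeCanonical` is a hypothetical helper.]

```cpp
#include <cstdint>

// Local stand-in mirroring the fields of AudioStreamBasicDescription.
struct StreamDesc {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
};

// Describe the AU canonical format: Float32, native-endian, packed,
// non-interleaved. With non-interleaved data each AudioBuffer carries
// one channel, so the per-frame/per-packet byte counts describe a
// single channel's samples, regardless of the channel count.
StreamDesc MakeCanonical(double sampleRate, uint32_t channels)
{
    StreamDesc d = {};
    d.mSampleRate       = sampleRate;
    d.mFormatID         = 0x6C70636D; // 'lpcm' (kAudioFormatLinearPCM)
    d.mFormatFlags      = 0;          // real code ORs the float/packed/
                                      // non-interleaved flag constants
    d.mBitsPerChannel   = 32;         // Float32 samples
    d.mChannelsPerFrame = channels;
    d.mBytesPerFrame    = sizeof(float); // one channel per buffer
    d.mFramesPerPacket  = 1;             // linear PCM: 1 frame per packet
    d.mBytesPerPacket   = d.mBytesPerFrame * d.mFramesPerPacket;
    return d;
}
```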

The vote was so loud and clear that deinterleaved was preferred that we ended up revving the AU API to accommodate it.

The AudioConverter will do very efficient interleaving and deinterleaving, and for your case it might indeed be more efficient to interleave the buffer, do your processing on that, then deinterleave it on output.

If you look through the AUEffectBase render logic, there should be an easy way to plug this in. We might consider adding an alternative base class to the SDK, providing support for interleaved processing, if there is enough interest from other developers.
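[Editor's note: the interleave-process-deinterleave round trip Bill describes can be sketched standalone, under the assumption of a simple copy loop in place of the AudioConverter. `RenderViaInterleaved` and `ProcessInterleaved` are hypothetical names; a real implementation would sit in an AUEffectBase subclass's render path.]

```cpp
#include <cstddef>
#include <vector>

// Hypothetical interleaved DSP kernel: an algorithm that wants frames
// laid out L0 R0 L1 R1 ... (e.g. for vector processing). The gain here
// is a trivial stand-in for real multi-channel processing.
void ProcessInterleaved(float* data, size_t nSamples)
{
    for (size_t i = 0; i < nSamples; ++i)
        data[i] *= 0.5f;
}

// Wrapper in the spirit of a render override: gather the deinterleaved
// per-channel buffers into one interleaved scratch buffer, run the
// kernel, then scatter the results back in place.
void RenderViaInterleaved(float* const* channels, size_t nChans, size_t nFrames)
{
    std::vector<float> scratch(nChans * nFrames);
    for (size_t f = 0; f < nFrames; ++f)
        for (size_t c = 0; c < nChans; ++c)
            scratch[f * nChans + c] = channels[c][f];

    ProcessInterleaved(scratch.data(), scratch.size());

    for (size_t f = 0; f < nFrames; ++f)
        for (size_t c = 0; c < nChans; ++c)
            channels[c][f] = scratch[f * nChans + c];
}
```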

Bill
-- mailto:email@hidden
tel: +1 408 974 4056

__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.
References:
  • Re: multi-channel AUs (From: Jeff DuMonthier <email@hidden>)
