
Re: Channels and frames


  • Subject: Re: Channels and frames
  • From: "john smith" <email@hidden>
  • Date: Thu, 27 Oct 2005 16:23:28 +0200


I do just want to throw in here that I would encourage you to consider whether you really need to limit your effect to these configurations.


Thanks for your consideration. What you need to know is that what I'm creating is a wrapper for a generic format.
The way we're doing it, the plug-in decides the number of channels (which can then, for multi-channel plug-ins, e.g. synths, be set in the interface).
This is to support all platforms. Some of them (Cubase, for instance, as I recall) tend to show all outputs, even if they are not used.


As for effect plug-ins I want to limit them to the configurations I mentioned, i.e. 1->1, 1->2 and 2->2 and maybe in some cases 2->1. This is for performance reasons. A dynamic channel count decreases performance too much IMHO.
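For reference, here is roughly how I would declare such a restriction in an Audio Unit. MyEffect is a hypothetical class name; the SupportedNumChannels override and the AUChannelInfo struct come from the AU SDK as I understand it, and the rest is purely illustrative:

// Sketch only: MyEffect is a hypothetical plug-in class derived from the
// AU SDK's AUEffectBase (class declaration omitted). AUChannelInfo comes
// from AudioUnitProperties.h.
UInt32 MyEffect::SupportedNumChannels(const AUChannelInfo** outInfo)
{
    // Declaring an explicit list here is what restricts the host to the
    // 1->1, 1->2, 2->2 and 2->1 configurations mentioned above.
    static const AUChannelInfo kConfigs[] = {
        { 1, 1 },   // mono in   -> mono out
        { 1, 2 },   // mono in   -> stereo out
        { 2, 2 },   // stereo in -> stereo out
        { 2, 1 },   // stereo in -> mono out
    };
    if (outInfo != NULL)
        *outInfo = kConfigs;
    return sizeof(kConfigs) / sizeof(kConfigs[0]);
}

A host asking for anything outside this list should then get a format error instead of an ambiguous channel count.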

Why on earth would it decrease performance? I mean, obviously processing 6 channels is most likely going to use more resources than processing 2. But I don't think that only allowing 2 somehow solves this problem.

Obviously not.

What decreases performance is not being able to optimise for the channel count expected, but having to write generic code which can handle any number of channels.
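To illustrate what I mean, here is a toy comparison (not from any real plug-in; the per-sample work is just a placeholder gain). The stereo version has a fixed channel count the compiler can unroll and keep in registers, while the generic version pays for an extra loop and an indirection per channel:

// Specialized: exactly two channels, known at compile time.
static void ProcessStereo(const float* inL, const float* inR,
                          float* outL, float* outR, unsigned frames)
{
    for (unsigned i = 0; i < frames; ++i) {
        outL[i] = inL[i] * 0.5f;   // placeholder per-sample work
        outR[i] = inR[i] * 0.5f;
    }
}

// Generic: channel count only known at run time, so there is an extra
// loop and a pointer indirection per channel, and less room for the
// optimizer.
static void ProcessN(const float* const* in, float* const* out,
                     unsigned channels, unsigned frames)
{
    for (unsigned ch = 0; ch < channels; ++ch)
        for (unsigned i = 0; i < frames; ++i)
            out[ch][i] = in[ch][i] * 0.5f;
}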

I'll admit, however, that the decrease is somewhat limited. My main concern is what I also mentioned: that it's "too generic". Generally speaking, I believe in specialization in these matters. Generic interfaces tend to confuse matters a lot, and in the end that results in unstable software.

But besides the performance thing, some questions arise which I guess could be called "semantic". For instance, say I have a reverb, and I get 3 channels: what do I do? With 2 channels the answer is obvious, and with 1 channel I *could* (even if I'd prefer not to) make it stereo and mix it down to the single channel.

What's not obvious is what you are talking about. What do you mean, "what do I do?"

A reverb presented with 2 output channels obviously should make a stereo mix. A reverb presented with, say, 3 output channels should... what?
That's what I mean. If I get 3 channels I wouldn't know what to do with them.


As I said, if you're dealing with no inter-channel dependencies, then you just process more channels the same as you processed the first couple.

So, what you're saying is that the 3rd channel, in my example above, is an extra mono channel independent from the first 2?
Or is the first channel independent from the other 2? Or?
And how would I know which of the 3 channels is independent? How could I even assume that 2 of them are a stereo pair, and not that they are 3 independent mono channels?


That is what I mean, and this is my main concern.
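The only way I can see to resolve that ambiguity is to look at a channel layout rather than the bare channel count. A rough sketch, assuming the host supplies an AudioChannelLayout (e.g. via kAudioUnitProperty_AudioChannelLayout, which it is not obliged to do); the ClassifyLayout helper and the treatment enum are made up for illustration:

#include <CoreAudio/CoreAudioTypes.h>   // AudioChannelLayout, layout tags

// Hypothetical helper: decide how to treat the channels the host hands us,
// based on an AudioChannelLayout it may (or may not) supply.
enum ChannelTreatment {
    kTreatAsStereoPair,
    kTreatAsIndependentMono,
    kTreatAsUnsupported
};

static ChannelTreatment ClassifyLayout(const AudioChannelLayout* layout,
                                       UInt32 numChannels)
{
    if (layout == NULL) {
        // No layout given: 2 channels is conventionally a stereo pair;
        // anything else is exactly the ambiguity described above.
        return (numChannels == 2) ? kTreatAsStereoPair : kTreatAsUnsupported;
    }
    switch (layout->mChannelLayoutTag) {
        case kAudioChannelLayoutTag_Stereo:
            return kTreatAsStereoPair;
        case kAudioChannelLayoutTag_Mono:
            return kTreatAsIndependentMono;
        default:
            // Anything more exotic (3 channels, surround, ...) is refused
            // rather than guessed at.
            return kTreatAsUnsupported;
    }
}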

If you do have inter-channel dependencies, then you decide if the design is scalable. If not (failing the first 2 possibilities), then you limit your channel configurations. There's nothing about reverb's basic nature that limits you to stereo, so I really don't understand your question.

Hopefully what I wrote above answers the question.

If you are talking about some specific reverb algorithm that does some special stereo field stuff and that is integral to its algorithm, then that would fall under the category of inter-channel dependencies that can't be scaled. So sure, there are cases like that, but more often than not, that's not a real limitation.

Sorry, but I will have to disagree. It seems to me that your approach can only be used for most filters and a few other effects.


Consider these effects (all of which have "trouble" with 3 outputs; see above):

Reverb: Has stereo outputs.
Chorus/Flanger: Might have a "width" control, which controls the stereo field, i.e. stereo output.
Phaser: Same as above.
Compressor: May use a single volume detector for the stereo channels, or may be a stereo compressor (converts l/r into m/s; a rough sketch of that split follows below).
Limiter: Same as the compressor.



Except for EQ, these are probably the most common effects in use.
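As a rough illustration of the compressor/limiter case in that list (purely a sketch; ComputeGain is a made-up stand-in for a real envelope detector):

#include <cmath>

// Made-up gain computer standing in for a real detector.
static float ComputeGain(float level)
{
    return (level > 0.5f) ? 0.5f / level : 1.0f;
}

// One frame of a stereo compressor that converts l/r to m/s, applies a
// single shared gain so the stereo image doesn't shift, and decodes back.
static void CompressStereoFrame(float& left, float& right)
{
    float mid  = 0.5f * (left + right);       // mono sum
    float side = 0.5f * (left - right);       // stereo difference

    const float gain = ComputeGain(std::fabs(mid));
    mid  *= gain;
    side *= gain;

    left  = mid + side;                       // back to l/r
    right = mid - side;
}

The whole point of the shared gain is to keep the stereo image stable, and I don't see an obvious 3-channel (or 6-channel) equivalent of that split.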



So, honestly, I feel that such a system is not really that useful. I don't automatically support surround streams just by adding extra channels.

Yes it does. Surround streams have more than 2 channels. If you can support any number of channels, then you can support processing any type of surround stream. I can't figure out why you might think that's not true.

I honestly know very little about surround. But if I get 6 streams, wouldn't that be 5.1, with the .1 being the sub? And shouldn't I *not* do reverb on that?


But anyway, let's not talk surround, because I know next to nothing about it, so I really cannot get into that. For instance, if I have a stereo compressor (splitting the left/right signals into mid/side signals), I wouldn't know whether I should do the same with the front speaker set and the back speaker set, or how they relate to each other in general.
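Just to show the kind of guess a plug-in would be forced to make: if the 6 channels happened to arrive in the common L R C LFE Ls Rs order (kAudioChannelLayoutTag_MPEG_5_1_A), one could leave the sub dry, but nothing about a bare count of 6 guarantees that ordering. The index and the processing below are made up for illustration:

// Illustrative only: assumes channel index 3 is the LFE, which a bare
// channel count of 6 does not actually guarantee.
static const unsigned kAssumedLFEIndex = 3;

// Process one frame (one sample per channel), leaving the assumed sub
// channel dry. The gain is a placeholder for real processing.
static void ProcessSurroundFrame(float* samples, unsigned numChannels)
{
    for (unsigned ch = 0; ch < numChannels; ++ch) {
        if (numChannels == 6 && ch == kAssumedLFEIndex)
            continue;                // don't put reverb on the sub
        samples[ch] *= 0.707f;       // placeholder wet processing
    }
}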


Greets,

Michael Olsen









  • Follow-Ups:
    • Re: Channels and frames
      • From: Brian Willoughby <email@hidden>
  • References:
    • Re: Channels and frames
      • From: Marc Poirier <email@hidden>
