Re: Multitimbral - philosophy
- Subject: Re: Multitimbral - philosophy
- From: Brian Willoughby <email@hidden>
- Date: Thu, 17 Jul 2003 23:00:34 -0700
[ I think publishing voice utilization is exactly the stuff that
[ would make things more complicated.
[
[ a) Why should a Music Device be limited to the concept of voices?
[ I.e. a setting where 3 parts play back samples, a fourth part
[ being a Vocoder or granular device that scrambles parts 1-3.
Voice utilization was merely an example of one way to control CPU utilization,
not something that I saw as a requirement for all Music Devices. As I thought
about this more, I realized that it could be very difficult for the host app
to manage CPU utilization unless each MD presented a very generic "quality"
parameter.
I think we have a situation where there is no single, clear solution. There
are a few options, but each has its disadvantages:
1) Each MD completely manages its own CPU utilization. The host can do
nothing. Users are then forced to learn each MD in detail and hand tweak the
settings for performance.
2) Each MD presents a very generic "quality" parameter. The host can do a
little with this. Users, and particularly developers, will be confused when
they see it, because it may not be immediately apparent what exactly will
change when the parameter is altered. As evidence, many developers on this
list have been confused by the way that Apple's units change reverb quality
based on CPU usage. It is a dilemma, but maybe this middle ground is the best
compromise (a rough sketch of what it might look like follows this list).
3) My rough, but incomplete, idea that there might be some way to present CPU
utilization to the host in a more detailed fashion such that the host could
manage balancing load between different Music Devices without requiring the
user to hand tweak each one and see how that affects everything running in
their setup. Even if this is a good idea, it's certainly easier said than
done. You've already found some serious problems.
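To make option 2 concrete: here is a minimal sketch of the host side of that
middle ground, assuming the MD honors the standard
kAudioUnitProperty_RenderQuality property (a UInt32 from 0 to 127, in the
global scope). Whether a particular MD actually trades CPU for quality
through this property is entirely up to its implementation, so treat this as
illustrative only.

#include <AudioUnit/AudioUnit.h>

/* Ask a Music Device to lower its render quality, and with luck its CPU
 * use. 0 is lowest quality, 127 is highest. A given MD is free to ignore
 * this property entirely. */
static OSStatus SetRenderQuality(AudioUnit musicDevice, UInt32 quality)
{
    return AudioUnitSetProperty(musicDevice,
                                kAudioUnitProperty_RenderQuality,
                                kAudioUnitScope_Global,
                                0,          /* element */
                                &quality,
                                sizeof(quality));
}

The point is that the host sees exactly one opaque knob per MD and nothing
else, which is both the strength and the weakness of option 2.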
I really just wanted to avoid jumping the gun. The idea was presented that
multi-timbral Music Devices could better manage CPU utilization because a
single unit would be aware of how to balance its many parts. That's a great
idea, but my first reaction was that this "feature" of a multi-timbral MD does
nothing to help users balance the interaction between different MDs. We need
something for that, too, and a good solution there may render moot the point
that multi-timbral units can better manage load balancing.
[ See, for simplicity's sake, the host should need to know as little
[ as possible about any MD's internal workings. IMHO this is the
[ point that provides for flexibility.
True. The object-oriented way is to hide all details. However, each MD uses
part of the CPU bandwidth, so it would be nice to manage that globally, too,
rather than by diving into each MD manually.
My rough draft was flawed: it focused too much on voice allocation as a CPU
load factor. If it is possible to describe load in more general terms, then a
host could use those generic parameters to balance load between multiple MDs
automatically, which would be a powerful feature. It seems useful to have a
pair of parameters: one which reports the current value of the "load" factor,
and another which sets a top limit so that the MD will not use too much CPU.
Any ideas on how to do this generically and centrally from the host, so that
it is more user-friendly than diving into each MD manually?
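To give that question a starting point, here is a rough sketch of what a
central balancer might look like. It assumes each MD reports its current
load through the read-only kAudioUnitProperty_CPULoad property (a Float32),
and it invents a writable limit property; the kCPULoadLimit_Hypothetical
name below does not exist in the API and is purely illustrative.

#include <AudioUnit/AudioUnit.h>

/* Hypothetical property ID -- NOT part of the AudioUnit API. */
enum { kCPULoadLimit_Hypothetical = 64000 };

/* Poll each Music Device's current load and cap the heaviest one so the
 * total stays under 'budget' (expressed as a fraction of one CPU). */
static void BalanceLoad(AudioUnit *units, int count, Float32 budget)
{
    Float32 total = 0.0f, heaviestLoad = 0.0f;
    int heaviest = -1;

    for (int i = 0; i < count; i++) {
        Float32 load = 0.0f;
        UInt32 size = sizeof(load);
        if (AudioUnitGetProperty(units[i], kAudioUnitProperty_CPULoad,
                                 kAudioUnitScope_Global, 0,
                                 &load, &size) == noErr) {
            total += load;
            if (load > heaviestLoad) { heaviestLoad = load; heaviest = i; }
        }
    }

    if (total > budget && heaviest >= 0) {
        /* Ask the heaviest unit to give back the overage. */
        Float32 limit = heaviestLoad - (total - budget);
        if (limit < 0.0f) limit = 0.0f;
        AudioUnitSetProperty(units[heaviest], kCPULoadLimit_Hypothetical,
                             kAudioUnitScope_Global, 0,
                             &limit, sizeof(limit));
    }
}

A real design would probably spread the reduction across several units
rather than squeezing only the heaviest one, but even this crude version
shows the kind of information the host would need from each MD.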
Brian Willoughby
Sound Consulting