Re: Changing Default Device and Sample Rate via Objective C
- Subject: Re: Changing Default Device and Sample Rate via Objective C
- From: "René J.V. Bertin" <email@hidden>
- Date: Mon, 01 Oct 2012 17:13:57 +0200
On Oct 01, 2012, at 16:35, email@hidden wrote:
>
> Thanks for the tip Rene. I will check out DefaultOutputUnit. What is your iTunes plugin called? I'd like to check it out. To clarify, I don't feel obliged to use Objective C, I would just rather not learn two new languages at once. However, in retrospect, Objective C is probably more difficult to learn than C++ anyway since I have a C# .NET background.
My plugin isn't (yet) publicly available; would you be interested in beta-testing it?
ObjC is definitely easier to learn than C++: it has a much more restricted (and readable) vocabulary, and it is much more an extension on top of C than C++ is. Rename a .cc file to .mm and you've got a start with ObjC++ (though without having used a single ObjC feature yet ^^).
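Just to illustrate (a trivial, made-up example - the only thing that makes it Objective-C++ is the .mm extension, which tells clang to compile it as such):

// mixed.mm - compiled as Objective-C++ purely because of the extension
#import <Foundation/Foundation.h>
#include <string>

int main(void)
{
    @autoreleasepool {
        std::string cpp("hello from C++");                            // plain C++
        NSString *objc = [NSString stringWithUTF8String:cpp.c_str()]; // plain ObjC
        NSLog(@"%@ and %@ in one file", objc, @"Objective-C");
    }
    return 0;
}

Build it with something like clang++ mixed.mm -framework Foundation and both languages coexist in one translation unit.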
I wasn't aware DefaultOutputUnit had been deprecated, but I'm not surprised. Some day soon we'll get that kind of message on printf, fork or ioctl :-/
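If I remember right, what's actually deprecated there is mostly the Component Manager plumbing (FindNextComponent and friends); the AudioComponent API that replaced it gets you the same default output unit. A minimal sketch, with the render callback and most error handling left out:

#include <AudioUnit/AudioUnit.h>

// Open and start the default output AudioUnit via the AudioComponent API
// (the non-deprecated route on 10.6 and later).
static AudioUnit OpenDefaultOutputUnit(void)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit outputUnit = NULL;
    if (comp == NULL || AudioComponentInstanceNew(comp, &outputUnit) != noErr)
        return NULL;

    // Normally you'd install a render callback here with
    // kAudioUnitProperty_SetRenderCallback before initialising.
    if (AudioUnitInitialize(outputUnit) == noErr &&
        AudioOutputUnitStart(outputUnit) == noErr)
        return outputUnit;

    AudioComponentInstanceDispose(outputUnit);
    return NULL;
}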
>
>> This issue crops up fairly regularly. The problem with any application changing the device sample rate is that other applications may also (concurrently) be trying to do so as well, an easy and obvious possibility of contention. Ostensibly, this task should be left expressly to the user to do, globally, externally, through whatever control panel is available. These days, it is arguably inappropriate for any developer to assume or expect that their application is the only one running.
>>
>> Richard Dobson
>
> I understand and respect what you are saying. However, in certain cases, this simply cannot be avoided. For audio purists, converting the sample rate for music/video is extremely undesirable. In my particular case, the software running should be assumed to be the only software running as it is an HTPC app. Encoded DTS and Dolby Digital audio cannot be passed to an external decoder without the hardware sample rate being set appropriately.
Couldn't agree more (with Mike, not with Richard, who seems to echo Apple on this), and I was going to post about that myself. Lesson nr 1: never underestimate your users or assume you know better than they do - that lesson is older and more general than worrying about concurrency.

Sure, not everyone will miss having an automatic way to optimise their sound output quality, but ask the users who don't simply listen over the built-in speakers or a pair of Apple earbuds ;) and I'm sure they won't reject such an option. Even if your whole music collection is at 44.1kHz, online radios are often at 48kHz - why accept listening to those at inferior quality, or have to go tweak things by hand? So you give them a choice.

Of course, if two apps want to send different sound waves to the output device a conflict can occur. But what's the problem with that? Can anyone hope to listen to more than one high-quality sound output at a time? Anyone knowledgeable enough to mix sounds from different apps, live, is undoubtedly savvy enough to set up a processing chain or use AppleJack. All other apps and use cases are already conceived to work with whatever output sample rate the user configured, so if that rate changes behind their back there's no issue at all - at most a short silence while the hardware reconfigures.
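To make that concrete, here's roughly the kind of thing I mean - a minimal sketch using the HAL's plain C API (callable from ObjC as-is). Error handling is trimmed, and in real code you'd first query kAudioDevicePropertyAvailableNominalSampleRates to confirm the hardware actually supports the rate you're about to ask for:

#include <CoreAudio/CoreAudio.h>

// Look up the current default output device, then switch its nominal
// (hardware) sample rate - e.g. to 48kHz to match a 48kHz stream.
static OSStatus SetDefaultOutputSampleRate(Float64 rate)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                              0, NULL, &size, &device);
    if (err != noErr) return err;

    // Same address structure, different selector: the device's nominal rate.
    addr.mSelector = kAudioDevicePropertyNominalSampleRate;
    return AudioObjectSetPropertyData(device, &addr,
                                      0, NULL, sizeof(rate), &rate);
}

(Changing which device is the default works the same way, just in the other direction: AudioObjectSetPropertyData on kAudioObjectSystemObject with kAudioHardwarePropertyDefaultOutputDevice and the AudioDeviceID you want.)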
But sure, it's a bit problematic that OS X doesn't provide mechanisms like those on MS Windows, where WASAPI lets apps 'lock' an output device (forcing other apps either to use a different output channel or to be muted).
René