I can’t come right out with a direct answer to your question, but perhaps I can give you some clues.
I presume the “Output” unit you use for input is the AudioDeviceOutput, aka AHAL, aka kAudioUnitSubType_HALOutput, although I expect the others are about the
same.
There’s a parameter named “volume” on the AHAL unit. (Bear in mind that parameters are not properties.) It ranges from 0.0 to 1.0 with a default value of
1.0.
I figure the “volume” parameter probably does, in fact, control the volume – but it’s anybody’s guess whether it controls the output volume, the input volume,
or both. Personally, I’d put my money on the output volume because it’s an “output” unit. As far as I can find, there is no other volume control (property or parameter) on the AHAL unit.
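If you want to try it, here is a sketch (untested; it assumes the constant in question is kHALOutputParam_Volume from AudioUnitParameters.h and that outputUnit is your already-initialized output unit instance):

#include <AudioUnit/AudioUnit.h>
#include <stdio.h>

// Read the current value of the volume parameter, then attenuate to half scale.
// Whether this touches input, output, or both is exactly the open question above.
static void PokeVolume(AudioUnit outputUnit)
{
    AudioUnitParameterValue volume = 0;
    OSStatus err = AudioUnitGetParameter(outputUnit, kHALOutputParam_Volume,
                                         kAudioUnitScope_Global, 0, &volume);
    if (err == noErr)
        printf("current volume parameter: %f\n", volume);

    err = AudioUnitSetParameter(outputUnit, kHALOutputParam_Volume,
                                kAudioUnitScope_Global, 0,
                                0.5f,   // documented range is 0.0 .. 1.0
                                0);     // buffer offset in frames
}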
In case you want to see what parameters are available on other AudioUnits, the only way to find out is to write a program that
· instantiates the unit in question,
· queries its kAudioUnitProperty_ParameterList, and
· queries kAudioUnitProperty_ParameterInfo to learn the range of values and default.
To get the ParameterInfo of a parameter p, you substitute p in place of the element/bus number in a call to AudioUnitGetProperty, thus:
err = AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterInfo, scope, p, &info, &datasize);
This little API “stretch” would be reasonable, in my opinion, if Apple had bothered to document it. It works because the type AudioUnitParameterID just happens
to end up being the same as the type AudioUnitElement.
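Putting those steps together, a sketch looks like this (error checking omitted for brevity; unit is assumed to be already opened with AudioComponentInstanceNew):

#include <AudioUnit/AudioUnit.h>
#include <stdio.h>
#include <stdlib.h>

// List every parameter the unit exposes in a given scope, plus its range and default.
static void DumpParameters(AudioUnit unit, AudioUnitScope scope)
{
    UInt32 size = 0;
    AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_ParameterList,
                             scope, 0, &size, NULL);

    UInt32 count = size / sizeof(AudioUnitParameterID);
    AudioUnitParameterID *ids = (AudioUnitParameterID *)malloc(size);
    AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterList,
                         scope, 0, ids, &size);

    for (UInt32 i = 0; i < count; i++) {
        AudioUnitParameterInfo info;
        UInt32 infoSize = sizeof(info);
        // The "stretch": the parameter ID goes where the element number normally goes.
        AudioUnitGetProperty(unit, kAudioUnitProperty_ParameterInfo,
                             scope, ids[i], &info, &infoSize);
        printf("param %u: %.52s  min=%f max=%f default=%f\n",
               (unsigned)ids[i], info.name,
               info.minValue, info.maxValue, info.defaultValue);
    }
    free(ids);
}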
You can also go through the lower level audio hardware API (HAL) that’s found in AudioHardware.h. This is the API where you can find out what audio devices are plugged into your system, which are the default devices, etc. This API works with “objects” instead of “units” but is very similar in style to the audio unit API. The differences I’ve found are that (a) you don’t instantiate anything (apparently these objects are always present so long as the corresponding hardware is present), and (b) there seem to be both an inheritance hierarchy and a containment hierarchy.
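For what it’s worth, the entry point looks like this, asking the system object for the default input device (a sketch; the names are from AudioHardware.h, but treat the code as untested):

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>

// Ask the HAL's root object which device is the current default input.
static AudioObjectID DefaultInputDevice(void)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    OSStatus err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                              0, NULL, &size, &device);
    if (err != noErr)
        fprintf(stderr, "couldn't get default input device (%d)\n", (int)err);
    return device;
}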
Anyway, there are some AudioObject properties like kAudioDevicePropertyVolumeScalar, kAudioDevicePropertyVolumeDecibels, and kAudioDevicePropertySubVolumeScalar. I have a USB headset plugged into my Mac right now. According to the introspection part of this API, the VolumeScalar and VolumeDecibels controls exist and are writable on the input scope of elements 1 and 2 (but not 0). The SubVolume controls are not available. (The comments explain that the SubVolume property controls the “LFE volume” and the Volume property controls the plain, ordinary “volume”.)
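The probing I described amounts to something like this sketch (the elements 0 through 2 just mirror what my headset reports; other devices may differ, and error checks are omitted):

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>

// For each element in the device's input scope, report whether the
// VolumeScalar control exists and is settable, and optionally set it.
static void ProbeInputVolume(AudioObjectID device, Float32 newVolume)
{
    for (UInt32 element = 0; element <= 2; element++) {
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyVolumeScalar,
            kAudioDevicePropertyScopeInput,
            element
        };
        if (!AudioObjectHasProperty(device, &addr)) {
            printf("element %u: no VolumeScalar control\n", (unsigned)element);
            continue;
        }
        Boolean settable = false;
        AudioObjectIsPropertySettable(device, &addr, &settable);
        printf("element %u: VolumeScalar present, %s\n",
               (unsigned)element, settable ? "settable" : "read-only");

        if (settable)
            AudioObjectSetPropertyData(device, &addr, 0, NULL,
                                       sizeof(newVolume), &newVolume);
    }
}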
The comments mention an “AudioControl object that is a subclass of AudioVolumeControl” but I have not investigated the potential implications of that.
I’m pretty sure both the AHAL volume parameter and the device’s volume properties (if implemented in the device driver) are only going to allow you to attenuate the signal – that is, their maximum value (which is also the default) corresponds to unity gain, so there is no headroom above it. If you’re getting a low signal and you actually want to give it software amplification, I’m pretty sure you’d have to experiment to find out which, if any, of the Apple-supplied units will do that. I’ve seen no documentation at that level of detail and, believe me, I’ve looked.
Steven J. Clark
VGo Communications
From: coreaudio-api-bounces+steven.clark=email@hidden
On Behalf Of Dave Fernandes
Sent: Saturday, May 31, 2014 11:00 AM
To: Allison Newman
Cc: CoreAudio API
Subject: Re: Controlling gain on microphone
At the least, some Apogee and Samson USB microphones and audio interfaces have hardware gain that can be set through Core Audio. This is done by setting a property on the AudioDevice.
On May 31, 2014, at 9:02 AM, Allison Newman <email@hidden> wrote:
Yup, my bad. It would perhaps have been better to say that I was trying to amplify the input received from the microphone, not to actually increase the gain on the microphone itself. But that leads me to an interesting question - I’m no expert when it comes to handling audio, but I kind of expected most microphones that you would connect to a Mac not to have a gain control on them, but rather to simply spit out a signal optimized for the hardware. Am I wrong in this? At any rate, no matter how the gain on the microphone signal is handled, the technique needs to be applicable to all microphones that might be attached to the Mac, which is why I figured a software solution would be better...
On Sat, May 31, 2014 at 7:51 AM, Allison Newman <email@hidden> wrote:
>
> My application currently connects the microphone to the speakers using AudioUnits, so my Mac acts as an amplifier, and it works just fine. I would like to be able to control the gain of the microphone using a slider, and was wondering if anyone had a suggestion for how to do it. Ideally, I’m looking for a solution that introduces as little latency as possible between the microphone and the speakers.
>
> I was hoping to find an AudioUnit that I could insert into the AudioGraph to do this, but at least my initial analysis seems to indicate that this is not the case.
There is some confusion here. The gain of the microphone is a hardware property. AudioUnits are generally blobs of code that carry out signal processing. If you mean you want to add gain to the signal, then an AudioUnit can do that. But it would be quite surprising to use an AudioUnit to control the hardware gain on the microphone - this is generally controlled from System Preferences and/or using APIs quite separate from AudioUnits.
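For example (purely a sketch, and assuming the usual non-interleaved Float32 stream format), the software-gain part can be as simple as scaling the samples wherever your code already touches the buffers, e.g. after pulling input in a render callback:

#include <AudioUnit/AudioUnit.h>

// Multiply every sample in the buffer list by a gain factor.
// Values greater than 1.0 amplify; watch out for clipping.
static void ApplyGain(AudioBufferList *bufferList, UInt32 frames, Float32 gain)
{
    for (UInt32 b = 0; b < bufferList->mNumberBuffers; b++) {
        Float32 *samples = (Float32 *)bufferList->mBuffers[b].mData;
        for (UInt32 f = 0; f < frames; f++)
            samples[f] *= gain;
    }
}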