RE: Strange Echo Control behaviour with VoiceProcessingIO on iOS
- Subject: RE: Strange Echo Control behaviour with VoiceProcessingIO on iOS
- From: Steven Clark <email@hidden>
- Date: Thu, 24 Jul 2014 16:04:49 +0000
- Thread-topic: Strange Echo Control behaviour with VoiceProcessingIO on iOS
I have worked some with the VPIO unit on iOS and OS X, but I'm by no means an expert.
There are two switches you can toggle on the VPIO unit: one turns automatic gain control (AGC) on and off, and the other turns "voice processing" on and off. I suspect Apple calls it "voice processing" because it's echo cancellation plus some other stuff. In some notes I wrote about a year ago, I mention that you can't toggle AGC while the VPIO unit is initialized, but you can toggle VP. I don't recall how I came to that conclusion.
I suggest you try playing with these switches. I would expect AGC and VP to affect only input, but the only way to find out for sure is to perform some experiments. (To work with Apple's audio units, you must become an "experimental computer scientist" in the same sense as an "experimental physicist".)
Example code (MicBus is the input element of the I/O unit, i.e. bus 1):

    OSStatus status;
    UInt32 uValue;

    uValue = 0; // turn off AGC
    status = AudioUnitSetProperty(m_audioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  MicBus, &uValue, sizeof(uValue));
    CHECK_ERROR(status, "disable AGC");

    uValue = 0; // turn off the VP bypass, i.e. turn on voice processing
    status = AudioUnitSetProperty(m_audioUnit,
                                  kAUVoiceIOProperty_BypassVoiceProcessing,
                                  kAudioUnitScope_Global,
                                  MicBus, &uValue, sizeof(uValue));
    CHECK_ERROR(status, "set bypass voice processing");
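If you do go experimenting, it's worth reading the switches back to confirm what state the unit actually ended up in. A minimal sketch, assuming the same m_audioUnit, MicBus, and CHECK_ERROR as above:

    UInt32 agcOn = 0, bypassOn = 0;
    UInt32 size = sizeof(UInt32);

    status = AudioUnitGetProperty(m_audioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  MicBus, &agcOn, &size);
    CHECK_ERROR(status, "read AGC state");

    size = sizeof(UInt32);
    status = AudioUnitGetProperty(m_audioUnit,
                                  kAUVoiceIOProperty_BypassVoiceProcessing,
                                  kAudioUnitScope_Global,
                                  MicBus, &bypassOn, &size);
    CHECK_ERROR(status, "read VP bypass state");

    printf("AGC: %u, VP bypass: %u\n", (unsigned)agcOn, (unsigned)bypassOn);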
By the way, the shortcut in your original app - doing both the input and render operations in the render callback - works on iOS but not on OS X. Also, I'm pretty sure there's no point in setting up two AUs on iOS, whereas on OS X you must use two I/O units except in special circumstances. One of the simplifications in iOS seems to be that, at a low level, all audio devices are in sync, whereas on OS X they are generally not operating off the same clock. I think.
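For the record, the shortcut I understand you to be using looks roughly like this. A sketch only: gDuplex, DuplexPush, and DuplexPop stand in for your duplex instance and its queue functions, and I'm assuming a 16-bit mono stream format on the unit.

    #include <AudioUnit/AudioUnit.h>

    static OSStatus RenderCB(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
    {
        AudioUnit au = (AudioUnit)inRefCon; // the I/O unit itself

        // Pull this slice's mic samples from the input element (bus 1).
        // With mData == NULL the AU supplies its own buffer.
        AudioBufferList micList;
        micList.mNumberBuffers = 1;
        micList.mBuffers[0].mNumberChannels = 1;
        micList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
        micList.mBuffers[0].mData = NULL;
        OSStatus status = AudioUnitRender(au, ioActionFlags, inTimeStamp,
                                          1, inNumberFrames, &micList);
        if (status == noErr)
            DuplexPush(gDuplex, &micList, inNumberFrames); // queue mic audio for sending

        // Fill ioData with audio received over the network for playout (bus 0).
        DuplexPop(gDuplex, ioData, inNumberFrames);
        return status;
    }

As noted, that single-callback pattern is only safe on iOS, where input and output run off the same clock.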
Hope this helps,
Steven J. Clark
VGo Communications
-----Original Message-----
From: coreaudio-api-bounces+steven.clark=email@hidden [mailto:coreaudio-api-bounces+steven.clark=email@hidden] On Behalf Of Fred Melbow
Sent: Wednesday, July 23, 2014 7:32 PM
To: email@hidden
Subject: Strange Echo Control behaviour with VoiceProcessingIO on iOS
Hi all,
I’m currently working on a VoIP app, with a class that handles sending/receiving, jitter buffering, etc., and is already fairly well tried and tested (let’s call it my duplex instance). I send an audio stream over a network to my iPhone. My iPhone app contains one AU with a single render callback on the input scope of the output bus (element 0). Within that callback I grab mic samples and push them into my duplex instance for sending, and I also pop the audio samples received over the network from the duplex instance and copy them into my ioData buffer so they are played out on the iPhone sound card. All well and good so far.
Now if I send a music stream to the iPhone and listen through headphones, it sounds perfect with kAudioUnitSubType_RemoteIO, but if I switch to kAudioUnitSubType_VoiceProcessingIO then the inbound music stream sounds very poor: the low bass frequencies are filtered out, the attack is completely stripped away (to the extent that kick drums no longer have any kick at all; they sound more like air being sucked), and there is often complete ducking of the signal in low-energy parts (e.g. soft strings). For me this is completely unexpected, since the Echo Control / Cancellation should only process the microphone samples to remove anything resembling what was played out of the speaker; the speaker samples themselves should remain completely unaltered.
Since I thought maybe the reason was that my ioData buffer was being processed AFTER my render callback, I tried splitting the procedure into two callbacks on one AU (one for the mic using kAudioOutputUnitProperty_SetInputCallback, and one for the speaker using kAudioUnitProperty_SetRenderCallback), as sketched below. This made no difference to the strange audio artefacts with VoiceIO enabled. I then tried splitting it further into two AUs, setting up a “speaker AU” with RemoteIO and a “mic AU” with VoiceProcessingIO, but this had the effect of making my speaker samples very quiet, which I’ve read others have experienced too. I’d be very grateful indeed if someone could clarify whether this speaker-sample processing is intended behaviour, or point me in the right direction to correctly configure the AU(s) so that only my mic samples are EC-processed.
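For reference, the two-callback configuration I tried looked roughly like this (a sketch; micInputCB, speakerRenderCB, myContext, and audioUnit stand in for my actual callbacks, context, and unit):

    OSStatus status;
    AURenderCallbackStruct cb;

    // Mic side: called when captured samples are available on the input element.
    cb.inputProc = micInputCB;
    cb.inputProcRefCon = myContext;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  1 /* input element */, &cb, sizeof(cb));

    // Speaker side: called when the unit needs samples to play on the output element.
    cb.inputProc = speakerRenderCB;
    cb.inputProcRefCon = myContext;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0 /* output element */, &cb, sizeof(cb));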
Many thanks,
Fred
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden