Strange Echo Control behaviour with VoiceProcessingIO on iOS
- Subject: Strange Echo Control behaviour with VoiceProcessingIO on iOS
- From: Fred Melbow <email@hidden>
- Date: Thu, 24 Jul 2014 01:31:52 +0200
Hi all,
I’m currently working on a VoIP app, with a class that handles sending/receiving, jitter buffering, etc., and that is already fairly well tried and tested (let’s call it my duplex instance). I send an audio stream over a network to my iPhone. My iPhone app contains a single AU with one render callback on the input scope of the output bus (element 0). Within that callback I grab mic samples and push them into my duplex instance for sending, and in the same callback I pop the audio samples the duplex instance received over the network and copy them into my ioData buffer so that they are played out on the iPhone soundcard. All well and good so far.
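To be concrete, the single full-duplex callback looks roughly like this (a sketch: duplex_push/duplex_pop are hypothetical stand-ins for my duplex instance’s send/receive methods, and gIOUnit is the I/O unit):

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioUnit gIOUnit;            // the RemoteIO / VoiceProcessingIO unit
void duplex_push(void *samples, UInt32 frames);  // hypothetical: queue mic audio for sending
void duplex_pop(void *samples, UInt32 frames);   // hypothetical: fetch received network audio

static OSStatus DuplexRenderCallback(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                      inBusNumber,
                                     UInt32                      inNumberFrames,
                                     AudioBufferList            *ioData)
{
    // 1. Pull the microphone samples from the input element (bus 1)
    //    into ioData.
    OSStatus err = AudioUnitRender(gIOUnit, ioActionFlags, inTimeStamp,
                                   1 /* input bus */, inNumberFrames, ioData);
    if (err != noErr) return err;

    // 2. Hand the mic samples to the duplex instance for sending...
    duplex_push(ioData->mBuffers[0].mData, inNumberFrames);

    // 3. ...then overwrite ioData with samples received over the network,
    //    which the output element (bus 0) plays out on the soundcard.
    duplex_pop(ioData->mBuffers[0].mData, inNumberFrames);
    return noErr;
}
```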
Now if I send a music stream to the iPhone and listen through headphones, it sounds perfect with kAudioUnitSubType_RemoteIO, but if I switch to kAudioUnitSubType_VoiceProcessingIO then my inbound stream of music sounds very poor: the low bass frequencies are filtered out, the attack is completely stripped away (to the extent that kicks no longer have any kick at all; they sound more like sucking air), and the signal is often ducked completely in low-energy parts (e.g. soft strings). To me this is completely unexpected, since Echo Control / Cancellation should only process the microphone samples, removing anything resembling what was played out of the speaker; the speaker samples themselves should remain completely unaltered.
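For clarity, the component subtype is the only setup difference between the two cases; everything else (streams, callbacks, buffer sizes) stays identical. Roughly:

```c
#include <AudioToolbox/AudioToolbox.h>

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
// Swapping this one field is the only change between the two tests:
desc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO; // vs kAudioUnitSubType_RemoteIO
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit ioUnit;
AudioComponentInstanceNew(comp, &ioUnit);

// Input (bus 1) must be enabled explicitly for mic capture;
// output (bus 0) is on by default for output units.
UInt32 one = 1;
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));
```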
Since I thought the reason might be that my ioData buffer was being processed AFTER my render callback, I tried splitting the procedure into two render callbacks (one for the mic with kAudioOutputUnitProperty_SetInputCallback, and one for the speaker using kAudioUnitProperty_SetRenderCallback). This made no difference to the strange audio artefacts with VoiceIO enabled. I then went further and split it into two AUs, a “speaker AU” with RemoteIO and a “mic AU” with VoiceProcessingIO, but this had the effect of making my speaker samples very quiet, which I’ve read other people have experienced too. I’d be very grateful indeed if someone could clarify whether this speaker-sample processing is intended behaviour, or point me in the right direction to configure the AU(s) so that only my mic samples are EC-processed.
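The two-callback variant on a single unit was configured like this (a sketch: MicInputCallback and SpeakerRenderCallback are hypothetical names for my two callback functions):

```c
#include <AudioToolbox/AudioToolbox.h>

void ConfigureTwoCallbacks(AudioUnit ioUnit)
{
    // Mic callback: fires when input samples are available on bus 1;
    // the callback calls AudioUnitRender itself to fetch them.
    AURenderCallbackStruct inputCB = { MicInputCallback, NULL };
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &inputCB, sizeof(inputCB));

    // Speaker callback: feeds the input scope of the output element (bus 0),
    // filling ioData with samples popped from the duplex instance.
    AURenderCallbackStruct renderCB = { SpeakerRenderCallback, NULL };
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &renderCB, sizeof(renderCB));
}
```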
Many thanks,
Fred
_______________________________________________
Coreaudio-api mailing list (email@hidden)