Re: iChat's echo cancellation (was Re: Output Capture)
- Subject: Re: iChat's echo cancellation (was Re: Output Capture)
- From: Jeff Moore <email@hidden>
- Date: Wed, 18 Jul 2007 17:04:59 -0700
On Jul 18, 2007, at 4:21 PM, Andrew Kimpton wrote:
iChat works entirely with public APIs and their own echo
cancellation code. No private APIs are used.
You speak as if we have access to the iChat code, so it's not a
given at all that just because iChat can do something, so can we.
Why would you need access to the iChat code? Echo Cancellation is a
major research area in signal processing. It is written about in
numerous text books and papers every year. I'm sure you can dig up
something useful with even a casual Google search.
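For anyone starting that search: the textbook workhorse is the normalized LMS (NLMS) adaptive filter, which models the echo path from the far-end signal to the microphone and subtracts the estimate. The sketch below is a generic, minimal illustration of that standard technique in Python/NumPy; it is not iChat's code, and the function name and parameters are my own invention.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, filter_len=128, mu=0.5, eps=1e-6):
    """Remove the far-end echo from the mic signal with an NLMS
    adaptive filter (a textbook method, not iChat's actual one)."""
    w = np.zeros(filter_len)           # adaptive echo-path estimate
    out = np.zeros(len(mic))           # echo-cancelled output
    for n in range(len(mic)):
        # Most recent filter_len far-end samples, newest first,
        # zero-padded at startup.
        start = max(0, n - filter_len + 1)
        x = far_end[start:n + 1][::-1]
        x = np.pad(x, (0, filter_len - len(x)))
        y = w @ x                      # estimated echo at this sample
        e = mic[n] - y                 # error = signal minus echo
        w += (mu / (eps + x @ x)) * e * x   # NLMS tap update
        out[n] = e
    return out
```

Note that this is only the core adaptive filter; a real acoustic echo canceller also needs double-talk detection, delay estimation, and nonlinear processing, which is where the research literature comes in.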
If it's the case that iChat only uses public APIs, how about some
of that code gets pasted into a new Sample Code project and put on
ADC? That would seem to make the most sense, since people like us
are repeatedly looking for a cold, hard solution. And if you
don't want to do that, could you help us understand why not? If
the reason is that it would be unsupported code, then we'd say
back that we understand and acknowledge that, and we assume the
consequences of that.
Since we don't provide an Echo Cancellation library, obviously
there won't be any sample code for that. But that aside, there is
all sorts of sample code for all the APIs that iChat uses out of
Core Audio.
Some folk here seem (from my reading of the messages) to want access
to the audio emitted by the computer itself in order to eliminate it
from captured audio as part of an echo cancellation process?
Indeed.
They seem to be strengthening the argument by suggesting that iChat
does the same thing?
But does it?
Ah! The $64 question!
To be honest, I don't really know what algorithms iChat uses. Not
being all that strong in the signal processing department, I'd
probably not understand the math anyway =)
Perhaps iChat's audio enhancement and echo cancellation doesn't use
the computer's own current output at all, but relies on other
approaches? If I recall correctly, the iSight, for example, employs
two microphones with beam shaping to enhance the quality of the
sound on the basis that the person talking is in front of the
camera. Similar knowledge of the physical layout of two microphones
in a notebook (assuming true stereo input) could also be used to
achieve a similar effect.
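The simplest form of that two-microphone idea is a delay-and-sum beamformer: align the channels for an assumed talker direction, then average, so the on-axis talker adds coherently while off-axis sound and sensor noise partially cancel. The sketch below (Python/NumPy; function name and parameters are my own, not Apple's) illustrates the principle; `delay_samples` would be computed from the mic spacing, steering angle, and sample rate.

```python
import numpy as np

def delay_and_sum(left, right, delay_samples=0):
    """Two-microphone delay-and-sum beamformer (a generic sketch).

    delay_samples is the integer lead of the right mic for the
    steering direction; 0 steers broadside, i.e. a talker directly
    in front of the array.
    """
    aligned = np.roll(right, delay_samples)  # time-align the channels
    # The steered source adds in phase; uncorrelated noise averages
    # down, improving SNR without any reference to the output signal.
    return 0.5 * (left + aligned)
```

Note this needs no access to the device's output stream at all, which is one way an iChat-style pipeline could improve capture quality using only public input APIs.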
Indeed. I imagine that there are many algorithms in this domain that
don't require all of the output of the device to do their job. I'm
sure that iChat must be making use of them because I know they don't
have access to the output of the device.
--
Jeff Moore
Core Audio
Apple