Re: Sample code or resources for audio play through with echo cancellation?
- Subject: Re: Sample code or resources for audio play through with echo cancellation?
- From: Brian Willoughby <email@hidden>
- Date: Thu, 08 Dec 2011 11:59:42 -0800
On Dec 8, 2011, at 07:27, Zack Morris wrote:
On Dec 8, 2011, at 7:51 AM, Peter Sichel wrote:
I'm looking for the simplest way to add echo cancellation. In the
Bluetooth HFP, echo cancellation and noise reduction are optional.
The iPhones I've tested do not make them available via Bluetooth.
The VPIO audio unit in iOS is not provided on Mac OS X. iChat has
built-in echo cancellation, but there is no API or echo
cancellation library offered to 3rd party developers [AFAIK].
There's an open source implementation of echo cancellation (Speex)
which has been adopted by some Mac OS X products, but I haven't
found an existing plug-in I can incorporate easily.
There is no way to just get the currently playing audio, for
instance (even though that's obviously a trivial thing to do; my
guess is that maybe the media companies tell Apple not to allow
that, or something).
There is no reason why an application which is generating audio would
need to "get" that audio back. All you need to do is keep track of
the audio that you are generating before you hand it off to
CoreAudio. The only caveat is that when multiple applications are
generating audio, you do not have access to the mix, but I believe
that is something that Apple does not support anyway (even their own
iPod / Music app automatically mutes itself when a call comes in and
the microphone is active).
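To make "keep track of the audio that you are generating" concrete: copy each buffer into a ring of recent output frames just before handing it to CoreAudio, and let the input side read that reference signal back at the measured output-to-input delay. A minimal sketch follows; all names are hypothetical, it assumes mono float frames, and a production version would need lock-free synchronization between the render and input callbacks.

```c
#include <string.h>

#define RB_FRAMES 8192  /* capacity of the reference ring, in frames */

typedef struct {
    float data[RB_FRAMES];
    unsigned long head;  /* total frames ever written */
} RefRing;

/* Called from the output path, just before the buffer goes to CoreAudio. */
static void ref_ring_write(RefRing *rb, const float *frames, int n)
{
    for (int i = 0; i < n; i++)
        rb->data[(rb->head + i) % RB_FRAMES] = frames[i];
    rb->head += n;
}

/* Called from the input path: fetch the n reference frames written
 * `delay` frames ago (the measured output-to-input latency).
 * Returns 0 if not enough history has accumulated yet. */
static int ref_ring_read(const RefRing *rb, float *out, int n,
                         unsigned long delay)
{
    if (rb->head < delay + (unsigned long)n)
        return 0;
    unsigned long start = rb->head - delay - n;
    for (int i = 0; i < n; i++)
        out[i] = rb->data[(start + i) % RB_FRAMES];
    return 1;
}
```

The `delay` parameter is where the presentation latency discussed below comes in: the reference must be aligned with the moment the audio actually left the speaker, not the moment you generated it.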
In other words, you're complaining about the lack of an option that
you do not actually need for the situation that you describe.
All I need as a developer is access to the queue of buffers about
to be played by the hardware, and a callback of some kind to tell
me when they were just played. And this needs to be at the sample
level; don't make me mess around with timing.
This is how CoreAudio works. You open an output device and you get a
callback every time the system needs a new buffer from your
application. You have to "mess around with timing" because there is
no way that a multithreaded application will wake up when a specific
sample is played. Instead, you ask the CoreAudio system to tell you
the "presentation latency," and that will reflect the amount of time
between when you provide the buffers and when they are heard.
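As a sketch of that arithmetic: on the Mac, the quantities behind the presentation latency are reported in frames by HAL properties such as kAudioDevicePropertyLatency, kAudioStreamPropertyLatency, and kAudioDevicePropertySafetyOffset. The numeric values below are placeholders standing in for what those queries would return on a real device; given the sample-time stamp the HAL passes to your IOProc, the presentation time of any frame is just addition.

```c
/* Placeholder latency figures, standing in for the values the HAL
 * properties named above would report for a real output device. */
typedef struct {
    double sample_rate;       /* e.g. 44100.0 */
    unsigned device_latency;  /* frames, per kAudioDevicePropertyLatency */
    unsigned stream_latency;  /* frames, per kAudioStreamPropertyLatency */
    unsigned safety_offset;   /* frames, per kAudioDevicePropertySafetyOffset */
} OutputLatency;

/* Absolute sample time at which frame `i` of a buffer stamped with
 * `buffer_sample_time` actually reaches the speaker. */
static double presentation_sample_time(const OutputLatency *lat,
                                       double buffer_sample_time,
                                       unsigned i)
{
    return buffer_sample_time + i
         + lat->device_latency + lat->stream_latency + lat->safety_offset;
}

/* The same instant expressed in seconds. */
static double presentation_seconds(const OutputLatency *lat,
                                   double buffer_sample_time, unsigned i)
{
    return presentation_sample_time(lat, buffer_sample_time, i)
         / lat->sample_rate;
}
```

This is why no callback-at-the-exact-sample API is needed: the timestamps plus the queried latencies already pin every frame's presentation time to sample accuracy.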
Echo cancellation is a very trivial thing to do with the right
approach, and if Apple doesn't provide the access we need, then I'm
going to take that to mean that they are blocking developers
intentionally.
Apple provides the access you need, but not the access that you think
you want.
Brian Willoughby
Sound Consulting
Coreaudio-api mailing list (email@hidden)