Re: Reading and writing from/to sound devices without AU
- Subject: Re: Reading and writing from/to sound devices without AU
- From: Brian Willoughby <email@hidden>
- Date: Thu, 10 Mar 2011 01:20:21 -0800
On Mar 10, 2011, at 00:34, Rick Mann wrote:
There are numerous schemes for modulating digital data (say, a
short text message) onto an RF carrier. The data is usually very low
bandwidth, and fits in a typical audio spectrum. My code would
take in this modulated audio and demodulate it, recovering the
original text message. It may, optionally, also send the
undemodulated audio out to the speaker so it can be heard at the
same time (it would sound something like an old modem).
Sometimes what's encoded in the RF carrier is an audio signal you
want to hear directly (someone's voice, or Morse code, or
whatever). For this, Audio Units are probably the right way to go.
What you describe would be very similar to a metering AU that doesn't
do anything to the audio but pass it through. In other words, it's
not entirely unheard of for an AU to do nothing but create a visual
output.
That said, I don't think an AudioUnit would be the first choice for
your ASCII demodulation.
To summarize: CoreAudio offers several options for processing data
from an audio interface.
1) You can use the HAL or AUHAL to connect directly to the audio
interface, and then your application would hook into the data via
callbacks. In this scenario, your application handles 100% of the
audio processing. The HAL itself is a bit difficult to use directly,
but it involves no AudioUnits at all. However, the AUHAL is a
special AudioUnit that handles most of the messy details of the HAL
for you. If you use the AUHAL, then it might be the only AudioUnit
in your entire application. You basically open an "output" AUHAL
even though you're not going to produce any output; just be sure to
enable I/O on the input element (element 1) and probably disable the
output element if it won't be used (see the sketch below).
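
For concreteness, here's a minimal sketch of opening an input-only
AUHAL, assuming the default input device; MyInputProc is a
placeholder name and error checking is omitted:

#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

/* Called whenever the AUHAL has captured a buffer of input.
   Pull the samples with AudioUnitRender() into your own
   AudioBufferList, then demodulate them. */
static OSStatus MyInputProc(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber, UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    return noErr;  /* sketch only */
}

static AudioUnit OpenInputAUHAL(void)
{
    AudioComponentDescription desc = {
        kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit auhal = NULL;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc),
                              &auhal);

    /* EnableIO is set per element: element 1 is the input bus,
       element 0 is the output bus. */
    UInt32 on = 1, off = 0;
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
        kAudioUnitScope_Input, 1, &on, sizeof(on));
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
        kAudioUnitScope_Output, 0, &off, sizeof(off));

    /* Bind the AUHAL to the default input device. */
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster };
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
        0, NULL, &size, &device);
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_CurrentDevice,
        kAudioUnitScope_Global, 0, &device, sizeof(device));

    /* Your callback fires each time input is available. */
    AURenderCallbackStruct cb = { MyInputProc, NULL };
    AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_SetInputCallback,
        kAudioUnitScope_Global, 0, &cb, sizeof(cb));

    AudioUnitInitialize(auhal);
    AudioOutputUnitStart(auhal);
    return auhal;
}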
2) You can create an AUGraph that combines AUHAL with other
AudioUnits for processing all of your audio. You get to control the
signal flow, number of channels, mixing, combining, and everything
else. You can even handle some of the audio processing within your
application instead of needing an AudioUnit for every operation. I
have noticed that each node in the graph can only go to your app or
to an AU, not both, so you have to plan your design properly if you
want to mix callbacks with AUs (see the sketch below). I personally
think this might be your best option, since you can leverage
existing AudioUnits for common signal processing tasks without
needing to write your own low-pass filter. All you'd really need to
do is handle the specific demodulation. Using callbacks means you
might not even have to write a single AU of your own.
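
To make that concrete, here's a rough sketch of a graph that mixes
an application callback with a stock Apple AU (the low-pass filter
effect); MyRenderProc is a placeholder name, the render side is left
empty, and error checking is omitted:

#include <AudioToolbox/AudioToolbox.h>

/* Hypothetical callback: fill ioData with the samples you want the
   graph to process next (e.g. audio captured from the AUHAL). */
static OSStatus MyRenderProc(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber, UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    return noErr;  /* sketch only */
}

static AUGraph BuildGraph(void)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription outDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_DefaultOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription lpfDesc = {
        kAudioUnitType_Effect, kAudioUnitSubType_LowPassFilter,
        kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode outNode, lpfNode;
    AUGraphAddNode(graph, &outDesc, &outNode);
    AUGraphAddNode(graph, &lpfDesc, &lpfNode);
    AUGraphOpen(graph);

    /* The app callback feeds the filter, and the filter feeds the
       output unit -- each node input comes from either your app or
       another AU, never both. */
    AURenderCallbackStruct cb = { MyRenderProc, NULL };
    AUGraphSetNodeInputCallback(graph, lpfNode, 0, &cb);
    AUGraphConnectNodeInput(graph, lpfNode, 0, outNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}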
3) AudioQueue is the next-highest level. I don't use these much,
because they're less flexible. I cannot recall whether you have as
many options for callbacks, if any at all. In other words, this
might be an option for you, but you should double-check the
documentation to see whether everything you need is possible. I
think that if you wanted to store the SDR data for later processing,
then an AudioQueue in recording mode might be a good idea (a sketch
follows below). But if you simply want to decode in real time, then
AUGraph is probably better. You might be able to write an AudioQueue
codec to demodulate your radio signals, but that would truly be a
hack, since it would be producing non-audio output in some cases.
This is not an issue with AUGraph, since your application can easily
redirect the demodulated output without having to force it into a
real-time audio stream.
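
If you do try the AudioQueue recording route, the setup is fairly
small. A minimal sketch, assuming mono 32-bit float PCM and a
placeholder callback named MyInputCallback (error checking omitted):

#include <AudioToolbox/AudioToolbox.h>

/* Called each time the queue fills a buffer of captured audio.
   Store or demodulate inBuffer->mAudioData here, then hand the
   buffer back to the queue. */
static void MyInputCallback(void *inUserData, AudioQueueRef inAQ,
    AudioQueueBufferRef inBuffer,
    const AudioTimeStamp *inStartTime,
    UInt32 inNumPackets,
    const AudioStreamPacketDescription *inPacketDesc)
{
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static AudioQueueRef StartRecordingQueue(void)
{
    /* Mono 32-bit float PCM at 44.1 kHz; match your interface. */
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat
                          | kAudioFormatFlagIsPacked;
    fmt.mFramesPerPacket  = 1;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = 4;
    fmt.mBytesPerPacket   = 4;

    AudioQueueRef queue = NULL;
    AudioQueueNewInput(&fmt, MyInputCallback, NULL,
                       NULL, NULL, 0, &queue);

    /* Prime a few buffers so the queue always has one to fill. */
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, 4096, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }
    AudioQueueStart(queue, NULL);
    return queue;
}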
Brian Willoughby
Sound Consulting