Core Audio Question for VoIP application.
- Subject: Core Audio Question for VoIP application.
- From: John Draper <email@hidden>
- Date: Tue, 11 Jan 2005 15:29:39 -0800
Hi,
I could use some help and guidance on how to use the Core Audio APIs
for a VoIP application. The docs point to SOME example code
("ComplexPlayThru"), but it really doesn't give me any clue how to get
started. Or am I barking up the wrong tree, and need to use some other
kind of audio support?
I have very severe restrictions on my application. It HAS to use the
GSM 6.10 codec and be compatible with Windows, because I'm porting the
application from Windows to the Mac. There are millions of Windows
clients out there that use these APIs, and my port has to interoperate
with them.
Because of this compatibility restriction, I'm stuck with a codec
not supported by Apple. So I'm faced with choices, and seek advice on
which approach to take. Here are my options:
1) Add the GSM 6.10 codec to the list of codecs currently used by
QuickTime, by putting a wrapper around it. Unfortunately, I have NO
CLUE how to do this, and I don't think QuickTime can store the data in
the form it's stored on the Windows version. It HAS to be compatible,
because the requirement is that the Mac clients also talk to the
Windows clients.
2) Use the TU Berlin GSM codec, which I'm told is also compatible with
Windows, and do my OWN buffering (which I prefer, because I have more
control). But (sigh) I just don't know enough about Mac audio to store
sounds coming in from the mic in a form compatible with Windows. The
Mac's canonical format is floating-point PCM, while the Windows version
just stores 8-bit byte samples in a buffer. The intro for Core Audio
says converting is possible, but I just cannot find the function I need.
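For concreteness, the conversion I think I need is just a clip-and-scale from Core Audio's canonical 32-bit floats to integer linear samples (a sketch; I'm assuming here that the TU Berlin codec takes 16-bit linear PCM, 160 samples per 20 ms frame at 8 kHz, which is my reading of its interface):

```c
#include <stdint.h>
#include <stddef.h>

/* Convert Core Audio's canonical 32-bit float samples (range -1.0..1.0)
 * into 16-bit linear PCM suitable for a GSM 6.10 encoder's input.
 * Out-of-range samples are clipped before scaling. */
static void float_to_int16(const float *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float s = in[i];
        if (s > 1.0f)  s = 1.0f;       /* clip positive overrange */
        if (s < -1.0f) s = -1.0f;      /* clip negative overrange */
        out[i] = (int16_t)(s * 32767.0f);
    }
}
```

The same clip-and-scale shape would work for 8-bit output, just with a 127.0f scale and an `int8_t` buffer.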
Does anyone know whether the Mac's Core Audio can be used to perform
the following tasks?
Transmitting:
1) Specify a sample rate of 8 kHz (the normal telephony sample rate).
2) Digitize the data from a mic into a buffer of 8-bit bytes which I can
pass to the encoder.
3) GSM-encode the buffer and send it to the remote host as UDP packets.
Receiving:
1) Grab the packets coming in from the net (UDP).
2) GSM-decode them into a "play" buffer, and pass it to the Core Audio routines.
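The UDP side of those steps, at least, I can sketch with plain BSD sockets (the loopback address, ephemeral port, and 33-byte frame size here are placeholders; a real client would use the peer's address):

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

/* Create a UDP socket bound to 127.0.0.1 on an ephemeral port; the
 * chosen port is returned through *port so a sender can reach it. */
static int udp_open_bound(uint16_t *port)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = 0;                       /* let the kernel pick a port */
    if (bind(s, (struct sockaddr *)&a, sizeof a) < 0) { close(s); return -1; }
    socklen_t len = sizeof a;
    getsockname(s, (struct sockaddr *)&a, &len);
    *port = ntohs(a.sin_port);
    return s;
}

/* Send one encoded frame as a single datagram to 127.0.0.1:port. */
static ssize_t udp_send_frame(int s, uint16_t port,
                              const unsigned char *frame, size_t n)
{
    struct sockaddr_in to;
    memset(&to, 0, sizeof to);
    to.sin_family = AF_INET;
    to.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    to.sin_port = htons(port);
    return sendto(s, frame, n, 0, (struct sockaddr *)&to, sizeof to);
}
```

On the receive side, a plain `recvfrom()` on the bound socket hands back one encoded frame per datagram, ready for the GSM decoder.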
I would probably create two threads: one for transmission and one for
receiving. Then, when the capture buffer is full, send a "notification"
to some code that GSM-encodes the capture buffer into a smaller buffer
and sends it out the UDP port.
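The sizing arithmetic for that capture buffer would be something like this (a sketch; the 160-samples-in / 33-bytes-out frame constants are the GSM 06.10 full-rate numbers as I understand them, and the helper name is mine):

```c
#include <stddef.h>

/* GSM 6.10 full-rate framing at the telephony rate of 8 kHz:
 * each 20 ms frame is 160 samples in and 33 bytes out, so a capture
 * buffer should hold a whole number of frames before it is encoded
 * and shipped as a UDP payload. */
enum {
    GSM_SAMPLES_PER_FRAME = 160,   /* 20 ms at 8000 Hz */
    GSM_BYTES_PER_FRAME   = 33     /* one encoded full-rate frame */
};

/* Encoded payload size for a capture buffer of n_samples samples;
 * any partial trailing frame is left for the next buffer. */
static size_t gsm_payload_bytes(size_t n_samples)
{
    return (n_samples / GSM_SAMPLES_PER_FRAME) * GSM_BYTES_PER_FRAME;
}
```

So a 200 ms capture buffer (1600 samples) encodes down to a 330-byte UDP payload, which is the kind of "smaller buffer" I mean.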
I already spent a LOT of time examining the oPhoneX application
(available in source form) for Mac OS X, which uses the H.323 protocol,
but (sigh) it drives libraries built on the OpenH323 code by sending
"shell-like" commands. Certainly a heck of a lot more "fluff" than I
need. I also examined the OpenH323 code itself, and (sigh) the source
modules are named in such a stupidly cryptic way that I was unable to
see and understand the scheme they're using.
Although Apple already has a lot of code that can deal with file
formats like AIFF, MP3 and such, that is just a little too "high level"
for me. I cannot even figure out how to specify the sample rate, nor do
I know how to get it to store my data as 8-bit sound bytes instead of
the PCM encoding they use.
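From what I can tell, the sample rate and sample format are both specified by filling in an AudioStreamBasicDescription. Here is a sketch of the 8 kHz mono integer format I'm after; I've mirrored the struct locally (same field names as `<CoreAudio/CoreAudioTypes.h>`) so it stands alone, and the 'lpcm' format ID and flag values are my reading of the headers:

```c
#include <stdint.h>

/* On Mac OS X this struct comes from <CoreAudio/CoreAudioTypes.h>;
 * it is mirrored here, with the same field names, so the sketch
 * compiles anywhere. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mReserved;
} AudioStreamBasicDescription;

/* Describe 8 kHz, mono, 16-bit signed integer linear PCM - the input
 * a GSM 6.10 encoder expects. On a real system this description would
 * be handed to an AudioConverter or set as a unit's stream format. */
static AudioStreamBasicDescription telephony_format(void)
{
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 8000.0;     /* telephony rate */
    asbd.mFormatID         = 0x6C70636D; /* four-char code 'lpcm' */
    asbd.mFormatFlags      = 0x4 | 0x8;  /* signed integer | packed */
    asbd.mChannelsPerFrame = 1;          /* SINGLE channel */
    asbd.mBitsPerChannel   = 16;
    asbd.mBytesPerFrame    = 2;          /* one 16-bit mono sample */
    asbd.mFramesPerPacket  = 1;          /* PCM: one frame per packet */
    asbd.mBytesPerPacket   = 2;
    return asbd;
}
```

If this is roughly right, then "specifying the sample rate" is just the `mSampleRate` field, and everything else describes the sample layout.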
Is it possible to specify other kinds of encoding schemes? And what the
heck is a "graph" in the context of the Mac's audio APIs? They make
reference to a graph when talking about audio.
In the docs, they say the following (note my comments, please):
* A flexible audio format - great! But how do I use it?
* Multichannel audio I/O - don't need it.
* Support for both PCM and non-PCM formats - great, but what are the non-PCM formats?
* 32-bit floating point native-endian PCM as the canonical format - can't use it.
* Fully specifiable sample rates - great, just what I want, but how do I specify them?
* Multiple application usage of audio devices - great.
* Application-determined latency - again, how do I use this?
* Ubiquity of timing information
* Both C and Java APIs
OK, so they talk about these things in the docs, but where is the
example code that shows us how to use them? I already studied all of
the audio-related examples; none comes even close to what I want to do.
If you know of such documents, please send me exact URLs leading to
them. Apple's ADC web site is very extensive, and I often have a lot of
trouble finding references unless I have the exact link to the data.
I also want to specify the buffer sizes and sample rates, and use a
SINGLE channel.
Can someone please point me in the right direction?
John
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)