Re: Write IEEE float data to buffer
- Subject: Re: Write IEEE float data to buffer
- From: Hari Seldon <email@hidden>
- Date: Thu, 03 Aug 2006 14:45:07 -0400
Hi Hari,
In addition to what Jeff has said, it may be a good idea for you to
look into PortAudio (http://www.portaudio.org), a cross-platform
library for audio playback that is much easier to learn than the
whole CoreAudio business (I'm still struggling with it myself -
welcome aboard! ;-) )
Best regards
--tahome
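
For what it's worth, PortAudio's blocking-write calls are about the
closest match to the DirectSound push model: you write float samples
when you have them instead of waiting for a callback. A minimal,
untested sketch, assuming PortAudio V19 with the blocking I/O API:

#include <portaudio.h>

#define SAMPLE_RATE      44100
#define FRAMES_PER_CHUNK 1024
#define NUM_CHANNELS     2

/* Plays one chunk of interleaved IEEE float samples and tears the
   stream back down; a real app would keep the stream open. */
int play_chunk(const float *interleaved, unsigned long frames)
{
    PaStream *stream = NULL;

    if (Pa_Initialize() != paNoError) return -1;

    /* Passing NULL for the callback selects blocking read/write mode. */
    if (Pa_OpenDefaultStream(&stream, 0, NUM_CHANNELS, paFloat32,
                             SAMPLE_RATE, FRAMES_PER_CHUNK,
                             NULL, NULL) != paNoError) {
        Pa_Terminate();
        return -1;
    }

    Pa_StartStream(stream);
    Pa_WriteStream(stream, interleaved, frames);   /* blocks until consumed */
    Pa_StopStream(stream);

    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
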
2006/8/1, Jeff Moore <email@hidden>:
On Aug 1, 2006, at 10:49 AM, Hari Seldon wrote:
> I'm new to programming in Xcode, but am really interested in
> working on the Mac. I'm trying to write audio data to a buffer for
> playback. I normally work with a DirectSound model, where there is a
> manager that creates a sound buffer; I write IEEE float data to the
> buffer and DirectSound plays it. In that case it's up to me to
> allocate the buffer and ensure the correct lock/unlock model so
> that data is written correctly.
>
> I'm finding the documentation on CoreAudio a little harder to
> follow, but the main idea I'm seeing is that (from the
> DefaultOutputUnit example, and others):
> - I'd have to open the default audio unit and initialize it
> - create a mixer (to mix multiple buffers for playback)
> - get some sort of audio unit that represents a buffer
>
> The only problem I've encountered in the examples I've seen so
> far is that they all use a callback for writing data. Is it
> possible to change the model and only write new audio data when I
> request it, rather than being told when to write? I'm trying to
> match the DirectSound model as much as I can. I realize there are
> workarounds (carry an internal queue of data to write
> when the callback asks, etc.), but I'd really prefer to match the
> models as much as possible. Any suggestions?
Core Audio uses a "pull" model for moving data around. This is pretty
much the opposite of the strategy you are using with DirectSound. If
you want to emulate DS, you need to do that yourself.
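
For illustration, one common way to bridge the two models is to keep a
ring buffer between your code and the render callback: the app "pushes"
float data in whenever it has some, and the callback "pulls" whatever is
available when the hardware asks. A rough, untested sketch (RingBuffer
and rb_read are hypothetical placeholders, not part of Core Audio):

#include <AudioUnit/AudioUnit.h>
#include <string.h>

typedef struct {            /* hypothetical lock-free ring buffer */
    float *data;
    /* read/write indices, capacity, etc. */
} RingBuffer;

/* Assumed helper: copies up to nFloats samples out of the ring buffer
   and returns how many it actually copied. */
extern unsigned rb_read(RingBuffer *rb, float *dst, unsigned nFloats);

static OSStatus RenderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    RingBuffer *rb = (RingBuffer *)inRefCon;

    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        float    *out    = (float *)ioData->mBuffers[i].mData;
        unsigned  wanted = ioData->mBuffers[i].mDataByteSize / sizeof(float);
        unsigned  got    = rb_read(rb, out, wanted);

        /* Underrun: fill whatever we couldn't supply with silence. */
        memset(out + got, 0, (wanted - got) * sizeof(float));
    }
    return noErr;
}

The writing side then looks much like the DirectSound lock/write/unlock
cycle, except the "lock" is just a write into the ring buffer.
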
Toward that end, you might find the OpenAL API a bit more familiar.
Otherwise, you need to handle the mixing and scheduling yourself. You
can use an instance of one of our mixer AUs to handle the mixing and
several instances of AUScheduledSoundPlayer to schedule multiple
streams of data for playback, or you can, of course, do the mixing
and scheduling entirely in your own code.
However you do it, you should definitely use AUHAL (aka the default
output unit) to interface with the hardware.
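
A minimal sketch of opening that default output unit and wiring in a
render callback like the one above, loosely following the
DefaultOutputUnit sample (Component Manager calls as they were at the
time; error handling trimmed):

#include <CoreServices/CoreServices.h>
#include <AudioUnit/AudioUnit.h>

OSStatus start_output(RingBuffer *rb, AudioUnit *outUnit)
{
    /* Find and open the default output unit. */
    ComponentDescription desc = { 0 };
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    Component comp = FindNextComponent(NULL, &desc);
    if (comp == NULL) return -1;
    OpenAComponent(comp, outUnit);

    /* Tell the unit to pull its input from our callback. */
    AURenderCallbackStruct input;
    input.inputProc       = RenderCallback;
    input.inputProcRefCon = rb;
    AudioUnitSetProperty(*outUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &input, sizeof(input));

    AudioUnitInitialize(*outUnit);
    return AudioOutputUnitStart(*outUnit);
}
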
> Also, from the examples I've seen, audio for different channels is
> written to different buffers. If I just have a single stream of
> decoded IEEE wave data, is it possible to write it to a single
> buffer, where the buffer knows how to separate the data for playback?
That depends. If you need mixing and you are going to use an AU to
handle it, then you will need to de-interleave the channels. You can
use an AudioConverter to help with that.
However, if you are just playing one stream or are handling the
mixing yourself, you can feed an interleaved stream directly to AUHAL,
since AUHAL includes an AudioConverter that makes it easy to match
whatever format the hardware wants.
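
For reference, a sketch of what describing that single interleaved
Float32 stream might look like; you set it on the input scope of the
output unit and let its built-in converter do the rest (the sample
rate and channel count here are placeholders):

#include <AudioUnit/AudioUnit.h>

OSStatus set_interleaved_float_format(AudioUnit outUnit)
{
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    /* Native-endian packed floats; leaving out the NonInterleaved
       flag means the samples stay interleaved in one buffer. */
    fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 32;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(Float32);
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;

    /* Input scope of bus 0: the format of the data we will provide. */
    return AudioUnitSetProperty(outUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &fmt, sizeof(fmt));
}
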
Thanks for the suggestions. PortAudio does look pretty cool, but
since we already have a working Windows solution and can't take on
dependencies like that, we'll have to avoid it.
It looks like CoreAudio is the way to go, but after discussing the
matter with another Mac friend I am left wondering about one other
thing: what about the QuickTime Audio APIs? I did a fair bit of
reading, and it looks like they're too high-level for what I want to
do. I also realize this is probably the wrong mailing list, but I
just thought I'd find out.
Is it possible to use the same IEEE float data source to accomplish
streaming with QuickTime Audio?
From what I've read, it really looks like QuickTime is designed to
either play files or resources from fixed buffers, and to add
additional codec support if needed. It looked like I could possibly
set up double buffering, override the double-buffering callbacks, and
maybe accomplish the streaming I'm looking to do. Anyway, I'm just
curious whether someone else has run into this question, or whether
what I'm looking for is too low-level for the QuickTime APIs.
Thanks