Re: Write IEEE float data to buffer
- Subject: Re: Write IEEE float data to buffer
- From: "tahome izwah" <email@hidden>
- Date: Tue, 1 Aug 2006 21:29:54 +0200
Hi Hari,
In addition to what Jeff has said, it may be a good idea for you to
look into PortAudio (http://www.portaudio.org), a cross-platform
library for audio playback that is much easier to learn than the
whole CoreAudio business (I'm still struggling with it myself -
welcome aboard! ;-) )
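For what it's worth, PortAudio's blocking write interface is about as
close to the DirectSound "write data when I want" model as you can
get. Here is a rough sketch (V19 API from memory, so double check it
against the headers; error handling omitted):

#include <portaudio.h>
#include <string.h>

#define SAMPLE_RATE      44100
#define FRAMES_PER_CHUNK 1024

int main(void)
{
    PaStream *stream = NULL;
    float buffer[FRAMES_PER_CHUNK * 2];          /* interleaved stereo */

    Pa_Initialize();

    /* 0 inputs, 2 outputs, 32-bit float samples; a NULL callback
       selects the blocking read/write interface. */
    Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, SAMPLE_RATE,
                         FRAMES_PER_CHUNK, NULL, NULL);
    Pa_StartStream(stream);

    for (int chunk = 0; chunk < 100; ++chunk) {
        /* Put your decoded IEEE float data here, whenever you like
           (silence as a placeholder)... */
        memset(buffer, 0, sizeof(buffer));

        /* ...then push it; Pa_WriteStream blocks until the stream
           has room for the frames. */
        Pa_WriteStream(stream, buffer, FRAMES_PER_CHUNK);
    }

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}

Because you pass NULL for the callback, you decide when to write
instead of being called back, which is the part you were asking about.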
Best regards
--tahome
2006/8/1, Jeff Moore <email@hidden>:
On Aug 1, 2006, at 10:49 AM, Hari Seldon wrote:
> I'm new to programming in Xcode, but am really interested in
> working on the Mac. I'm trying to write audio data to a buffer for
> playback. I normally work with a DirectSound model, where there is
> a manager that creates a sound buffer, I write IEEE float data to
> the buffer, and DirectSound plays it. In that case it's up to me to
> allocate the buffer and ensure the correct lock/unlock model so
> that data is written correctly.
>
> I'm finding the documentation on CoreAudio a little harder to
> follow, but the main idea I'm seeing is that (from
> DefaultOutputUnit example, and others):
> - I'd have to open the default audio unit and initialize it
> - create a mixer (to mix multiple buffers for playback)
> - get some sort of audio unit that represents a buffer
>
> The only problem I've encountered in the examples I've seen so
> far is that they all use a callback for writing data. Is it
> possible to change the model and only write new audio data when I
> request it, rather than being told when to write? I'm trying to
> match the DirectSound model as much as I can. I realize there are
> workarounds I can do (carry an internal queue of data to write
> when the callback asks, etc.), but I'd really prefer to match the
> models as much as possible. Any suggestions?
Core Audio uses a "pull" model for moving data around. This is pretty
much the opposite of the strategy you are using with DirectSound. If
you want to emulate DS, you need to do that yourself.
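To make the pull model concrete, here is a minimal sketch of a render
callback. The callback signature is the real Core Audio API; the
queue (MyQueue, my_queue_read) is purely hypothetical glue you would
write yourself to emulate the DirectSound-style push:

#include <AudioUnit/AudioUnit.h>

/* Hypothetical FIFO your code pushes decoded float data into. */
typedef struct MyQueue MyQueue;
extern UInt32 my_queue_read(MyQueue *q, float *dst, UInt32 frames);

static OSStatus MyRenderCallback(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
{
    MyQueue *queue = (MyQueue *)inRefCon;
    float   *out   = (float *)ioData->mBuffers[0].mData;

    /* Core Audio is pulling: hand over whatever has been queued up
       (mono for brevity), and pad with silence if the queue runs dry. */
    UInt32 got = my_queue_read(queue, out, inNumberFrames);
    for (UInt32 i = got; i < inNumberFrames; ++i)
        out[i] = 0.0f;

    return noErr;
}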
Toward that end, you might find the OpenAL API a bit more familiar.
Otherwise, you need to handle the mixing and scheduling yourself. You
can use an instance of one of our mixer AUs to handle the mixing and
several instances of AUScheduledSoundPlayer to schedule multiple
streams of data for playback, or, of course, do both entirely in your
own code.
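As a rough sketch of the scheduling side, feeding one slice of data
to an AUScheduledSoundPlayer looks something like this (2006-era
Component Manager calls, no error checking, field names from memory,
so check AudioUnitProperties.h; MakeScheduledPlayer is just an
illustrative wrapper):

#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Schedule one slice of float data on an AUScheduledSoundPlayer.
   bufferList must stay valid until the slice has finished playing. */
static AudioUnit MakeScheduledPlayer(AudioBufferList *bufferList,
                                     UInt32 numFrames)
{
    ComponentDescription desc = { kAudioUnitType_Generator,
                                  kAudioUnitSubType_ScheduledSoundPlayer,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit player = NULL;

    OpenAComponent(FindNextComponent(NULL, &desc), &player);
    AudioUnitInitialize(player);

    static ScheduledAudioSlice slice;       /* must outlive this call */
    memset(&slice, 0, sizeof(slice));
    slice.mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    slice.mTimeStamp.mSampleTime = 0;       /* start of the player's timeline */
    slice.mNumberFrames          = numFrames;
    slice.mBufferList            = bufferList;
    AudioUnitSetProperty(player, kAudioUnitProperty_ScheduleAudioSlice,
                         kAudioUnitScope_Global, 0, &slice, sizeof(slice));

    /* A start time stamp with a sample time of -1 means "start now". */
    AudioTimeStamp startTime;
    memset(&startTime, 0, sizeof(startTime));
    startTime.mFlags      = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    AudioUnitSetProperty(player, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &startTime,
                         sizeof(startTime));
    return player;
}

The player's output still has to be connected to a mixer AU or to
AUHAL (for example through an AUGraph) before you hear anything.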
However you do it, you should definitely use AUHAL (aka the default
output unit) to interface with the hardware.
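For instance, opening the default output unit and attaching the
render callback sketched above comes down to something like this
(again Component Manager style, error checking omitted;
OpenDefaultOutput is just an illustrative name):

#include <AudioUnit/AudioUnit.h>

/* Open the default output unit, attach a render callback on input
   bus 0, and start the hardware pulling data through it. */
static AudioUnit OpenDefaultOutput(AURenderCallback renderProc, void *refCon)
{
    ComponentDescription desc = { kAudioUnitType_Output,
                                  kAudioUnitSubType_DefaultOutput,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit output = NULL;

    OpenAComponent(FindNextComponent(NULL, &desc), &output);

    AURenderCallbackStruct cb = { renderProc, refCon };
    AudioUnitSetProperty(output, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(output);
    AudioOutputUnitStart(output);
    return output;
}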
> Also, from the examples I've seen, audio for different channels is
> written to different buffers. If I just have a single stream of
> decoded IEEE wave data, is it possible to write this to a single
> buffer, where the buffer knows how to separate the data for playback?
That depends. If you need mixing and you are going to use an AU to
handle it, then you will need to de-interleave the channels. You can
use an AudioConverter to help with that.
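For illustration, the two formats involved and the converter between
them might be set up like this (setup only; the actual conversion
would go through AudioConverterFillComplexBuffer, and MakeDeinterleaver
is just an illustrative name):

#include <AudioToolbox/AudioToolbox.h>

static AudioConverterRef MakeDeinterleaver(Float64 sampleRate, UInt32 channels)
{
    /* Interleaved 32-bit float, as it comes out of the decoder. */
    AudioStreamBasicDescription interleaved = {0};
    interleaved.mSampleRate       = sampleRate;
    interleaved.mFormatID         = kAudioFormatLinearPCM;
    interleaved.mFormatFlags      = kAudioFormatFlagIsFloat |
                                    kAudioFormatFlagIsPacked;
    interleaved.mChannelsPerFrame = channels;
    interleaved.mBitsPerChannel   = 32;
    interleaved.mFramesPerPacket  = 1;
    interleaved.mBytesPerFrame    = channels * sizeof(Float32);
    interleaved.mBytesPerPacket   = interleaved.mBytesPerFrame;

    /* Same samples, de-interleaved: one buffer per channel, which is
       what the mixer AU expects. */
    AudioStreamBasicDescription deinterleaved = interleaved;
    deinterleaved.mFormatFlags   |= kAudioFormatFlagIsNonInterleaved;
    deinterleaved.mBytesPerFrame  = sizeof(Float32);   /* per channel */
    deinterleaved.mBytesPerPacket = deinterleaved.mBytesPerFrame;

    AudioConverterRef converter = NULL;
    AudioConverterNew(&interleaved, &deinterleaved, &converter);
    return converter;
}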
However, if you are just playing one stream or are handling the
mixing yourself, you can feed an interleaved stream directly to AUHAL,
since AUHAL includes an AudioConverter to make it easy to match what
the hardware wants.
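So in the single-stream case, the negotiation can be as small as
telling AUHAL what you plan to hand it, e.g. (a sketch;
SetInterleavedFloatFormat is just an illustrative name):

#include <AudioUnit/AudioUnit.h>

/* Tell the output unit that our render callback delivers interleaved
   stereo 32-bit float; AUHAL's built-in converter does the rest. */
static void SetInterleavedFloatFormat(AudioUnit output, Float64 sampleRate)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 32;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = 2 * sizeof(Float32);
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

    AudioUnitSetProperty(output, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));
}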
--
Jeff Moore
Core Audio
Apple