Re: Catching output from OpenAL
- Subject: Re: Catching output from OpenAL
- From: Zack Morris <email@hidden>
- Date: Mon, 6 Dec 2010 09:14:29 -0700
On Dec 6, 2010, at 7:24 AM, Pi wrote:
> ...
>
> How can I catch the output of OpenAL (i.e. this line):
>
> alSourcePlay(source->sourceId);
>
> and somehow pipe this into a buffer, or connect it up to some audio unit so I can intercept the render callback, etc.? I.e., how can I get access to it in any way?
As far as I know, this is one of the great failings of OpenAL. Since it was originally written for sound cards, it was a one-way ticket, much like uploading a texture into OpenGL, where it used to be slow and awkward to get that data back into a RAM buffer.
OpenAL needs to be updated with callbacks at the source level and at the final output level, so that the program can grab, say, 2048-byte buffers and swizzle the data however it wants. Without a system like that, it is very difficult to do custom effects like reverberation. We originally had a MOD-style player with callbacks that let us do whatever we wanted to the sound, but we've since scrapped it in favor of pre-rendered buffers in OpenAL for things like echoes. I suppose it would still be possible to do something like this by running OpenAL on a separate thread and pre-rendering every buffer by hand before queueing it on the OpenAL source (roughly as sketched below). There would be a lot of queuing/threading/timing issues, but it might work.
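For what it's worth, that workaround could be built on the standard OpenAL buffer-queueing calls. Here is an untested sketch; render_next_chunk() is a hypothetical stand-in for whatever pre-render/effect code you have, and the mono-16/44100 format is just an assumption:

#include <OpenAL/al.h>

#define CHUNK 2048  /* bytes per queued buffer */

extern void render_next_chunk(short *pcm, int nbytes);  /* your DSP, hypothetical */

/* call this regularly; assumes the source was primed with a few
   buffers and alSourcePlay()'d at startup */
void stream_update(ALuint source)
{
    ALint done = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &done);
    while (done-- > 0) {
        ALuint buf;
        short pcm[CHUNK / sizeof(short)];
        /* reclaim a played buffer, refill it, and requeue it */
        alSourceUnqueueBuffers(source, 1, &buf);
        render_next_chunk(pcm, CHUNK);
        alBufferData(buf, AL_FORMAT_MONO16, pcm, CHUNK, 44100);
        alSourceQueueBuffers(source, 1, &buf);
    }
    /* if the queue ever runs dry the source stops; check AL_SOURCE_STATE
       and alSourcePlay() again if needed */
}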
IMHO issues like this are serious enough that it may be time to fork OpenAL for the modern world, where audio is rarely more than 5% of an application's overhead and it's more important to have full control. I would also like to see a general-purpose wrapper around CoreAudio and AudioUnits, written in the down-to-earth style of more recent APIs like SDL or Ogg Vorbis. Off the top of my head, it would look something like this:
output sound -> 2048 callback Aout as used -> output mixer or sound
output sound -> 2048 callback Bout as used -> output mixer or sound
...
output sound -> 2048 callback Xout as used -> output mixer or sound
output mixer -> 2048 callback Zout -> sound card
also:
sound card -> callback Ain as arrives -> input mixer or sound
sound card -> callback Bin as arrives -> input mixer or sound
...
sound card -> callback Xin as arrives -> input mixer or sound
input mixer -> 2048 callback Zin -> input sound
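To make that concrete, here is a minimal single-file sketch of the callback-mixer idea; every name in it is invented for illustration, not taken from any real library:

#include <stddef.h>
#include <string.h>

#define MAX_FRAMES 2048
#define MAX_CB     16

typedef void (*snd_callback)(float *buf, size_t nbytes, void *user);

static snd_callback g_cbs[MAX_CB];
static void        *g_users[MAX_CB];
static int          g_ncb;

/* register one of the per-source "Aout..Xout" callbacks */
int snd_add_output_callback(snd_callback cb, void *user)
{
    if (g_ncb == MAX_CB) return -1;
    g_cbs[g_ncb]   = cb;
    g_users[g_ncb] = user;
    return g_ncb++;
}

/* the "Zout" stage: pull every source callback, sum into the master mix */
void snd_render_master(float *mix, size_t nbytes)
{
    float scratch[MAX_FRAMES];
    if (nbytes > sizeof(scratch)) return;
    memset(mix, 0, nbytes);
    for (int i = 0; i < g_ncb; i++) {
        g_cbs[i](scratch, nbytes, g_users[i]);
        for (size_t j = 0; j < nbytes / sizeof(float); j++)
            mix[j] += scratch[j];
    }
}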
Callback Zout could do things like echo cancellation against callback Zin for games. All of the math could just be floating-point buffers, with sizes passed in as bytes instead of channels or frames. Convenience functions above the library would handle 16-bit short-to-float conversion.
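Such a convenience helper could be as simple as the following (hypothetical name, assuming interleaved 16-bit PCM):

#include <stddef.h>
#include <stdint.h>

/* convert interleaved 16-bit PCM to float in [-1, 1); size in bytes,
   matching the byte-oriented convention above */
void snd_pcm16_to_float(const int16_t *in, float *out, size_t nbytes)
{
    size_t n = nbytes / sizeof(int16_t);
    for (size_t i = 0; i < n; i++)
        out[i] = (float)in[i] / 32768.0f;
}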
The library wouldn't even have to mention threading; the callbacks would handle all of the mutexing internally so it stays thread-safe.
There could even be an optional single-threaded mode, where you tell the library you can service your callback every, say, 100 ms, and it calculates that you need a 4410-frame buffer (100 ms at 44.1 kHz). The callback could then arrive during the main event loop, or as an event saying the buffer needs updating, and you would notify the library when you are done swizzling the data.
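From the app's side, that polled mode might look like this; again, every snd_* name here is made up to illustrate the idea:

#include <stddef.h>

extern int  snd_buffer_needs_data(void);                    /* hypothetical */
extern void snd_borrow_buffer(float **buf, size_t *nbytes); /* hypothetical */
extern void snd_return_buffer(void);                        /* hypothetical */

/* called once per pass through the main event loop */
void event_loop_tick(void)
{
    if (snd_buffer_needs_data()) {
        float *buf;
        size_t nbytes;
        snd_borrow_buffer(&buf, &nbytes);  /* e.g. 4410 frames for 100 ms at 44.1 kHz */
        /* swizzle/mix the next chunk into buf here */
        snd_return_buffer();               /* tell the library we're done */
    }
}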
I bet CoreAudio already does much of this, but honestly I was never able to learn it because the documentation was lacking. Also, it's not cross-platform, so like most Apple APIs, game programmers have to leave it by the wayside. We've coded much of this in our engine to play Ogg Vorbis and Theora, but it was a huge pain in the @$$, because most libs today are low-level (able to extract the data from a file, decompress it, and that's it) and there is no good middleware to do the steps I described above. We didn't want to get into threading, so right now we do echoes and other effects as pre-rendered buffers that we pass to the AL sources. Unfortunately, this doubles or triples our RAM usage. Also, doing it in realtime would use a lot of bandwidth, because the buffers wouldn't be resident on the sound card (not sure if Macs need to worry about this, but it might be an issue on PCs).
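That said, CoreAudio can tap an output AudioUnit you own via AudioUnitAddRenderNotify: the notify proc fires before and after each render, and on the post-render pass ioData holds the freshly mixed samples. Whether you can get at the unit OpenAL creates internally is another question, so this sketch assumes you opened the output unit yourself:

#include <AudioUnit/AudioUnit.h>

static OSStatus tap(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        /* ioData->mBuffers[i].mData now holds the rendered audio;
           copy it out here (this runs on the render thread, keep it cheap) */
    }
    return noErr;
}

/* outputUnit is an AudioUnit you opened yourself */
void install_tap(AudioUnit outputUnit)
{
    AudioUnitAddRenderNotify(outputUnit, tap, NULL);
}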
I just look at it this way: if you give me a sound card with an input and an output buffer, plus a flag that gets set when a buffer fills (or that we set after we fill it), then it should be straightforward to make a library like this. If there is a cross-platform lib like a "libsound" or something that provides those primitives, I might be interested in helping out on an open source layer for all of the swizzling and mixing.
Sorry for the rant :-)
--Zack