Re: IOAudioMixerEngine.cpp
- Subject: Re: IOAudioMixerEngine.cpp
- From: Nathan Aschbacher <email@hidden>
- Date: Wed, 07 Aug 2002 23:10:31 -0700
Well, I can think of one quick way to boost performance of the default
mixOutputSamples function. You're doing four scalar single-precision
floating-point additions in sequence. That seems like a perfect candidate for
loading the four values from mixBuf into one 128-bit vector, loading the four
values from sourceBuf into another, doing a single vector addition, and
storing the result back to mixBuf. At the very least you'd be moving some
computation from a general-purpose processing unit onto a more specialized
unit that sits idle plenty of the time. I may try changing the code to work
that way, recompile the base system audio drivers from CVS, and see how it
performs.
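To make the idea concrete, here's a minimal sketch of what I mean. On PowerPC with AltiVec the loop body would map onto a single vec_add of two 128-bit vectors; this portable C version just expresses the same four-wide addition. The names (mixBuf, sourceBuf, numSamples) follow my description above, not the actual IOAudioEngine signature:

```c
/* Hypothetical sketch: mix four float samples per iteration so the
   additions can be done as one 128-bit vector add instead of four
   scalar adds in sequence. */
static void mixFourWide(float *mixBuf, const float *sourceBuf,
                        unsigned numSamples)
{
    unsigned i = 0;
    /* four-wide main loop: one vector add's worth of work per pass */
    for (; i + 4 <= numSamples; i += 4) {
        mixBuf[i + 0] += sourceBuf[i + 0];
        mixBuf[i + 1] += sourceBuf[i + 1];
        mixBuf[i + 2] += sourceBuf[i + 2];
        mixBuf[i + 3] += sourceBuf[i + 3];
    }
    /* scalar tail for buffers whose length isn't a multiple of four */
    for (; i < numSamples; ++i)
        mixBuf[i] += sourceBuf[i];
}
```

The scalar tail matters because nothing guarantees the sample count is a multiple of four.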
Anyhow, that's beside the point. Matt is right: what I'm looking to do is
take load off the CPU. Whether the sound processor is faster or not matters
less than taking the load off the CPU. What I'm trying to build here is what
the Windows world calls "hardware accelerated audio," where the audio
hardware can reach into the system's audio kernel buffers very early in the
process and perform much of the required work so the CPU doesn't have to. It
makes a measurable performance difference on Intel PCs even in non-gaming
scenarios, simply because the CPU is freed up while audio plays. So what I've
been trying to determine is: 1) Is this even possible? 2) If not, why not?
What in MacOS X prevents this kind of functionality? 3) If so, where should I
be looking to hook into MacOS X's audio processing stages so that an external
audio processor can do some of the work and free up the main CPU?
So if IOAudioEngine::mixOutputSamples is the last stage at which the CPU
handles the audio buffers, then I'll want to attack this problem from higher
up the chain. My question then becomes: where? I'm given to understand that
this has never been done before on the Mac, and I and an eager group of other
developers are interested in seeing it happen. But where the Windows driver
development documentation for a DirectSound driver makes this process and its
purpose very clear, the lack of an obvious parallel in the MacOS X sound
system is complicating things.
It also sounds like Jaguar may provide me with some better tools to work
with, however. The native-format capability ought to be very handy. My
concern was that the CPU's burden of running the float -> int conversions, so
that the card (which only works on 32-bit integers) can do some of the work,
would add more overhead than the card-side mixing saves.
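To illustrate the conversion I'm worried about: CoreAudio works in 32-bit floats nominally in [-1.0, 1.0], so feeding an integer-only card means clipping and scaling every sample first. This per-sample work is the overhead that might eat the savings. A rough sketch (the function name and exact scaling are my assumptions, not anything from the actual driver):

```c
#include <stdint.h>

/* Hypothetical sketch: convert one float sample in [-1.0, 1.0] to a
   signed 32-bit integer sample, clipping out-of-range values. */
static int32_t floatToInt32(float sample)
{
    if (sample >= 1.0f)
        return INT32_MAX;          /* clip positive overrange */
    if (sample <= -1.0f)
        return INT32_MIN;          /* clip negative overrange */
    return (int32_t)(sample * 2147483648.0f);  /* scale by 2^31 */
}
```

Multiply that by every sample of every channel of every stream and it's easy to see how the conversion cost could rival the mixing cost it was meant to offload.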
Anyhow, I'm still trying to piece together a clear picture of how what I want
to do fits into the stages and capabilities of MacOS X's CoreAudio APIs. I
VERY much appreciated the thoughtful responses, though.
Thank You,
Nathan
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.