Re: Core Audio for a Game Engine


  • Subject: Re: Core Audio for a Game Engine
  • From: Bill Stewart <email@hidden>
  • Date: Wed, 20 Aug 2003 22:04:35 -0700

The DLSSynth is actually pretty well suited to this kind of usage - there are a number of editors for both SoundFont and DLS files, it will do the pitch shifting for you, volume (just use different velocity values), reverb (or not - depends on how you author the sound), panning, etc...
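
(A minimal sketch in C of driving the DLSSynth, using the Component Manager calls of this era - error handling omitted, and you'd still need to connect the synth's output to a mixer or output unit to hear anything:)

    #include <AudioUnit/AudioUnit.h>
    #include <AudioUnit/MusicDevice.h>

    /* Open and initialize the DLS synth music device. */
    ComponentDescription desc = { kAudioUnitType_MusicDevice,
                                  kAudioUnitSubType_DLSSynth,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    Component comp = FindNextComponent(NULL, &desc);
    AudioUnit synth;
    OpenAComponent(comp, &synth);
    AudioUnitInitialize(synth);

    /* 0x90 = MIDI note-on, channel 0. data1 is the note number (pitch),
       data2 the velocity (1-127) - which is how you control volume. */
    MusicDeviceMIDIEvent(synth, 0x90, 60 /* middle C */, 100, 0);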

As for the pitch change - yes, we realised this was missing, so for Panther we've added a rate-control converter unit - it will pitch shift...
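
(If that rate-control converter is what shipped as the Varispeed unit - an assumption on my part - using it is a single parameter change; a rate of 2.0 plays an octave higher and twice as fast, 0.5 an octave lower:)

    /* varispeedUnit: a kAudioUnitType_FormatConverter AU of subtype
       kAudioUnitSubType_Varispeed, already opened and wired into the chain. */
    AudioUnitSetParameter(varispeedUnit, kVarispeedParam_PlaybackRate,
                          kAudioUnitScope_Global, 0,
                          2.0 /* playback rate */, 0);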

However, I'd also encourage you to look at the 3DMixer. It has several different options for doing spatialized effects (*including* the Doppler effect, which I think is really what you are after), distance filtering, reverb (on or off), and of course it mixes :) There's an example in the CoreAudio SDK (3DMixer) - and in the Panther SDK we've revved this example a bit to show you how to turn the various options on and off to trade rendering cost against quality...
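
(Positioning a source on one of the 3DMixer's input buses is done with per-input parameters from AudioUnitParameters.h - the bus number and coordinates here are illustrative:)

    /* Place the sound on input bus 0: 30 degrees to the listener's left,
       at ear level, 5 meters away. */
    AudioUnitSetParameter(mixer3D, k3DMixerParam_Azimuth,
                          kAudioUnitScope_Input, 0, -30.0, 0);
    AudioUnitSetParameter(mixer3D, k3DMixerParam_Elevation,
                          kAudioUnitScope_Input, 0, 0.0, 0);
    AudioUnitSetParameter(mixer3D, k3DMixerParam_Distance,
                          kAudioUnitScope_Input, 0, 5.0, 0);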

Bill

On Wednesday, August 20, 2003, at 08:24 AM, David Duncan wrote:

On Wednesday, August 20, 2003, at 10:39 AM, Brian Greenstone wrote:

on 8/20/03 9:18 AM, David Duncan at email@hidden wrote:

output, and one mixer unit connected to the reverb. The mixer unit
would be configured to mix n channels of audio for which you could
provide your inputs on.

How does one go about configuring it that way? I don't see any settings for
that anywhere.

You use the kAudioUnitProperty_MakeConnection property to connect two audio units together. You can also use the AUGraph APIs (in AudioToolbox) to connect audio units - this route could be slightly simpler for some needs.
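
(Concretely, a sketch of connecting a mixer's output to an output unit with that property - the variable names are illustrative, and AUGraph does essentially this under the hood:)

    /* Tell the destination unit, on its input scope/element, where its
       audio comes from. */
    AudioUnitConnection conn;
    conn.sourceAudioUnit    = mixerUnit;  /* the upstream unit          */
    conn.sourceOutputNumber = 0;          /* which of its output buses  */
    conn.destInputNumber    = 0;          /* which of our input buses   */

    OSStatus err = AudioUnitSetProperty(outputUnit,
                                        kAudioUnitProperty_MakeConnection,
                                        kAudioUnitScope_Input,
                                        conn.destInputNumber,
                                        &conn, sizeof(conn));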

Volume is a parameter on the mixer's input side (i.e. a parameter for
each element going into the mixer).

You mean like each sample? In the sample code that I used as my starting
point, they were manually calculating the volume of each sample in the
callback function. However, this doesn't really work, because it would only
change the volume when the current buffer is exhausted and new data is
copied in. For games, the volume change needs to be instantaneous, so
there's got to be a way to change the volume of the playback unit itself,
not just by tweaking the sample values.

There are a number of places you can change the volume, depending on what you want to do:
1) On the output unit, to affect overall volume (i.e. a volume control).
2) On the inputs to the mixer unit, to affect relative volumes between samples - so you can have a really loud explosion or a very quiet raindrop effect.
3) On any other audio unit that advertises that it can change the volume (typically output units and mixers, but it could happen elsewhere).

With CoreAudio you can schedule volume changes to occur at pretty much any time within the next audio buffer - and note that by default the buffers in CoreAudio are small, typically on the order of 512 samples, so your latency is already pretty low.
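
(For example, setting the per-input volume on a StereoMixer bus - the last argument to AudioUnitSetParameter is a sample-frame offset into the next buffer, which is how you schedule the change; the bus number and gain are illustrative:)

    /* Drop input bus 2 to quarter gain, 128 frames into the next buffer. */
    AudioUnitSetParameter(mixerUnit, kStereoMixerParam_Volume,
                          kAudioUnitScope_Input,
                          2,      /* input element (bus)     */
                          0.25,   /* gain, 0.0 - 1.0         */
                          128);   /* buffer offset in frames */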

Pitch is harder, as I don't believe there is a built-in pitch-changer
audio unit (and on such a unit you would just change a parameter on it
and feed its output to the mixer).

Yeah, that's why Core Audio baffles me. Pitch changes are a fundamental
part of any game's audio system, yet there doesn't appear to be any way to
do it in Core Audio.

I agree that sometimes Core Audio does seem to lack things that you would expect for a game; however, I think this is because most of the input has come from the music community, where such effects are encapsulated in synths (like the DLS synth) that respond to MIDI.

As for why your property calls aren't having an effect, all I can say is
to check the return value - you should be getting some error if it is
rejecting your changes. And if you're not getting an error, it could be
an AU further upstream that is reconfiguring an internal Audio
Converter (such as the Output Unit, which does this for sample rate
conversion if you specify a rate that the hardware doesn't support).

Depending on which Unit I mess with I get different results. If I mess with
the Output Unit I get a "Modifications not allowed" error, but if I modify
the Mixer Unit, I get no errors, but nothing happens.

Are you doing this on the input or the output scope? You will get different results depending on which scope you use, because on an output unit the output scope is the hardware, while the input scope is actually the input to an internal audio converter AU. You also can't change this property once you've initialized the AU (until you Uninitialize it). And unlike with Sound Manager, the sample rate can't be used as a way to speed up a sound: for the AUs that must bridge sample rates (i.e. an Output or Converter Unit), the whole point is to convert the rate by creating or removing samples, rather than simply playing the given samples back faster.
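
(As a sketch, changing the rate an output unit accepts means touching its input scope while the unit is uninitialized - setting the same property on the output scope, which is the hardware, is what produces the "not allowed" error:)

    /* Read the current input-side format, change only the sample rate,
       and set it back - all before AudioUnitInitialize. */
    AudioStreamBasicDescription fmt;
    UInt32 size = sizeof(fmt);
    AudioUnitGetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, &size);
    fmt.mSampleRate = 22050.0;   /* illustrative rate */
    OSStatus err = AudioUnitSetProperty(outputUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Input, 0,
                                        &fmt, sizeof(fmt));
    AudioUnitInitialize(outputUnit);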

Now that I write that, however, I realize that you could do pitch with an Audio Converter, by asking it to convert the audio from one sample rate to another and then feeding the result to an Audio Unit still working at the standard sample rate. For example, you would set up the converter to convert from 44100 to your target sample rate, and then feed that audio back to an input that is still set up for 44100. The downside is that this may be harder to do in realtime than writing something yourself.
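
(A sketch of that converter trick - fmt44k and pitchRatio are illustrative; converting to a lower rate and then replaying the result as if it were still 44100 Hz raises the pitch by the ratio:)

    #include <AudioToolbox/AudioToolbox.h>

    Float64 pitchRatio = 2.0;                  /* 2.0 = one octave up  */
    AudioStreamBasicDescription src = fmt44k;  /* your 44100 Hz format */
    AudioStreamBasicDescription dst = fmt44k;
    dst.mSampleRate = 44100.0 / pitchRatio;    /* fewer samples out, so the
                                                  audio plays back higher */
    AudioConverterRef converter;
    OSStatus err = AudioConverterNew(&src, &dst, &converter);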

Seems that Core Audio still has a long way to go (in terms of documentation,
sample code, and high level calls). I guess right now it's really just a
low level thing for people writing audio drivers and such. Seems that I
should probably stick with using Sound Manager for actually playing sounds.

Documentation is a common lament on Mac OS X =). Sample code, I suppose, depends more on what you're doing. High-level calls are, from the POV of a game programmer at least, possibly missing by design, as CoreAudio wasn't really built around the "one function call to play" model. CoreAudio is, however, a great system to build such a model on, as it allows you a lot of control over what your audio pipeline looks like.
--
Reality is what, when you stop believing in it, doesn't go away.
Failure is not an option. It is a privilege reserved for those who try.

David Duncan


-- mailto:email@hidden
tel: +1 408 974 4056

__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________