Re: Questions on mixing .wav files
- Subject: Re: Questions on mixing .wav files
- From: Jay Bone <email@hidden>
- Date: Mon, 17 Aug 2009 14:17:47 -0700
Thanks, everyone, for your replies. But now I am noticing something very strange.
I've begun recoding my app to use AudioFileOpenURL()/AudioFileCreateWithURL() to mix all of my .wav's down into a single .caf, then using AVAudioPlayer to play that .caf, which works fine.
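The mixing step itself is roughly this (a compressed sketch, not my exact code: it assumes every input shares one 16-bit mono LPCM format, uses made-up names, and drops most error checking):

  #include <AudioToolbox/AudioToolbox.h>

  static void MixTwoWavsToCaf(CFURLRef wavA, CFURLRef wavB, CFURLRef cafOut)
  {
      AudioFileID inA, inB, out;
      AudioFileOpenURL(wavA, kAudioFileReadPermission, kAudioFileWAVEType, &inA);
      AudioFileOpenURL(wavB, kAudioFileReadPermission, kAudioFileWAVEType, &inB);

      // Copy the (assumed shared) data format onto the output .caf.
      AudioStreamBasicDescription fmt;
      UInt32 size = sizeof(fmt);
      AudioFileGetProperty(inA, kAudioFilePropertyDataFormat, &size, &fmt);
      AudioFileCreateWithURL(cafOut, kAudioFileCAFType, &fmt,
                             kAudioFileFlags_EraseFile, &out);

      SInt16 bufA[4096], bufB[4096];
      SInt64 offset = 0;
      for (;;) {
          UInt32 bytesA = sizeof(bufA), bytesB = sizeof(bufB);
          OSStatus stA = AudioFileReadBytes(inA, false, offset, &bytesA, bufA);
          if ((stA != noErr && stA != kAudioFileEndOfFileError) || bytesA == 0)
              break;
          OSStatus stB = AudioFileReadBytes(inB, false, offset, &bytesB, bufB);
          if (stB != noErr && stB != kAudioFileEndOfFileError)
              bytesB = 0;
          // Sum the samples, clamping so overlapping peaks don't wrap around.
          for (UInt32 i = 0; i < bytesA / sizeof(SInt16); i++) {
              SInt32 s = bufA[i];
              if (i < bytesB / sizeof(SInt16)) s += bufB[i];
              if (s >  32767) s =  32767;
              if (s < -32768) s = -32768;
              bufA[i] = (SInt16)s;
          }
          AudioFileWriteBytes(out, false, offset, &bytesA, bufA);
          offset += bytesA;
          if (stA == kAudioFileEndOfFileError) break;  // last partial chunk
      }
      AudioFileClose(inA); AudioFileClose(inB); AudioFileClose(out);
  }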
But I then noticed that my old code, which has not yet been refactored to use the lower-level APIs, suddenly works.
It seems that using AVAudioPlayer to play the .caf makes my old AudioServicesPlaySystemSound/NSThread sleepForTimeInterval: code work again. Why is this?
Does this AVAudioPlayer call give the process some special privilege, or otherwise affect latency in subsequent calls to AudioServicesPlaySystemSound?
Is it then a valid workaround to play a .caf file at some point during application start / initialization (possibly a silent .caf) in order to achieve the same behavior I saw under 2.2.x?
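In other words, something like this in the app delegate ("silence.caf" is a made-up bundled resource, and I have no idea whether this is a sanctioned trick):

  #import <AVFoundation/AVFoundation.h>

  - (void)applicationDidFinishLaunching:(UIApplication *)application
  {
      NSString *path = [[NSBundle mainBundle] pathForResource:@"silence"
                                                       ofType:@"caf"];
      NSError *error = nil;
      AVAudioPlayer *player = [[AVAudioPlayer alloc]
          initWithContentsOfURL:[NSURL fileURLWithPath:path] error:&error];
      [player prepareToPlay];
      [player play];  // appears to "warm up" audio for later
                      // AudioServicesPlaySystemSound() calls
      // player is deliberately never released here so playback isn't cut
      // short; a real app should keep a reference and release it properly
  }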
Confused
-J
On Sat, Aug 8, 2009 at 1:07 AM, Jay Bone <email@hidden> wrote:
Hello CoreAudio gurus,
I am trying to mix multiple small .wav files to play on iPhone OS 3.0.
Prior to the 3.0.1 SDK (using 2.2.1) I achieved this quite easily using AudioServicesPlaySystemSound() and an NSThread with sleepForTimeInterval:.
But something must have changed with regard to either NSThread sleeping or AudioServicesPlaySystemSound(), as this approach no longer works for me in 3.0.
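Concretely, the 2.2.x-era code was along these lines (a trimmed sketch with made-up resource names):

  #import <Foundation/Foundation.h>
  #import <AudioToolbox/AudioToolbox.h>

  // Spawned with [NSThread detachNewThreadSelector:@selector(playLoop)
  //                                        toTarget:self withObject:nil];
  - (void)playLoop
  {
      NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
      NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle]
                        pathForResource:@"bassdrum" ofType:@"wav"]];
      SystemSoundID soundID;
      AudioServicesCreateSystemSoundID((CFURLRef)url, &soundID);
      for (int i = 0; i < 8; i++) {
          AudioServicesPlaySystemSound(soundID);  // fire-and-forget; plays overlap
          [NSThread sleepForTimeInterval:0.125];  // spacing between triggers
      }
      AudioServicesDisposeSystemSoundID(soundID);
      [pool release];
  }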
My suspicion is that I should be using the AudioUnit APIs instead, and those are all new to me.
So I want to define an AudioUnit output node of subtype kAudioUnitSubType_RemoteIO and then connect a kAudioUnitSubType_MultiChannelMixer to it.
The next step would be connecting my .wav's to the kAudioUnitSubType_MultiChannelMixer, but it is not clear to me how this is done.
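What I have so far is just the mixer-to-output part (an untested sketch with error checking dropped):

  #include <AudioToolbox/AudioToolbox.h>

  static void StartGraph(void)
  {
      AUGraph graph;
      AUNode  mixerNode, outputNode;
      NewAUGraph(&graph);

      AudioComponentDescription mixerDesc = {0};
      mixerDesc.componentType         = kAudioUnitType_Mixer;
      mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
      mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
      AUGraphAddNode(graph, &mixerDesc, &mixerNode);

      AudioComponentDescription ioDesc = {0};
      ioDesc.componentType            = kAudioUnitType_Output;
      ioDesc.componentSubType         = kAudioUnitSubType_RemoteIO;
      ioDesc.componentManufacturer    = kAudioUnitManufacturer_Apple;
      AUGraphAddNode(graph, &ioDesc, &outputNode);

      // Mixer output bus 0 feeds RemoteIO input bus 0.
      AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);

      AUGraphOpen(graph);
      AUGraphInitialize(graph);
      AUGraphStart(graph);
  }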
Is there an AudioUnit which can be used to wrap a .wav file?
I've found this post to the list:
http://lists.apple.com/archives/coreaudio-api/2009/Jul/msg00066.html
which seems to indicate the following can be done:
ExtAudioFileOpenURL(voice) --\
                              +--> AUMixer(kAudioUnitSubType_MultiChannelMixer) --> AUOutput(kAudioUnitSubType_RemoteIO)
ExtAudioFileOpenURL(music) --/
What I don't understand is how an ExtAudioFileOpenURL() is "connected" to the mixer AudioUnit. The docs define the ExtAudioFileOpenURL() interface as:

  OSStatus ExtAudioFileOpenURL (
      CFURLRef         inURL,
      ExtAudioFileRef  *outExtAudioFile
  );
Can I "connect" (via AUGraphConnectNodeInput() ) an ExtAudioFileRef to a mixer node? If so, it's not clear to me from the docs how this is done.
Also, I'd like to mix the same .wav into the output multiple times at different time offsets/intervals, e.g. a bassdrum .wav every 0.125 seconds. I noticed some kind of delay audio unit in the docs, but nothing mentioned for iPhone specifically. Is there a delay audio unit that can be used to achieve this?
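If there isn't, the only alternative I can picture is doing the retriggering myself with frame arithmetic inside a render callback. A sketch (assuming a 16-bit mono 44.1 kHz client format on the mixer input, and a drum hit shorter than the 0.125 s period):

  #include <AudioToolbox/AudioToolbox.h>

  typedef struct {
      SInt16 *sample;        // drum sample data, loaded elsewhere
      UInt32  sampleFrames;  // length of the sample in frames
      UInt32  playhead;      // running frame counter for the output
  } DrumState;

  static OSStatus DrumCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
  {
      DrumState *st = (DrumState *)inRefCon;
      SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
      const UInt32 period = 44100 / 8;  // retrigger every 0.125 s

      for (UInt32 i = 0; i < inNumberFrames; i++, st->playhead++) {
          UInt32 pos = st->playhead % period;
          out[i] = (pos < st->sampleFrames) ? st->sample[pos]  // inside a hit
                                            : 0;               // gap between hits
      }
      return noErr;
  }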
Can this all be done without implementing my own callbacks to manually fill buffers?
Thanks in advance for any tips or help with this.
-J