iOS CoreAudio MIDISynth music device configuration
- Subject: iOS CoreAudio MIDISynth music device configuration
- From: Bartosz Nowotny <email@hidden>
- Date: Wed, 04 Jul 2018 00:02:45 +0200
Hello
I need advice on how to properly configure AudioUnits in my MIDISynth iOS
app.
In my code, I start by configuring the AudioSession: I set the category
(playback), the preferred sample rate, and the preferred buffer size, and
then activate the session.
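Roughly, my session setup looks like this (the sample rate and buffer duration values here are illustrative):

```swift
import AVFoundation

// Configure and activate the audio session before building the graph.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setPreferredSampleRate(44_100)
try session.setPreferredIOBufferDuration(0.005) // ~256 frames at 44.1 kHz
try session.setActive(true)
```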
Next up, I create the graph: multiple synth units
(kAudioUnitSubType_MIDISynth) -> multichannel mixer -> remote IO.
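For context, the graph is built with the AUGraph API, along these lines (one synth node shown; in the real app I add several):

```swift
import AudioToolbox

var graph: AUGraph?
NewAUGraph(&graph)

// Component descriptions for the three node types in the chain.
var synthDesc = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: kAudioUnitSubType_MIDISynth,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var mixerDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Mixer,
    componentSubType: kAudioUnitSubType_MultiChannelMixer,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var ioDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

var synthNode = AUNode(), mixerNode = AUNode(), ioNode = AUNode()
AUGraphAddNode(graph!, &synthDesc, &synthNode)
AUGraphAddNode(graph!, &mixerDesc, &mixerNode)
AUGraphAddNode(graph!, &ioDesc, &ioNode)
AUGraphOpen(graph!)

// Wire synth output -> mixer input bus 0, mixer output -> RemoteIO input.
AUGraphConnectNodeInput(graph!, synthNode, 0, mixerNode, 0)
AUGraphConnectNodeInput(graph!, mixerNode, 0, ioNode, 0)
```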
For mixer unit, I configure number of input elements (buses) and maximum
frames per slice.
For synth units, I configure the soundbank URL and maximum frames per slice.
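Concretely, the per-unit configuration I apply is roughly the following, where `synthUnit`, `mixerUnit`, and `soundFontURL` come from the graph setup above, and the bus count and frame values are examples:

```swift
import AudioToolbox

// Mixer: number of input buses (one per synth unit).
var busCount: UInt32 = 8
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input, 0, &busCount,
                     UInt32(MemoryLayout<UInt32>.size))

// Mixer and synths: maximum frames per slice (needed for screen-locked
// rendering, where slices can be larger than the default 1156 frames).
var maxFrames: UInt32 = 4096
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames,
                     UInt32(MemoryLayout<UInt32>.size))
AudioUnitSetProperty(synthUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames,
                     UInt32(MemoryLayout<UInt32>.size))

// Synths: soundbank URL pointing at the SoundFont file.
var bankURL = soundFontURL as CFURL
AudioUnitSetProperty(synthUnit, kMusicDeviceProperty_SoundBankURL,
                     kAudioUnitScope_Global, 0, &bankURL,
                     UInt32(MemoryLayout<CFURL>.size))
```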
This setup is enough for my app to successfully produce music by sending
MIDI note on/off events to specific synth units. For some soundfonts,
however, the produced sound is incorrect, as if it were distorted. Because
the soundfonts I'm using are popular, publicly available ones that have
been tested on multiple devices and with different synths, I'm fairly
certain the soundfonts themselves are not at fault. My best guess is that
I'm missing part of the configuration:
1. Is any additional configuration required for any of the AudioUnits I
use? In particular, should I configure the synth units' output stream
format so that, for instance, the sample rate matches what the hardware
actually uses? Should I also configure the stream format for the mixer or
IO units? What should those stream format configurations look like?
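To be clear about what I mean by a stream format configuration (this is not in my code today, just the kind of thing I imagine might be needed):

```swift
import AudioToolbox
import AVFoundation

// Read the synth's current output format, then overwrite only the sample
// rate with the hardware rate reported by the session.
let hwSampleRate = AVAudioSession.sharedInstance().sampleRate
var streamFormat = AudioStreamBasicDescription()
var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
AudioUnitGetProperty(synthUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &streamFormat, &size)
streamFormat.mSampleRate = hwSampleRate
AudioUnitSetProperty(synthUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &streamFormat, size)
```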
2. If I do need the above configuration, how should I respond to audio
session route changes? I noticed, for instance, that plugging in
headphones changes the hardware output sample rate from 48 kHz to 44.1 kHz.
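I can detect the change easily enough; what to do in the handler (reconfigure formats? rebuild the graph?) is exactly what I'm unsure about:

```swift
import AVFoundation

// Observe route changes and log the new hardware sample rate.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main) { _ in
    let newRate = AVAudioSession.sharedInstance().sampleRate
    print("Route changed; hardware sample rate is now \(newRate) Hz")
}
```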
Regards,
Bartosz
_______________________________________________
Coreaudio-api mailing list