Re: iOS CoreAudio MIDISynth music device configuration
Re: iOS CoreAudio MIDISynth music device configuration
- Subject: Re: iOS CoreAudio MIDISynth music device configuration
- From: Douglas Scott <email@hidden>
- Date: Fri, 06 Jul 2018 12:35:43 -0700
I have been able to reproduce the very odd behavior of instruments played from
this bank. I’ll report back when I have more information to share.
-DS
> On Jul 6, 2018, at 10:53 AM, Bartosz Nowotny <email@hidden> wrote:
>
> A quick followup:
>
> Is it possible that some of the instruments are running out of voices (e.g.
> the aforementioned piano instrument)? As an experiment I switched synth units
> from AUMIDISynth to AUSampler and tried setting voice count. It seems the
> default is 64 voices and I tried to set it to 128 or even 256 but
> unfortunately that seemed to have no effect on the issue. I accomplished that
> by getting the ClassInfo property with global scope, casting to a dictionary,
> assigning an NSNumber to "voice count" key and finally setting the ClassInfo
> back. Is that the correct way to do this? There were no errors logged.
>
> Bartosz
>
> On Thu, Jul 5, 2018 at 12:19 PM, Bartosz Nowotny <email@hidden
> <mailto:email@hidden>> wrote:
> I'm using AVAudioSession to configure my audio session and set up preferred
> sample rate, buffer size. Other than that, I'm using exclusively the C API -
> AUGraph and AudioUnits.
>
> My app *sometimes* needs more than 1 MIDISynth unit running at a time because
> some songs require 2 different soundfonts to be loaded simultaneously. As far
> as my understanding goes, a MIDISynth unit can only load a single soundfont.
> Am I fundamentally mistaken here?
>
> Since my original email, I configured my audio session and all the audio
> units to use consistent sample rate so that no resampling has to be done at
> any point. The issue still pertains.
>
> The issue is clearly audible when using Yamaha 9ft Grand piano preset from
> CompiFONT (http://pphidden.wixsite.com/compifont
> <http://pphidden.wixsite.com/compifont>). This particular soundfont is really
> big in size. Since I only need the piano preset, I use a soundfont that has
> just that one preset extracted (download:
> https://mega.nz/#!nYoz0YxZ!gvwd7hCibvG0_n8xEunSJlBapo9d6VhvLg7uNQFsSrw
> <https://mega.nz/#!nYoz0YxZ!gvwd7hCibvG0_n8xEunSJlBapo9d6VhvLg7uNQFsSrw>).
>
> I should also say that this issue is present regardless of the number of
> MIDISynth units running - it sounds the same with 1 MIDISynth unit or more.
> Moreover, that very same soundfont and bank/preset is used in Android version
> of the app where the backing synth is FluidSynth and it sounds lovely - with
> polyphony count set to 64!
>
> If it would be helpful, I can record how the piano sounds in my iOS app vs a
> synth on Windows or Android.
>
> Regards,
> Bartosz
>
> On Thu, Jul 5, 2018 at 4:01 AM, email@hidden
> <mailto:email@hidden> <email@hidden
> <mailto:email@hidden>> wrote:
> Are you using the C API or the Objective C API?
>
> Why do you have multiple 16-channel MIDISynth units running? You could
> possibly run out of CPU because they cannot steal voices from each other.
>
> If your MIDISynth code works for one bank but not another, I find it hard to
> imagine it is a configuration issue.
>
> Can you point me to the banks in question?
>
> -DS
>
> > On Jul 3, 2018, at 3:02 PM, Bartosz Nowotny <email@hidden
> > <mailto:email@hidden>> wrote:
> >
> > Hello
> >
> > I need advice on how to properly configure AudioUnits in my MIDISynth iOS
> > app.
> >
> > In my code I start by configuring AudioSession: I set the right category
> > (playback), preferred sample rate and buffer size and then start the
> > session.
> > Next up, I create the graph: multiple synth units
> > (kAudioUnitSubType_MIDISynth) -> multichannel mixer -> remote IO.
> > For mixer unit, I configure number of input elements (buses) and maximum
> > frames per slice.
> > For synth units, I configure the soundbank URL and maximum frames per slice.
> >
> > This set up is enough for my app to successfully produce music by sending
> > MIDI note on/off events to specific synth units. For some soundfonts, the
> > produced sound is not correct, as if it was distorted. Because the
> > soundfonts I'm using are popular and publicly available soundfonts, tested
> > on multiple devices and different synths, I'm pretty certain the soundfonts
> > are not at fault here. My best guess is that I'm missing parts of the
> > configuration:
> >
> > 1. Is any additional configuration required for any of the AudioUnits I
> > use? In particular, should I configure synth units output stream format, so
> > that for instance, the sample rate matches what is actually used by the
> > hardware? Should I also configure stream format for the mixer or IO units?
> > How should the stream format configs look like?
> > 2. If I do need to do the above configuration, how should I respond to
> > audio session route changes? I noticed, for instance, that plugging in
> > headphones changes the hardware output sample rate from 48kHz to 44.1kHz.
> >
> > Regards,
> > Bartosz
> >
> > _______________________________________________
> > Do not post admin requests to the list. They will be ignored.
> > Coreaudio-api mailing list (email@hidden
> > <mailto:email@hidden>)
> > Help/Unsubscribe/Update your Subscription:
> >
> >
> > This email sent to email@hidden <mailto:email@hidden>
>
>
>
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden