
Distorted sound after sample rate change


  • Subject: Distorted sound after sample rate change
  • From: Sebastian Reimann <email@hidden>
  • Date: Fri, 23 Nov 2012 18:42:47 +0100

This one keeps me awake:
I have an OS X audio application that has to react when the user changes the current sample rate of the device.
To do this I register a listener for both the input and the output device on 'kAudioDevicePropertyNominalSampleRate'.
So if one of the devices' sample rates is changed, I get the callback and set the new sample rate on both devices with 'AudioObjectSetPropertyData' and 'kAudioDevicePropertyNominalSampleRate' as the selector.
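
For reference, this is roughly the registration and rate change described above (a minimal sketch; 'deviceID', 'MySRListener', and 'RegisterAndSetRate' are placeholder names, error handling omitted):

#include <CoreAudio/CoreAudio.h>

static OSStatus MySRListener(AudioObjectID inObjectID, UInt32 inNumberAddresses,
                             const AudioObjectPropertyAddress inAddresses[],
                             void *inClientData)
{
    // React to the rate change here; the actual reconfiguration happens
    // on a non-realtime thread, not inside this listener.
    return noErr;
}

static void RegisterAndSetRate(AudioObjectID deviceID, Float64 newRate)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyNominalSampleRate,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectAddPropertyListener(deviceID, &addr, MySRListener, NULL);

    // The nominal rate itself is just a Float64 behind the same address.
    AudioObjectSetPropertyData(deviceID, &addr, 0, NULL,
                               sizeof(newRate), &newRate);
}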
The next steps were mentioned on this mailing list, and I followed them (a rough sketch of the whole sequence follows the list):

  • stop the input AudioUnit and the AUGraph, which consists of a mixer and the output AudioUnit
  • uninitialize them both
  • check the node count, step over the nodes, and use AUGraphDisconnectNodeInput to disconnect the mixer from the output
  • now set the new sample rate on the output scope of the input unit
  • and on the in- and output scope of the mixer unit
  • reconnect the mixer node to the output unit
  • update the graph
  • initialize the input unit and the graph
  • start the input unit and the graph
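
Spelled out in code, the sequence looks about like this (a sketch under my setup of one mixer node feeding one output node; the g-prefixed globals and SetUnitSampleRate are placeholders, error checks omitted):

#include <AudioToolbox/AudioToolbox.h>

// Assumed to be set up elsewhere during normal initialization.
extern AUGraph   gGraph;
extern AudioUnit gInputUnit, gMixerUnit;
extern AUNode    gMixerNode, gOutputNode;

// Helper: re-read the ASBD, patch only the rate, write it back.
static void SetUnitSampleRate(AudioUnit u, AudioUnitScope scope,
                              AudioUnitElement elem, Float64 rate)
{
    AudioStreamBasicDescription asbd = {0};
    UInt32 size = sizeof(asbd);
    AudioUnitGetProperty(u, kAudioUnitProperty_StreamFormat,
                         scope, elem, &asbd, &size);
    asbd.mSampleRate = rate;   // keep channels/format flags as they were
    AudioUnitSetProperty(u, kAudioUnitProperty_StreamFormat,
                         scope, elem, &asbd, sizeof(asbd));
}

static void ReconfigureForNewRate(Float64 newRate)
{
    AudioOutputUnitStop(gInputUnit);
    AUGraphStop(gGraph);
    AudioUnitUninitialize(gInputUnit);
    AUGraphUninitialize(gGraph);

    // Break the mixer -> output connection so the formats can change.
    AUGraphDisconnectNodeInput(gGraph, gOutputNode, 0);

    // New rate on the input unit's output scope (bus 1 on an AUHAL input
    // unit) and on both scopes of the mixer.
    SetUnitSampleRate(gInputUnit, kAudioUnitScope_Output, 1, newRate);
    SetUnitSampleRate(gMixerUnit, kAudioUnitScope_Input,  0, newRate);
    SetUnitSampleRate(gMixerUnit, kAudioUnitScope_Output, 0, newRate);

    AUGraphConnectNodeInput(gGraph, gMixerNode, 0, gOutputNode, 0);
    AUGraphUpdate(gGraph, NULL);

    AudioUnitInitialize(gInputUnit);
    AUGraphInitialize(gGraph);
    AudioOutputUnitStart(gInputUnit);
    AUGraphStart(gGraph);
}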

The render and output callbacks start again, but now the audio is distorted. I believe it's the input render callback that is responsible for the signal, but I'm not sure.
What did I forget?
The sample rate doesn't affect the buffer size, as far as I know.
If I start my application with the other sample rate, everything is fine; it's the change that leads to the distorted signal.
I look at the stream format (kAudioUnitProperty_StreamFormat) before and after the change. Everything stays the same except the sample rate, which of course changes to the new value.
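
This is the check I'm doing on each unit (minimal sketch; 'DumpFormat' is a placeholder, and the scope/element are whatever applies to the unit being inspected):

#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

static void DumpFormat(AudioUnit unit, AudioUnitScope scope,
                       AudioUnitElement elem)
{
    AudioStreamBasicDescription asbd = {0};
    UInt32 size = sizeof(asbd);
    if (AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat,
                             scope, elem, &asbd, &size) == noErr)
        printf("rate=%.0f ch=%u bytes/frame=%u\n", asbd.mSampleRate,
               (unsigned)asbd.mChannelsPerFrame,
               (unsigned)asbd.mBytesPerFrame);
}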

As I said, I think it's the input render callback that needs to be changed. Do I have to notify the callback that more samples are needed? I checked the callbacks and buffer sizes at 44.1 kHz and 48 kHz, and nothing was different.

I wrote a small test application, so if you want me to provide code, I can show you.

I recorded the distorted audio (a sine) and looked at it in Audacity.
What I found was that after every 495 samples, the audio drops out for another 17 samples. I think you see where this is going: 495 samples + 17 samples = 512 samples, which is the buffer size of my devices.
But I still don't know what to do with this finding.
I checked my input and output render procs and their access to the ring buffer (I'm using the fixed version of CARingBuffer).
Both store and fetch 512 frames, so nothing is missing there...
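
For completeness, this is the store/fetch pattern in my render procs (sketch only; the g-prefixed globals and the time-offset bookkeeping are placeholders, and that offset is the one piece of state I can imagine going stale across the restart):

#include "CARingBuffer.h"   // the fixed CAPU version mentioned above

extern CARingBuffer    *gRingBuffer;
extern AudioUnit        gInputUnit;
extern AudioBufferList *gInputABL;
extern SInt64           gInToOutOffset;   // measured once at startup

static OSStatus InputRenderProc(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber, UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    // Pull the fresh input frames, then store them at the HAL's sample time.
    OSStatus err = AudioUnitRender(gInputUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, gInputABL);
    if (err == noErr)
        err = gRingBuffer->Store(gInputABL, inNumberFrames,
                                 (SInt64)inTimeStamp->mSampleTime);
    return err;
}

static OSStatus OutputRenderProc(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber, UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // Fetch at the output's sample time minus the in/out offset; if this
    // offset were stale after the rate change, every 512-frame fetch could
    // straddle frames that were never written, i.e. periodic gaps.
    return gRingBuffer->Fetch(ioData, inNumberFrames,
                              (SInt64)inTimeStamp->mSampleTime
                                  - gInToOutOffset);
}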


