
Re: Different sample types between Simulator and device


  • Subject: Re: Different sample types between Simulator and device
  • From: Nathan Vonnahme <email@hidden>
  • Date: Sat, 21 Sep 2013 01:45:59 -0800

On Sep 13, 2013, at 3:47 PM, Doug Wyatt <email@hidden> wrote:

I can't emphasize enough: when you are dealing with AURemoteIO (or any AudioUnit for that matter), you should always be specific about the format you wish to use. In the case of AURemoteIO, you should always be using:

AudioStreamBasicDescription myRenderFormat;
... fill out myRenderFormat with the format you want to provide for output ...
OSStatus err = AudioUnitSetProperty(myRemoteIO, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &myRenderFormat, sizeof(myRenderFormat));

This specifies the format of the buffers you supply to the output unit. The implementation will convert to the format it is using. You can query the format of element 0's output scope to find out what *that* format is.

If you do not set a client format, you are at the mercy of the implementation's defaults, which will differ across routes, the device vs. simulator, and OS releases.
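To make that concrete, here is a minimal sketch of what "fill out myRenderFormat" might look like for the 16-bit signed-integer, interleaved stereo LPCM format discussed below; the specific field values are illustrative assumptions, not something prescribed by the thread:

    // Sketch only: explicitly describe 16-bit signed integer, interleaved
    // stereo LPCM at 44.1 kHz (illustrative values, not the only valid choice).
    AudioStreamBasicDescription myRenderFormat = {0};
    myRenderFormat.mSampleRate       = 44100.0;
    myRenderFormat.mFormatID         = kAudioFormatLinearPCM;
    myRenderFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    myRenderFormat.mChannelsPerFrame = 2;
    myRenderFormat.mBitsPerChannel   = 16;
    myRenderFormat.mBytesPerFrame    = 2 * sizeof(SInt16);   // channels * bytes per sample
    myRenderFormat.mFramesPerPacket  = 1;
    myRenderFormat.mBytesPerPacket   = myRenderFormat.mBytesPerFrame;

    OSStatus err = AudioUnitSetProperty(myRemoteIO, kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Input, 0,
                                        &myRenderFormat, sizeof(myRenderFormat));

    // Query element 0's output scope to see what format the unit itself uses.
    AudioStreamBasicDescription unitFormat;
    UInt32 propSize = sizeof(unitFormat);
    err = AudioUnitGetProperty(myRemoteIO, kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Output, 0, &unitFormat, &propSize);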

Thank you, Doug, for the reply, and I apologize for the lag in responding, but I was definitely already using that exact AudioUnitSetProperty line to set an ASBD with signed integers, both on the input scope's bus 0 and the output scope's bus 1 (I'm going mic -> AUGraph -> AUGraphAddRenderNotify callback). I set it using:

    audioFormat.mFormatFlags = kAudioFormatFlagsCanonical; // signed integers
    audioFormat.mBitsPerChannel = 16;

and both AudioUnitGetProperty(kAudioUnitProperty_StreamFormat) and CAShow (below) show signed ints.

AudioUnitGraph 0x25D0000:
  Member Nodes:
node 1: 'auou' 'rioc' 'appl', instance 0x7830b70 O  
  Connections:
node   1 bus   1 => node   1 bus   0  [ 2 ch,  44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
  CurrentState:
mLastUpdateError=0, eventsToProcess=F, isRunning=F
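(For reference, that dump comes from AudioToolbox's CAShow, called on the graph; myGraph below is an assumed variable name, not from the thread:)

    CAShow(myGraph);  // prints the graph's nodes, connections, and stream formats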

Curiously, I noticed tonight that the iOS 7 Simulator behaves the same as my devices, providing 32-bit float samples to my RenderNotify callback. But the iOS 6 and 5 Simulators still have SInt16s there, as I wrote previously.

I love getting the float samples, but I'm still confused about how I should handle the very different behaviors. Right now I have the following (using version-string parsing macros I found), and it works on all my simulators and test devices. But it creeps me out.

        if (TARGET_IPHONE_SIMULATOR && SYSTEM_VERSION_LESS_THAN(@"7.0")) {
            // In the Simulator before iOS 7 I get 1 buffer with 2 identical
            // channels containing AudioSampleType (SInt16) samples.
            AudioSampleType* samples = (AudioSampleType*)(ioData->mBuffers[0].mData);
            float* floatSamples = taas->analyzeBuffer;

            // Convert the native AudioSampleType (SInt16/short) samples to
            // floats so we can use more vector math functions. The stride of 2
            // discards every other sample (the right channel), but we still
            // have the same number of frames.
            vDSP_vflt16(samples, 2, floatSamples, 1, inNumberFrames);

            // Scale into the range -1..1 by dividing by the max int value.
            float scaleMax = (INT16_MAX + 1);
            vDSP_vsdiv(floatSamples, 1, &scaleMax, floatSamples, 1, inNumberFrames);

            taas->bytesFedToRingBuf = inNumberFrames * sizeof(float);
            assert(inNumberFrames * sizeof(float) == ioData->mBuffers[0].mDataByteSize);

            TPCircularBufferProduceBytes(ringBuffer, floatSamples, taas->bytesFedToRingBuf);
        }
        else {
            // Apparently on a real iOS device (and the iOS 7 Simulator) we
            // magically already have 32-bit floats here. Boom! There are
            // 2 buffers with 1 channel each, but I'll just use the left
            // (mBuffers[0]).
            float* samples = (float*)(ioData->mBuffers[0].mData);
            taas->bytesFedToRingBuf = inNumberFrames * sizeof(float);
            assert(inNumberFrames * sizeof(float) == ioData->mBuffers[0].mDataByteSize);

            TPCircularBufferProduceBytes(ringBuffer, samples, taas->bytesFedToRingBuf);
        }
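A less creepy alternative, sketched here with some assumed names (ioUnit for the RemoteIO instance, samplesAreFloat as a new field on my struct; neither is from the thread), would be to query the negotiated stream format once at setup time and branch on its flags instead of on the OS version:

    // Sketch: at setup time, ask the unit what format it will actually hand
    // the callback, and cache whether the samples are floats. Which scope and
    // bus to query depends on where the callback is attached; this uses the
    // output scope of bus 1 (the mic side), matching the CAShow dump above.
    AudioStreamBasicDescription asbd;
    UInt32 size = sizeof(asbd);
    OSStatus err = AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Output, 1, &asbd, &size);
    if (err == noErr) {
        taas->samplesAreFloat = ((asbd.mFormatFlags & kAudioFormatFlagIsFloat) != 0);
    }

    // Then in the render notify callback:
    if (taas->samplesAreFloat) {
        float* samples = (float*)(ioData->mBuffers[0].mData);
        // ... use the samples directly ...
    } else {
        // ... convert with vDSP_vflt16 and vDSP_vsdiv as above ...
    }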


On Sep 4, 2013, at 23:44 , Nathan Vonnahme <email@hidden> wrote:

Well, I was really hoping for more response to my big remaining question:

Does anyone know how the callback should decide whether to cast mData to float* (my devices) or SInt16* (Simulator)? I guess for now I will hardcode it using #if TARGET_IPHONE_SIMULATOR. There's nothing I see in the AUGraphAddRenderNotify params to indicate the format of mData...


References:
  • Re: Different sample types between Simulator and device (From: Nathan Vonnahme <email@hidden>)
  • Re: Different sample types between Simulator and device (From: Paul Davis <email@hidden>)
  • Re: Different sample types between Simulator and device (From: Nathan Vonnahme <email@hidden>)
  • Re: Different sample types between Simulator and device (From: Doug Wyatt <email@hidden>)
