
Re: Different sample types between Simulator and device


  • Subject: Re: Different sample types between Simulator and device
  • From: Will Pragnell <email@hidden>
  • Date: Fri, 30 Aug 2013 19:43:16 +0100

> I'm looking at ioData->mBuffers[i].mNumberChannels for the number of channels, and it always contains 1 on the device (but 2 on the simulator); should I ignore that in favor of what the ASBD says? 

Note that mChannelsPerFrame means something different depending on whether you're dealing with interleaved or non-interleaved audio. From CoreAudioTypes.h:

                    Typically, when an ASBD is being used, the fields describe the complete layout
                    of the sample data in the buffers that are represented by this description -
                    where typically those buffers are represented by an AudioBuffer that is
                    contained in an AudioBufferList.

                    However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the
                    AudioBufferList has a different structure and semantic. In this case, the ASBD
                    fields will describe the format of ONE of the AudioBuffers that are contained in
                    the list, AND each AudioBuffer in the list is determined to have a single (mono)
                    channel of audio data. Then, the ASBD's mChannelsPerFrame will indicate the
                    total number of AudioBuffers that are contained within the AudioBufferList -
                    where each buffer contains one channel. This is used primarily with the
                    AudioUnit (and AudioConverter) representation of this list - and won't be found
                    in the AudioHardware usage of this structure.
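The two layouts described above can be sketched like this (a minimal illustration using stand-in structs, not the real CoreAudio headers — the field names mirror AudioBuffer/AudioBufferList but the definitions here are simplified):

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in structs mirroring the shape of CoreAudio's AudioBuffer and
   AudioBufferList, just to illustrate the two layouts. */
typedef struct {
    uint32_t mNumberChannels;
    uint32_t mDataByteSize;
    void    *mData;
} Buffer;

typedef struct {
    uint32_t mNumberBuffers;
    Buffer   mBuffers[8];
} BufferList;

/* Interleaved: one buffer holding all channels, so mNumberChannels of that
   buffer is the total channel count.
   Non-interleaved: one mono buffer per channel, so the buffer count is the
   channel count (and matches the ASBD's mChannelsPerFrame). */
static uint32_t totalChannels(const BufferList *abl, bool nonInterleaved) {
    return nonInterleaved ? abl->mNumberBuffers
                          : abl->mBuffers[0].mNumberChannels;
}
```

So a stereo stream shows up either as one buffer with mNumberChannels == 2, or as two mono buffers — which would explain seeing 1 on the device and 2 in the Simulator.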

I haven't read this thread in detail as I don't have time right now, but this seems pertinent. Hope this helps!

Will



On 30 August 2013 19:32, Nathan Vonnahme <email@hidden> wrote:

On Aug 30, 2013, at 9:40 AM, Douglas Scott <email@hidden> wrote:

The first printout is 16bit  stereo data with identical (but very low amplitude) audio in both channels.  The second printout is 16bit stereo data with silence in the left channel (even slots) and audio in the right channel (odd slots).  The right-channel values do not look odd to me -- perfect range for 16bit audio.  This looks like some sort of mixer difference, or channel mapping difference (I don't have enough information to know which).

Thanks a lot for the reply, Douglas!

I'm looking at ioData->mBuffers[i].mNumberChannels for the number of channels, and it always contains 1 on the device (but 2 on the simulator); should I ignore that in favor of what the ASBD says? 

If I just take every other 16 bits like so for the second (right) channel, which I think you're suggesting:

            // Convert native SInt16 (short) to floats so we can use more vector math functions
            if (channels == 1) {
                // Note: pointer arithmetic on an SInt16* advances in elements, not bytes
                vDSP_vflt16(samples + bytesPerChannel / sizeof(SInt16)  // beginning of the second channel
                            , 2, floatSamples, 1, L);
            }


I still get really chunky data, jumping noisily between -1, -0.5, 0, 0.5, and 1. I have a debugging routine to graph the data, and it looks very different from the sine wave I see in the Simulator.


(lldb) frame variable floatSamples
(float *) floatSamples = 0x03eaa000 [0,0,0,-1.00003,-1.00003,-1.00003,0.500015,0,-0.500015,0.500015,0.500015,-1.00003,0,0,0,-1.00003,-1.00003,0,-1.00003,0,0,0,0,0,0,-1.00003,-1.00003,-1.00003,0,-0.500015,-1.00003,-1.00003,0.500015,0.500015,-1.00003,0,0,0,-1.00003,-1.00003,0,0,0,0,0,0,0,-1.00003,0,-1.00003,-0.500015,-0.500015,-0.500015,-1.00003,0.500015,-0.500015,-0.500015,-0.500015,-1.00003,0,0,0,0,-1.00003]
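(As an aside on the stride arithmetic: extracting the right channel of interleaved 16-bit stereo means starting one sample in and stepping by two — and since C pointer arithmetic on an SInt16* already advances in elements rather than bytes, the offset should be one element, not bytesPerChannel bytes. A plain-C sketch of the intended operation, without vDSP:)

```c
#include <stdint.h>
#include <stddef.h>

/* Copy the right channel of interleaved 16-bit stereo into floats.
   samples points at frame 0's left sample; the right channel starts at
   samples + 1 (one ELEMENT, i.e. two bytes) and repeats every 2 samples. */
static void rightChannelToFloat(const int16_t *samples, float *out, size_t frames) {
    for (size_t n = 0; n < frames; n++)
        out[n] = (float)samples[2 * n + 1];
}
```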


This is my whole messy render callback in case it makes things clearer. I'd also love to hear if I'm doing something else wrong.


extern OSStatus tonallyAnalyzeCallback(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 L, // was inNumberFrames
                                       AudioBufferList *ioData) {

    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {

        // Cast to our tracking struct
        TonallyAudioAnalyzerStruct *taas = (TonallyAudioAnalyzerStruct *) inRefCon;

        float *floatSamples = taas->analyzeBuffer;
        float *scratchBuffer = taas->scratchBuffer;

        for (int i = 0; i < ioData->mNumberBuffers; i++) {

            short channels = ioData->mBuffers[i].mNumberChannels;
            short bytesPerChannel = taas->asbd.mBytesPerFrame / taas->asbd.mChannelsPerFrame;

            SInt16 *samples = (SInt16 *)(ioData->mBuffers[i].mData);

            // Convert native SInt16 (short) to floats so we can use more vector math functions
            if (channels == 1) {
//                vDSP_vflt16(samples, 1, floatSamples, 1, L);
                // Note: pointer arithmetic on an SInt16* advances in elements, not bytes
                vDSP_vflt16(samples + bytesPerChannel / sizeof(SInt16)  // beginning of the second channel
                            , 2, floatSamples, 1, L);
            }
            // If it's stereo, convert to mono.
            // Add in the second channel and divide by 2 so the amplitudes aren't doubled.
            else {
                vDSP_vflt16(samples, 2, floatSamples, 1, L);
                vDSP_vflt16(samples + bytesPerChannel / sizeof(SInt16)  // beginning of the second channel
                            , 2, scratchBuffer, 1, L);
                float monoScale = 0.5;
                // Add vectors and multiply by a scalar in one fell swoop.
                vDSP_vasm(floatSamples, 1, scratchBuffer, 1, &monoScale, floatSamples, 1, L);
            }

            // Scale to the range [-1, 1] by dividing by the max SInt16 value
            float scaleMax = INT16_MAX;
            vDSP_vsdiv(floatSamples, 1, &scaleMax, floatSamples, 1, L);

            TPCircularBuffer *ringBuffer = &taas->buffer;
//            NSLog(@"Jamming %ld frames OF FLOATS into the circular buffer.", L);
            int numBytesFedToRingBuf = L * sizeof(float);
            TPCircularBufferProduceBytes(ringBuffer, floatSamples, numBytesFedToRingBuf);

            // Notify we can read out of the ring buffer.
            // Is this okay? It seems to work.
            dispatch_async(dispatch_get_main_queue(), ^{
                [[NSNotificationCenter defaultCenter] postNotificationName:kTAAHasAudioNotification object:nil];
            });

            // Let the audio play in test mode
            if (taas->inputType == kTAAInputTypeLive) {
                // Silence the output of the audio unit
//                memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
                zeroBuffer(ioData->mBuffers[i].mData, ioData->mBuffers[i].mDataByteSize);
            }
        }

        // Is this okay? It seems to work.
        dispatch_async(dispatch_get_main_queue(), ^{
            [[NSNotificationCenter defaultCenter] postNotificationName:kTACAudioUpdatedNotification object:nil];
        });
    }

    return noErr;
}
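(For what it's worth, the vDSP_vflt16 / vDSP_vasm / vDSP_vsdiv sequence in the stereo branch is equivalent to this plain-C loop — a sketch without Accelerate, shown only to make the intended math explicit:)

```c
#include <stdint.h>
#include <stddef.h>

/* Mix interleaved 16-bit stereo down to mono floats in [-1, 1]:
   average left and right (add, then scale by 0.5), then divide by
   INT16_MAX — the same math as the vDSP_vflt16 + vDSP_vasm +
   vDSP_vsdiv sequence above. */
static void stereoToMonoNormalized(const int16_t *samples, float *out, size_t frames) {
    for (size_t n = 0; n < frames; n++) {
        float left  = (float)samples[2 * n];
        float right = (float)samples[2 * n + 1];
        out[n] = 0.5f * (left + right) / (float)INT16_MAX;
    }
}
```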



 _______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden



--

Will Pragnell


  • Follow-Ups:
    • Re: Different sample types between Simulator and device
      • From: Nathan Vonnahme <email@hidden>
  • References:
    • Different sample types between Simulator and device (From: Nathan Vonnahme <email@hidden>)
    • Re: Different sample types between Simulator and device (From: Douglas Scott <email@hidden>)
    • Re: Different sample types between Simulator and device (From: Nathan Vonnahme <email@hidden>)
