
Re: ExtAudioFileOpenURL and associated functions...


  • Subject: Re: ExtAudioFileOpenURL and associated functions...
  • From: Doug Wyatt <email@hidden>
  • Date: Fri, 17 Apr 2009 10:33:18 -0700

CAStreamBasicDescription::SetAUCanonical would have helped here, even if only for reference if you didn't want to use C++.

In a non-interleaved PCM format, bytes per packet and bytes per frame are the size of a single sample, not multiplied by the channel count.

In an interleaved PCM format, bytes per packet and bytes per frame are, as you have coded, the size of nChannels samples.
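
For reference, the idea behind it is roughly this (a sketch, not the verbatim Apple source; check CAStreamBasicDescription.h in the SDK's PublicUtility folder for the real thing). Note that kAudioFormatFlagsAudioUnitCanonical already includes kAudioFormatFlagIsNonInterleaved:

	#include <CoreAudio/CoreAudioTypes.h>

	// Fill an ASBD with the AudioUnit canonical format (8.24 fixed on
	// the iPhone, Float32 on the desktop) for either channel layout.
	// mSampleRate is left for the caller to set.
	void SetCanonical(AudioStreamBasicDescription &asbd,
	                  UInt32 nChannels, bool interleaved)
	{
		asbd.mFormatID         = kAudioFormatLinearPCM;
		asbd.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
		asbd.mChannelsPerFrame = nChannels;
		asbd.mFramesPerPacket  = 1;
		asbd.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
		if (interleaved) {
			// Interleaved: one frame holds nChannels samples.
			asbd.mBytesPerPacket = asbd.mBytesPerFrame =
				nChannels * sizeof(AudioUnitSampleType);
			asbd.mFormatFlags &= ~kAudioFormatFlagIsNonInterleaved;
		} else {
			// Non-interleaved: the sizes describe one channel's sample.
			asbd.mBytesPerPacket = asbd.mBytesPerFrame =
				sizeof(AudioUnitSampleType);
		}
	}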

Make sense?

Doug

On Apr 17, 2009, at 9:03, Richard Burnett wrote:

Hello All!

I am trying to open an audio file and store the data in memory either as interleaved 8.24 fixed-point samples in one buffer, or as two buffers in a non-interleaved AudioBufferList, for use with an AudioUnit output. I am clearly doing something wrong, or just missing some key concept of what it is supposed to be doing.

I am assuming my problem is coming from kExtAudioFileProperty_ClientDataFormat and/or kExtAudioFileProperty_ClientChannelLayout.

The file I am loading is a 16-bit, 44.1 kHz interleaved WAV file. I create an AudioStreamBasicDescription to send as the ClientDataFormat and set it as such:

	outputFormat.mSampleRate       = 44100.0;
	outputFormat.mFormatID         = kAudioFormatLinearPCM;
	outputFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
	outputFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType) * 2;
	outputFormat.mFramesPerPacket  = 1;
	outputFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType) * 2;
	outputFormat.mChannelsPerFrame = 2;
	outputFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
	outputFormat.mReserved         = 0;
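
which I then send with ExtAudioFileSetProperty, roughly like this (audioFile being my already-opened ExtAudioFileRef):

	OSStatus err = ExtAudioFileSetProperty(audioFile,
	                   kExtAudioFileProperty_ClientDataFormat,
	                   sizeof(outputFormat), &outputFormat);
	// noErr means the internal converter accepted the client format;
	// anything else usually means the ASBD fields are inconsistent.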

Where I think I am getting confused is with how this controls ExtAudioFileRead. When using ExtAudioFileRead, how do I control how it fills the buffer? For example, how would I write interleaved stereo audio into a single buffer, or non-interleaved stereo audio into two buffers of an AudioBufferList?
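
For the non-interleaved case, what I am picturing is roughly this (audioFile again being my already-opened ExtAudioFileRef, and 4096 just an arbitrary chunk size):

	#include <AudioToolbox/AudioToolbox.h>
	#include <stdlib.h>

	// Two buffers, one per channel, for a non-interleaved stereo read.
	UInt32 framesToRead = 4096;
	AudioBufferList *abl = (AudioBufferList *)
	    malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer));
	abl->mNumberBuffers = 2;
	for (UInt32 i = 0; i < 2; ++i) {
		abl->mBuffers[i].mNumberChannels = 1;  // one channel per buffer
		abl->mBuffers[i].mDataByteSize   =
		    framesToRead * sizeof(AudioUnitSampleType);
		abl->mBuffers[i].mData = malloc(abl->mBuffers[i].mDataByteSize);
	}
	OSStatus err = ExtAudioFileRead(audioFile, &framesToRead, abl);
	// On return, framesToRead is the number of frames actually read.
	// Interleaved instead: mNumberBuffers = 1, mNumberChannels = 2,
	// mDataByteSize = framesToRead * 2 * sizeof(AudioUnitSampleType).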

I have searched through the documentation, but I don't quite understand how to control all of this. I think the code I took this from was originally treating everything as mono, so I converted everything to work in stereo in the AudioUnit, which I have tested and which works (I wrote a function inside my buffer-filling function to just generate an 8.24 fixed-point sine wave).

When I send the buffer contents to the buffer-filling routine, I can kind of hear the sound from the audio file, mixed with A LOT of noise, glitching, and panning issues, so clearly I am doing something VERY wrong! :)

Any help would be greatly appreciated.
Rick
