Re: How to fill the ioData parameter in the Audio Unit rendering call back?


  • Subject: Re: How to fill the ioData parameter in the Audio Unit rendering call back?
  • From: james mccartney <email@hidden>
  • Date: Wed, 18 Mar 2009 10:35:07 -0700


Sorry, I got distracted...

On Mar 13, 2009, at 7:05 AM, Xu Ting wrote:

I'm not sure if I'm understanding how you have this set up, so perhaps this
next advice is useless.
If you have two files and you want to have each one come out a separate
speaker, then you should be able to do this directly in the code that fills
the ring buffer. It would read the same number of frames from each file and
fill the two channels from the corresponding two files. You would not need a
mixer then.

I am not quite understanding you here. Do you mean that, depending on the interleaved or non-interleaved status, I can directly feed the left channel data into ioData->mBuffers[0].mData and the right channel data into mBuffers[1].mData?

Yes, that is exactly right.
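
A minimal sketch of what that looks like in the render callback, assuming a non-interleaved Float32 stream format; the RingBuffer type, ReadRingBuffer() and the two globals are hypothetical placeholders, not code from this thread:

    #include <string.h>
    #include <AudioUnit/AudioUnit.h>

    // Hypothetical ring-buffer type and reader, standing in for whatever
    // the real code uses to buffer the two decoded files.  The stub just
    // writes silence so the sketch compiles on its own.
    typedef struct { int unused; } RingBuffer;
    static RingBuffer gLeftRing, gRightRing;

    static void ReadRingBuffer(RingBuffer *rb, Float32 *dst, UInt32 frames)
    {
        (void)rb;
        memset(dst, 0, frames * sizeof(Float32));
    }

    // With a non-interleaved Float32 stream format, mBuffers[0] is the left
    // channel and mBuffers[1] is the right channel, so each file can be
    // copied straight into its own channel buffer; no mixer is required.
    static OSStatus RenderTwoFiles(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData)
    {
        Float32 *left  = (Float32 *)ioData->mBuffers[0].mData;
        Float32 *right = (Float32 *)ioData->mBuffers[1].mData;

        ReadRingBuffer(&gLeftRing,  left,  inNumberFrames);   // file A -> left
        ReadRingBuffer(&gRightRing, right, inNumberFrames);   // file B -> right

        return noErr;
    }

Since each AudioBuffer already maps to one output channel, no mixer unit is needed, which is the point made above.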

...


          // Here is the most difficult part for me to understand.
          // Q1: Why does it fill the outA and outB with the same bus's data

it is duplicating the mono sine wave into two stereo channels.

it is not two times inNumberFrames.
Each channel is a separate buffer, and each one gets filled with
inNumberFrames samples.
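
For readers without the quoted sample in front of them, the loop that comment refers to is along these lines; this is a reconstruction under assumptions (Float32 non-interleaved output, a 440 Hz tone, a file-scope phase variable), not the original example code:

    #include <math.h>
    #include <CoreAudio/CoreAudioTypes.h>

    // gPhase, the 440 Hz frequency and the 44.1 kHz sample rate are all
    // assumptions for illustration.
    static double gPhase = 0.0;

    static void FillStereoWithMonoSine(AudioBufferList *ioData, UInt32 inNumberFrames)
    {
        Float32 *outA = (Float32 *)ioData->mBuffers[0].mData;   // left channel
        Float32 *outB = (Float32 *)ioData->mBuffers[1].mData;   // right channel
        const double step = 2.0 * M_PI * 440.0 / 44100.0;

        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            Float32 s = (Float32)sin(gPhase);
            outA[i] = s;              // the same mono sample is written to
            outB[i] = s;              // both channels: that is the duplication
            gPhase += step;
        }
    }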

Well, I have to confirm it: do you mean that ioData->mBuffers[0] is the left channel and ioData->mBuffers[1] is the right channel of the output stream?

yes.


If so, it will really make things easier. [The documentation didn't say anything about it; maybe it mentions it somewhere, but I could not find it even using Spotlight.]

And still, I checked ioData->mBuffers[0].mDataByteSize: when
inNumberFrames is 512, for example, mDataByteSize is 2048, which is
half the size I read from my ring buffer. Only mBuffers[0] holding
just the left or the right channel can explain this. Am I right
about this?

yes
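
The arithmetic behind that figure, assuming the canonical non-interleaved Float32 format (4 bytes per sample, one channel per buffer), is 512 frames * 4 bytes * 1 channel = 2048 bytes per buffer. A sanity check one could drop into a callback, as a sketch:

    #include <assert.h>
    #include <CoreAudio/CoreAudioTypes.h>

    // With non-interleaved Float32 data each AudioBuffer holds one channel,
    // so 512 frames * 4 bytes * 1 channel = 2048 bytes per buffer.
    static void CheckBufferSize(const AudioBufferList *ioData, UInt32 inNumberFrames)
    {
        const AudioBuffer *b = &ioData->mBuffers[0];
        assert(b->mDataByteSize == inNumberFrames * sizeof(Float32) * b->mNumberChannels);
    }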



I checked the Core Audio Overview and it points out that one frame
includes 2 samples, a left-channel sample and a right-channel one. So
when you said "each one gets filled with inNumberFrames samples", that
means inNumberFrames samples for one channel. Am I right about this?

yes
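
To make the interleaved versus non-interleaved distinction concrete, here are the two stream descriptions side by side. The Float32 sample format and 44.1 kHz rate are assumptions for illustration, not details stated in the thread:

    #include <CoreAudio/CoreAudioTypes.h>

    // Interleaved: one buffer, frames laid out as L R L R ..., so one frame
    // is 8 bytes (2 channels * 4 bytes).
    static AudioStreamBasicDescription InterleavedStereoFloat(void)
    {
        AudioStreamBasicDescription d = {0};
        d.mSampleRate       = 44100.0;
        d.mFormatID         = kAudioFormatLinearPCM;
        d.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
        d.mChannelsPerFrame = 2;
        d.mBitsPerChannel   = 32;
        d.mFramesPerPacket  = 1;
        d.mBytesPerFrame    = 8;   // both samples of a frame in one buffer
        d.mBytesPerPacket   = 8;
        return d;
    }

    // Non-interleaved: same stream, but each channel lives in its own
    // AudioBuffer, so the per-buffer figures drop to 4 bytes per frame and
    // 512 frames give 512 * 4 = 2048 bytes in mBuffers[0] (and in mBuffers[1]).
    static AudioStreamBasicDescription NonInterleavedStereoFloat(void)
    {
        AudioStreamBasicDescription d = InterleavedStereoFloat();
        d.mFormatFlags   |= kAudioFormatFlagIsNonInterleaved;
        d.mBytesPerFrame  = 4;
        d.mBytesPerPacket = 4;
        return d;
    }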




Many thanks!

Tonny Xu

  • Follow-Ups:
    • Re: How to fill the ioData parameter in the Audio Unit rendering call back?
      • From: Xu Ting <email@hidden>
  • References:
    • How to fill the ioData parameter in the Audio Unit rendering call back? (From: Xu Ting <email@hidden>)
    • Re: How to fill the ioData parameter in the Audio Unit rendering call back? (From: james mccartney <email@hidden>)
    • Re: How to fill the ioData parameter in the Audio Unit rendering call back? (From: Xu Ting <email@hidden>)
