
Which channels are used for what?


  • Subject: Which channels are used for what?
  • From: Steve Checkoway <email@hidden>
  • Date: Tue, 23 May 2006 01:53:37 -0700

I'm trying to determine what each channel in a stream is used for. I know I can use kAudioDevicePropertyStreamConfiguration to get an AudioBufferList like the one that will be passed to the IO proc. When I look at the documentation for AudioBufferList I see:

typedef struct AudioBufferList {
    UInt32      mNumberBuffers;
    AudioBuffer mBuffers[1];
} AudioBufferList;

so what I wonder is: when I use AudioDeviceGetProperty(), do I pass sizeof(AudioBufferList) and a pointer to a single struct, in which case mBuffers is really a pointer rather than an array (unlikely, unless there is some static memory somewhere backing it), or do I need to call AudioDeviceGetPropertyInfo() first to get the size and allocate that much memory (more likely)? (Either works in the only case I can test, since mNumberBuffers is one for me.)
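
For what it's worth, here is the pattern I'm assuming is the intended one (an untested sketch, all error checking omitted, where device is just some valid AudioDeviceID):

#include <CoreAudio/CoreAudio.h>
#include <stdlib.h>

/* Untested sketch: ask for the size first, then fetch the variable-length list. */
static AudioBufferList *CopyStreamConfiguration(AudioDeviceID device, Boolean isInput)
{
    UInt32 size = 0;
    Boolean writable;
    /* How many bytes does this device's AudioBufferList occupy? */
    AudioDeviceGetPropertyInfo(device, 0, isInput,
                               kAudioDevicePropertyStreamConfiguration,
                               &size, &writable);

    AudioBufferList *abl = (AudioBufferList *)malloc(size);
    /* mBuffers[1] is really a variable-length array, hence the malloc above. */
    AudioDeviceGetProperty(device, 0, isInput,
                           kAudioDevicePropertyStreamConfiguration,
                           &size, abl);
    return abl; /* the caller owns and frees this */
}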


Now that I have the list, I can use kAudioDevicePropertyPreferredChannelLayout to get an AudioChannelLayout. (I assume the analogous question of how to actually get this has the same answer as above.) Once I have the layout, I can determine how each channel should be used. How do those channels map to the ones in the AudioBufferList?
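
Assuming the same get-the-size-first pattern applies here too (again untested, error checking omitted):

/* Untested sketch: the preferred channel layout is also variable-length. */
UInt32 size = 0;
Boolean writable;
AudioDeviceGetPropertyInfo(device, 0, false,
                           kAudioDevicePropertyPreferredChannelLayout,
                           &size, &writable);
AudioChannelLayout *layout = (AudioChannelLayout *)malloc(size);
AudioDeviceGetProperty(device, 0, false,
                       kAudioDevicePropertyPreferredChannelLayout,
                       &size, layout);
/* When layout->mChannelLayoutTag is kAudioChannelLayoutTag_UseChannelDescriptions,
   layout->mChannelDescriptions[i].mChannelLabel says what channel i is for.
   What I don't know is whether that i is the same i as in the AudioBufferList. */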

After that, I have the AudioBufferList and I know how each channel is used, so it's time to deal with the format of each channel, except that formats belong to streams. I can get the stream IDs using kAudioDevicePropertyStreams, and then with kAudioDevicePropertyStreamFormat (or kAudioStreamPropertyVirtualFormat, if you prefer) I can get an AudioStreamBasicDescription for each stream. Finally, I can use kAudioStreamPropertyStartingChannel to figure out which device channel the first of the stream's (possibly many) channels corresponds to. Combined with the above, I know how that first channel should be used. What about the rest of the channels in the stream? To which of the 1-indexed device channels do they correspond?
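
Concretely, I picture something like the following (untested, error checking omitted), and my question is the comment at the bottom of the loop:

/* Untested sketch: walk the output streams, noting each one's virtual format
   and starting channel. */
UInt32 size = 0;
Boolean writable;
AudioDeviceGetPropertyInfo(device, 0, false, kAudioDevicePropertyStreams,
                           &size, &writable);
UInt32 streamCount = size / sizeof(AudioStreamID);
AudioStreamID *streams = (AudioStreamID *)malloc(size);
AudioDeviceGetProperty(device, 0, false, kAudioDevicePropertyStreams,
                       &size, streams);

for (UInt32 i = 0; i < streamCount; ++i) {
    AudioStreamBasicDescription format;
    UInt32 formatSize = sizeof(format);
    AudioStreamGetProperty(streams[i], 0, kAudioStreamPropertyVirtualFormat,
                           &formatSize, &format);

    UInt32 startingChannel = 0;
    UInt32 channelSize = sizeof(startingChannel);
    AudioStreamGetProperty(streams[i], 0, kAudioStreamPropertyStartingChannel,
                           &channelSize, &startingChannel);

    /* My guess: channel c of this stream (counting from 1 within the stream)
       is device channel startingChannel + c - 1. Is that right? */
}
free(streams);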

Penultimately, given all of the above, let's say that for simplicity we have interleaved stereo LPCM data and a device like the Mbox that Jeff mentioned a few days ago, which presents itself as a collection of mono channels, each of which presumably has its own format. We want to send the data to the left-front and right-front channels (I don't know anything about this device, so I'm making this example up), or to whatever the preferred stereo channels are (as returned by kAudioDevicePropertyPreferredChannelsForStereo, which the documentation says can be any two channels), and we want silence on the other channels. How would we set up an AudioConverter to handle this difference? The input format is simple, but the output format seems much trickier. The two channels could be in separate streams, the documentation seems to imply that each of those streams could contain other channels, and each stream could be in a different format. AudioConverterFillComplexBuffer seems like it is designed to handle this gracefully, since it takes a pointer to an AudioBufferList.
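
The only setup I can picture is a channel map, and that only works if every output channel shares a single ASBD, which is exactly what I'm not sure I can count on. Something like this (untested; deviceFormat/deviceChannels describe the output, stereoFormat is my interleaved stereo LPCM input, and leftFront/rightFront are the preferred stereo channels converted to 0-based indices):

AudioConverterRef converter = NULL;
AudioConverterNew(&stereoFormat, &deviceFormat, &converter);

SInt32 *map = (SInt32 *)malloc(deviceChannels * sizeof(SInt32));
for (UInt32 i = 0; i < deviceChannels; ++i)
    map[i] = -1;            /* -1 = no input routed to this output channel,
                               i.e. silence (if I'm reading the header right) */
map[leftFront]  = 0;        /* input channel 0 -> preferred left channel */
map[rightFront] = 1;        /* input channel 1 -> preferred right channel */
AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                          deviceChannels * sizeof(SInt32), map);
free(map);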

Lastly, the callback for AudioConverterFillComplexBuffer takes a pointer to a pointer to an AudioStreamPacketDescription, and AudioConverterFillComplexBuffer itself takes a pointer to one. The documentation has this to say about it:

The resulting packet format is specified in outDataPacketDescription.

I'm not sure what that means. To make matters worse, I have two contradictory pieces of documentation about that struct.


typedef struct AudioStreamPacketDescription {
    SInt64 mStartOffset;
    UInt64 mLength;
} AudioStreamPacketDescription;

This one has no description. The other version does have one:

This structure describes the packet layout of a buffer of data where the size of each packet may not be the same or where there is extraneous data between packets.

struct AudioStreamPacketDescription {
    SInt64 mStartOffset;
    UInt32 mVariableFramesInPacket;
    UInt32 mDataByteSize;
};
Field Descriptions

mStartOffset
    The number of bytes from the start of the buffer to the beginning of the packet.
mVariableFramesInPacket
    The number of sample frames of data in the packet. For formats with a constant number of frames per packet, this field is set to 0.
mDataByteSize
    The number of bytes in the packet.

From this, it looks like I am supposed to have something like the following in my callback.


/* { mStartOffset, mVariableFramesInPacket (0 = constant frames per packet), mDataByteSize } */
static struct AudioStreamPacketDescription aspd = { 0, 0, bytesPerPacket };

*outDataPacketDescription = &aspd;
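
In context, I imagine the whole input proc looking something like this (untested; SourceBuffer is just a made-up struct for whatever data I'm feeding the converter, and real code would also advance the buffer between calls):

typedef struct {
    void   *data;              /* pending interleaved LPCM samples */
    UInt32  packetsAvailable;  /* packets (frames, for LPCM) left to hand out */
    UInt32  bytesPerPacket;
    UInt32  channels;
} SourceBuffer;                /* made-up bookkeeping struct, not a CoreAudio type */

static OSStatus InputProc(AudioConverterRef inConverter,
                          UInt32 *ioNumberDataPackets,
                          AudioBufferList *ioData,
                          AudioStreamPacketDescription **outDataPacketDescription,
                          void *inUserData)
{
    SourceBuffer *src = (SourceBuffer *)inUserData;
    UInt32 packets = *ioNumberDataPackets;
    if (packets > src->packetsAvailable)
        packets = src->packetsAvailable;

    ioData->mBuffers[0].mNumberChannels = src->channels;
    ioData->mBuffers[0].mDataByteSize   = packets * src->bytesPerPacket;
    ioData->mBuffers[0].mData           = src->data;
    *ioNumberDataPackets = packets;

    /* Is this needed at all for constant-size LPCM packets, or only for
       formats whose packets vary in size? */
    if (outDataPacketDescription) {
        static AudioStreamPacketDescription aspd;
        aspd.mStartOffset            = 0;
        aspd.mVariableFramesInPacket = 0;   /* 0 = constant frames per packet */
        aspd.mDataByteSize           = src->bytesPerPacket;
        *outDataPacketDescription = &aspd;
    }
    return noErr;
}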



I appreciate any help anyone can provide for these questions.

Thank you,
Steve


