Re: CoreAudio kernel interface
- Subject: Re: CoreAudio kernel interface
- From: Jeff Moore <email@hidden>
- Date: Fri, 17 Jun 2005 11:31:06 -0700
On Jun 17, 2005, at 9:59 AM, Daniel Mack wrote:
Hi all,
I've somehow gotten stuck trying to understand how the CoreAudio kernel interface is supposed to be used, especially how certain parameters play together.
To keep things easy, I started to write a driver which has only one sample rate, one bit depth, 2 channels and 2 audio streams. So my parameters are

#define NUM_SAMPLE_FRAMES 16384
#define NUM_CHANNELS 2
#define BIT_DEPTH 16
#define BYTES_PER_SAMPLE (BIT_DEPTH / 8)
In my AudioEngine, I created the IOAudioStreams and called

audioStream->setSampleBuffer(sampleBuffer, sampleBufferSize);

on them, where sampleBufferSize is (NUM_CHANNELS * NUM_SAMPLE_FRAMES * BYTES_PER_SAMPLE).
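For reference, that setup might look roughly like the following in a hypothetical MyAudioEngine::initHardware(). This is only a sketch: the 44.1 kHz rate, the format fields and every name outside IOAudioFamily are placeholders, and error handling, controls and the second stream are omitted.

#define SAMPLE_BUFFER_SIZE (NUM_CHANNELS * NUM_SAMPLE_FRAMES * BYTES_PER_SAMPLE)

bool MyAudioEngine::initHardware(IOService *provider)
{
    if (!IOAudioEngine::initHardware(provider)) {
        return false;
    }

    // One trip around the ring buffer is NUM_SAMPLE_FRAMES sample frames.
    setNumSampleFramesPerBuffer(NUM_SAMPLE_FRAMES);

    IOAudioSampleRate rate = { 44100, 0 };   // single, fixed sample rate (placeholder)
    setSampleRate(&rate);

    // sampleBuffer is assumed to be a member of MyAudioEngine.
    sampleBuffer = (SInt16 *)IOMalloc(SAMPLE_BUFFER_SIZE);
    if (!sampleBuffer) {
        return false;
    }

    IOAudioStream *stream = new IOAudioStream;
    if (!stream || !stream->initWithAudioEngine(this, kIOAudioStreamDirectionOutput, 1)) {
        return false;
    }

    IOAudioStreamFormat format = {
        NUM_CHANNELS,                                  // fNumChannels
        kIOAudioStreamSampleFormatLinearPCM,           // fSampleFormat
        kIOAudioStreamNumericRepresentationSignedInt,  // fNumericRepresentation
        BIT_DEPTH,                                     // fBitDepth
        BIT_DEPTH,                                     // fBitWidth
        kIOAudioStreamAlignmentHighByte,               // fAlignment
        kIOAudioStreamByteOrderBigEndian,              // fByteOrder (assumed)
        true,                                          // fIsMixable
        0                                              // fDriverTag
    };

    stream->addAvailableFormat(&format, &rate, &rate);
    stream->setFormat(&format);
    stream->setSampleBuffer(sampleBuffer, SAMPLE_BUFFER_SIZE);

    addAudioStream(stream);
    stream->release();

    return true;
}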
So here are two problems I'm running into:

1)
What is the maximum value I may return in getCurrentSampleFrame()? If I return values up to NUM_SAMPLE_FRAMES, the machine locks up completely, presumably somewhere in the CoreAudio core. This even happens when I IOMalloc() the sampleBuffer with a fear factor of 16, so I'm pretty sure it's not a memory access problem. Is there a magic upper limit?
Basically, getCurrentSampleFrame() is supposed to return the engine's current position in the ring buffer. The range for this value goes from 0 up to one less than the number of frames in the ring buffer. The value is used by the Family to control the erase head. Here's what the release notes for the family have to say about the values it returns (a short sketch follows the list):

- This value doesn't need to be exact, but it should never be larger than the current sample counter.
- This value is used for the erase head process, and it will erase up to, but not including, the sample frame returned by this function.
- If the value is too large, sound data that hasn't been played will be erased.
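In driver code, a minimal sketch of that contract might look like this, assuming a hypothetical readHardwareBytePosition() that stands in for however your hardware reports its current byte offset into the ring buffer:

UInt32 MyAudioEngine::getCurrentSampleFrame()
{
    UInt32 byteOffset = readHardwareBytePosition();

    // Convert bytes to sample frames; one frame is all channels for one instant in time.
    UInt32 frame = byteOffset / (NUM_CHANNELS * BYTES_PER_SAMPLE);

    // Keep the result in [0, NUM_SAMPLE_FRAMES - 1] so the erase head never
    // runs past the end of the ring buffer.
    return frame % NUM_SAMPLE_FRAMES;
}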
How is the term 'SampleFrame' to be understood? Is a SampleFrame something that contains samples for all channels in that stream? Or does it really mean *just one sample*?

If I return values up to NUM_SAMPLE_FRAMES/2, things somehow work, but I'd like to understand what's going on there instead of doing nasty trial-and-error.
A sample frame in this context is the collection of all the bytes for all the channels that correspond to a single point in time. Since the channels are interleaved, these bytes are all contiguous in the ring buffer, with the first channel in the most significant position. This terminology is explained well in <CoreAudio/CoreAudioTypes.h>, in the comments about the AudioStreamBasicDescription structure.
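Concretely, for the interleaved 16-bit stereo format above, frame n spans NUM_CHANNELS * BYTES_PER_SAMPLE contiguous bytes, and a given channel's sample could be located like this (illustration only; sampleForChannel() is not part of the family):

SInt16 *sampleForChannel(SInt16 *buffer, UInt32 frame, UInt32 channel)
{
    // Each frame holds one SInt16 per channel, channel 0 first.
    return buffer + (frame * NUM_CHANNELS) + channel;
}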
2)
The examples and the documentation both tell me that I need to call takeTimeStamp() every time the buffer wraps around. If I do so, this works well for outbound traffic, e.g. playback. When using the other direction (from the device to the host, e.g. recording), data seems to get erased in the ring buffer before CoreAudio has had a chance to handle it, which results in massive dropouts etc.

However, if I call takeTimeStamp() when my ring is at, for example, position 512, I get a lot fewer data errors but intolerable latency.
Likely as not, the problems you are having with getCurrentSampleFrame() are causing the erase head to do the wrong thing.
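For what it's worth, the usual pattern is to take the time stamp exactly when the hardware position wraps, something like this sketch (handlePositionUpdate() and previousFrame are placeholders for however your driver tracks the hardware position):

void MyAudioEngine::handlePositionUpdate()
{
    UInt32 currentFrame = getCurrentSampleFrame();

    // Call takeTimeStamp() exactly once per trip around the ring buffer,
    // i.e. whenever the hardware position wraps back toward frame 0.
    if (currentFrame < previousFrame) {
        takeTimeStamp();    // default arguments also increment the loop count
    }

    previousFrame = currentFrame;
}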
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden