
CoreAudio kernel interface


  • Subject: CoreAudio kernel interface
  • From: Daniel Mack <email@hidden>
  • Date: Fri, 17 Jun 2005 18:59:36 +0200

Hi all,

I've gotten somewhat stuck understanding how the CoreAudio kernel interface
is supposed to be used, especially how certain parameters play together.

To keep things simple, I started writing a driver that has only one sample
rate, one bit depth, 2 channels and 2 audio streams. So my parameters are

#define NUM_SAMPLE_FRAMES	16384
#define NUM_CHANNELS		2
#define BIT_DEPTH			16
#define BYTES_PER_SAMPLE	(BIT_DEPTH / 8)

In my AudioEngine, I created the IOAudioStreams and called

	audioStream->setSampleBuffer(sampleBuffer, sampleBufferSize);

on them, where sampleBufferSize is (NUM_CHANNELS * NUM_SAMPLE_FRAMES * BYTES_PER_SAMPLE).
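
For context, here is roughly how I set one of the streams up in initHardware()
(simplified: only the output stream, no error handling, and 44100 Hz stands in
for my single sample rate):

#include <IOKit/IOLib.h>
#include <IOKit/audio/IOAudioStream.h>
#include <IOKit/audio/IOAudioTypes.h>

// ... inside my IOAudioEngine subclass' initHardware():

UInt32 sampleBufferSize = NUM_CHANNELS * NUM_SAMPLE_FRAMES * BYTES_PER_SAMPLE;
void *sampleBuffer = IOMalloc(sampleBufferSize);

IOAudioStream *audioStream = new IOAudioStream;
audioStream->initWithAudioEngine(this, kIOAudioStreamDirectionOutput, 1);

IOAudioSampleRate rate;
rate.whole = 44100;		// the single rate the hardware runs at
rate.fraction = 0;

IOAudioStreamFormat format;
format.fNumChannels = NUM_CHANNELS;
format.fSampleFormat = kIOAudioStreamSampleFormatLinearPCM;
format.fNumericRepresentation = kIOAudioStreamNumericRepresentationSignedInt;
format.fBitDepth = BIT_DEPTH;
format.fBitWidth = BIT_DEPTH;
format.fAlignment = kIOAudioStreamAlignmentHighByte;
format.fByteOrder = kIOAudioStreamByteOrderBigEndian;
format.fIsMixable = true;
format.fDriverTag = 0;

audioStream->addAvailableFormat(&format, &rate, &rate);
audioStream->setFormat(&format);
audioStream->setSampleBuffer(sampleBuffer, sampleBufferSize);
addAudioStream(audioStream);

setSampleRate(&rate);
setNumSampleFramesPerBuffer(NUM_SAMPLE_FRAMES);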

So here are two problems I'm running into:

1)
What is the maximum value I may return in getCurrentSampleFrame()?
If I return values up to NUM_SAMPLE_FRAMES, the machine locks up
completely, presumably somewhere in the CoreAudio core. This even happens
when I IOMalloc() the sampleBuffer with a safety factor of 16, so I'm
fairly sure it's not a memory access problem. Is there a
magic upper-limit?

How is the term 'SampleFrame' to be understood? Is a SampleFrame something
that contains samples for all channels in that stream? Or does it really mean
*just one sample*?

If I return values only up to NUM_SAMPLE_FRAMES/2, things somehow work,
but I'd like to understand what's going on there instead of doing
nasty trial-and-error.
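
For what it's worth, here is roughly what my getCurrentSampleFrame() does at
the moment (readHardwareBytePosition() is just a placeholder for however the
hardware position gets read, and the division assumes a sample frame holds one
sample per channel, which is part of what I'm asking):

UInt32 MyAudioEngine::getCurrentSampleFrame()
{
	// Hardware position in bytes from the start of the ring buffer.
	UInt32 currentByte = readHardwareBytePosition();	// placeholder

	// Convert bytes to sample frames (one sample per channel per frame).
	UInt32 currentFrame = currentByte / (NUM_CHANNELS * BYTES_PER_SAMPLE);

	// Keep the result in the range 0 .. NUM_SAMPLE_FRAMES - 1.
	return currentFrame % NUM_SAMPLE_FRAMES;
}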

2)
The examples and the documentation both tell me that I need to call
takeTimeStamp() every time the buffer wraps around. If I do so, this works
well for outbound traffic, i.e. playback. In the other direction
(from the device to the host, i.e. recording), data in the ring buffer seems
to be erased before CoreAudio is able to handle it, which ends up in massive
dropouts etc.
However, if I call takeTimeStamp() when my ring is - for example - at
position 512, I get far fewer data errors but intolerable latency.
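
My wrap detection currently looks roughly like this (the handler name, the
hardware read and the previousFrame member are placeholders for my actual
code):

void MyAudioEngine::handleDeviceInterrupt()
{
	UInt32 currentByte = readHardwareBytePosition();	// placeholder
	UInt32 currentFrame = currentByte / (NUM_CHANNELS * BYTES_PER_SAMPLE);

	// If the position moved backwards, the ring buffer wrapped around.
	if (currentFrame < previousFrame) {
		takeTimeStamp();	// stamps "now" and bumps the loop count
	}

	previousFrame = currentFrame;
}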

Development is done on an iMac G5.

Can anybody enlighten me, please?
Thanks,
Daniel

  • Follow-Ups:
    • Re: CoreAudio kernel interface
      • From: Jeff Moore <email@hidden>
    • Re: CoreAudio kernel interface
      • From: Daniel Mack <email@hidden>