
Lost in the open ocean of Core Audio


  • Subject: Lost in the open ocean of Core Audio
  • From: Luke Evans <email@hidden>
  • Date: Mon, 23 Apr 2007 17:34:36 -0700
  • Thread-topic: Lost in the open ocean of Core Audio

.... though actually I only just set out :-)

I'm just starting to figure out audio, and I think I have some idea in terms
of a direction, but could use some advice.

I'm writing an emulator, which has a very limited kind of sound (OK, it's
really just capable of ON/OFF square-wave type noises).  I've got the output
side of the audio handling working (an output device calling my
AudioDeviceIOProc for data), and I've tested this with a test waveform.  Now
I need to sample the output from my little emulated machine (ON/OFF speaker
state) and somehow feed this through to the output.
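
For context, the working part looks roughly like this (a simplified sketch:
gSpeakerLevel is a hypothetical stand-in for my emulator's speaker state,
I'm assuming the device's canonical interleaved Float32 stream, and the
proc is registered with AudioDeviceAddIOProc and started with
AudioDeviceStart):

    #include <CoreAudio/CoreAudio.h>

    /* Stand-in for the emulator's speaker state; the emulation thread
       toggles this between 0.0 (OFF) and 1.0 (ON). */
    static volatile Float32 gSpeakerLevel = 0.0f;

    static OSStatus MyIOProc(AudioDeviceID inDevice,
                             const AudioTimeStamp *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp *inInputTime,
                             AudioBufferList *outOutputData,
                             const AudioTimeStamp *inOutputTime,
                             void *inClientData)
    {
        /* Assumes interleaved Float32 samples in the first buffer. */
        AudioBuffer *buf = &outOutputData->mBuffers[0];
        Float32 *out = (Float32 *)buf->mData;
        UInt32 frames = buf->mDataByteSize /
                        (sizeof(Float32) * buf->mNumberChannels);

        for (UInt32 f = 0; f < frames; f++)
            for (UInt32 c = 0; c < buf->mNumberChannels; c++)
                out[f * buf->mNumberChannels + c] = gSpeakerLevel;

        return noErr;
    }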

A number of problems come to mind:
1. There's not really much point in sampling the emulated machine at 44.1
kHz; better to sample at a much lower rate.
2. If I have my own timer that samples to a separate buffer, how do I
synchronise with the calls for me to fill the output buffer?  Surely I can
get out of step - though perhaps this doesn't matter too much, given the
lo-fi situation.  Plus, how do I set the lowest latencies possible
(presumably by asking for a short buffer length, as sketched after this
list)?
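
On the latency question in (2), as far as I can tell the knob to turn is
kAudioDevicePropertyBufferFrameSize.  An untested sketch, where
outputDevice is the AudioDeviceID I already have:

    #include <CoreAudio/CoreAudio.h>

    /* Ask the HAL for a 128-frame I/O buffer: roughly 3 ms per callback
       at 44.1 kHz.  The HAL clamps the request to the range reported by
       kAudioDevicePropertyBufferFrameSizeRange. */
    static OSStatus RequestSmallBuffer(AudioDeviceID outputDevice)
    {
        UInt32 frames = 128;
        return AudioDeviceSetProperty(outputDevice,
                                      NULL,   /* apply immediately */
                                      0,      /* master channel */
                                      false,  /* the output side */
                                      kAudioDevicePropertyBufferFrameSize,
                                      sizeof(frames),
                                      &frames);
    }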

My questions are about my options as well as best practice:
First, can I arrange for Core Audio to poll me directly for a single sample
at a time, at whatever point suits the rest of the audio pipeline?  Even
better if I can arrange for this sampling call to happen at an appropriate
rate.

Assuming I can't do this directly, what are my other options?  Is it easy to
synchronise my own sampling code with the audio sub-system, so that even if
I'm not called every sample, I can guarantee that my sample time is in sync?
I've looked at the Core Audio clock stuff, but that looks overly
complicated.
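
That said, if I've read the headers right, the IOProc is already handed an
AudioTimeStamp (inOutputTime) saying when its buffer will hit the hardware,
so maybe I can sidestep the clock API entirely by stamping my emulator's
speaker toggles against the same host clock.  A hypothetical sketch (the
Toggle type and SpeakerToggled are my own inventions, not Core Audio
facilities; the two host-time calls are real ones):

    #include <CoreAudio/CoreAudio.h>
    #include <CoreAudio/HostTime.h>

    /* Stamp each speaker toggle with the host clock.  The IOProc could
       convert inOutputTime->mHostTime the same way and place each toggle
       on the right output frame. */
    typedef struct { UInt64 hostNanos; Float32 level; } Toggle;

    static void SpeakerToggled(Float32 newLevel)
    {
        Toggle t;
        t.hostNanos = AudioConvertHostTimeToNanos(AudioGetCurrentHostTime());
        t.level     = newLevel;
        /* ... push t onto a queue drained by the IOProc ... */
        (void)t;
    }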

Perhaps there's a way to implement a simple AudioDevice and pretend that my
emulator is actually a real input device (like a microphone).  Presumably
such devices can advertise whatever audio streams they like (sample rates,
channels, etc.), and you can then put a varispeed unit between such a device
and the 'faster' output device(?).  If this is the best approach, is there a
code sample that demonstrates a synthetic audio input device (i.e. one with
no real hardware)?

In the absence of a better idea, I'll start with a separately timed sampling
routine, stuffing a ring buffer that empties into the AudioDeviceIOProc
buffer on request.  That should at least get something working - but I have
the feeling that it won't be the 'right' solution for audio generated in
real time like this.
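
Concretely, I have in mind something like this single-producer /
single-consumer ring (a sketch with my own names; with exactly one writer
thread and one reader it should need no lock, though plain volatile glosses
over memory-ordering subtleties):

    #include <stdint.h>

    #define RB_SIZE 4096  /* power of two, so wrap-around is a cheap mask */

    typedef struct {
        float             data[RB_SIZE];
        volatile uint32_t head;  /* written only by the producer */
        volatile uint32_t tail;  /* written only by the consumer */
    } RingBuffer;

    /* Emulator-timer side: returns 0 when full (caller drops the sample). */
    static int rb_write(RingBuffer *rb, float s)
    {
        uint32_t next = (rb->head + 1) & (RB_SIZE - 1);
        if (next == rb->tail) return 0;
        rb->data[rb->head] = s;
        rb->head = next;
        return 1;
    }

    /* IOProc side: returns 0 when empty (caller repeats the last sample). */
    static int rb_read(RingBuffer *rb, float *s)
    {
        if (rb->tail == rb->head) return 0;
        *s = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) & (RB_SIZE - 1);
        return 1;
    }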

Cheers

Luke