Re: Lost in the open ocean of Core Audio


  • Subject: Re: Lost in the open ocean of Core Audio
  • From: Daniel Oberhoff <email@hidden>
  • Date: Mon, 7 May 2007 10:06:27 +0200

Hi,

First off, I would use an AudioConverter to translate from your emulator's format to the output device's format. When the output device calls for a given number of frames, you feed that many frames into the converter, which hands them back in the device's preferred format (it also does sample rate conversion). The simplest approach in terms of timing is to have your emulator generate sound on demand, i.e. only when asked by the output device (through the converter). I suppose this should be fine for your scenario, since the generator should be rather cheap.

The other way is to use a ring buffer, as you say, and time your emulator some other way. Then either your emu and the audio are in perfect sync (which I suppose is possible, since they end up using the same set of timers somewhere anyhow), or you need a way to compensate, i.e. drop frames or have the emulator speed up/slow down a few ticks when the buffer under- or overflows.
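
Very roughly, the shape of that pull model (just a sketch: the EmulatorRenderSquareWave hook, the formats, and the converter/IOProc setup are placeholders, and it assumes the output stream is interleaved Float32):

#include <AudioToolbox/AudioConverter.h>
#include <CoreAudio/CoreAudio.h>

static AudioConverterRef gConverter;        // made with AudioConverterNew(emu fmt -> device fmt)
static AudioStreamBasicDescription gEmuFmt; // e.g. mono 16-bit integer at some low rate

// Hypothetical emulator hook: writes 'frames' ON/OFF square-wave samples.
extern void EmulatorRenderSquareWave(SInt16 *dst, UInt32 frames);

// The converter calls this whenever it needs more source data.
static OSStatus EmuInputProc(AudioConverterRef inConverter,
                             UInt32 *ioNumberDataPackets,
                             AudioBufferList *ioData,
                             AudioStreamPacketDescription **outDesc,
                             void *inUserData)
{
    static SInt16 scratch[4096];             // not reentrant; fine for one converter
    UInt32 frames = *ioNumberDataPackets;
    if (frames > 4096) frames = 4096;

    EmulatorRenderSquareWave(scratch, frames);   // generate sound on demand

    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mNumberChannels = gEmuFmt.mChannelsPerFrame;
    ioData->mBuffers[0].mDataByteSize   = frames * gEmuFmt.mBytesPerFrame;
    ioData->mBuffers[0].mData           = scratch;
    *ioNumberDataPackets = frames;
    return noErr;
}

// The device IOProc just asks the converter for exactly what the HAL wants.
static OSStatus MyIOProc(AudioDeviceID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    // assuming interleaved Float32 on the output stream
    UInt32 frames = outOutputData->mBuffers[0].mDataByteSize /
                    (sizeof(Float32) * outOutputData->mBuffers[0].mNumberChannels);
    return AudioConverterFillComplexBuffer(gConverter, EmuInputProc, NULL,
                                           &frames, outOutputData, NULL);
}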

May I ask what kind of emulator you are writing? Sounds interesting :).

Daniel

On 24.04.2007, at 02:34, Luke Evans wrote:

.... though actually I only just set out :-)

I'm just starting to figure out audio, and I think I have some idea of a direction, but I could use some advice.


I'm writing an emulator, which has a very limited kind of sound (OK, it's really just capable of ON/OFF square-wave type noises). I've got the output side of the audio handling working (an output device sending my AudioDeviceIOProc calls for data), and I've tested this with a test waveform. Now I need to sample the output from my little emulated machine (ON/OFF speaker state) and somehow feed this through to the output.


A number of problems come to mind:
1. There's not really much point in sampling the emulated machine at 44 kHz; better to sample at a much lower rate.
2. If I have my own timer that samples into a separate buffer, how do I synchronise with the calls asking me to fill the output buffer? Surely I can get out of step - though perhaps this doesn't matter too much, given the low-fi situation. Plus, how do I set the lowest latencies possible (presumably by asking for a short buffer length - see the sketch after this list)?
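
For reference, here's a rough, untested sketch of what I presume the short-buffer request looks like, using kAudioDevicePropertyBufferFrameSize (the device is assumed to have been obtained already, the 256-frame value is just an illustration, and error handling is omitted):

#include <CoreAudio/CoreAudio.h>

// Ask the HAL for a small I/O buffer on the output side of 'device'.
// At 44.1 kHz, 256 frames is roughly 5.8 ms per buffer.
static OSStatus RequestSmallBuffer(AudioDeviceID device, UInt32 frames)
{
    return AudioDeviceSetProperty(device,
                                  NULL,     // apply immediately
                                  0,        // master channel
                                  false,    // false = output section
                                  kAudioDevicePropertyBufferFrameSize,
                                  sizeof(frames),
                                  &frames);
}

// e.g. RequestSmallBuffer(outputDevice, 256);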


The questions I have are about my options as well as best practice. First, can I arrange for Core Audio to poll me directly for a single sample, at a time appropriate to the rest of the audio pipeline? Even better if I can arrange for this sampling call to be at an appropriate rate.


Assuming I can't do this directly, what are my other options? Is it easy to
synchronise my own sampling code with the audio sub-system, so that even if
I'm not called every sample, I can guarantee that my sample time is in sync?
I've looked at the Core Audio clock stuff, but that looks overly
complicated.
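
(To be concrete about what I mean by staying in sync: I gather the IOProc is already handed timestamps, so presumably I could just note the sample-time/host-time pair it delivers on each callback, along the lines of the untested sketch below, and steer my own timer from that.)

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>

// Called from the IOProc with the inOutputTime it was handed; records the
// most recent (sample time, host time) pair so other code can line up with it.
static void NoteDeviceTime(const AudioTimeStamp *outputTime,
                           Float64 *lastSampleTime, UInt64 *lastHostNanos)
{
    if (outputTime->mFlags & kAudioTimeStampSampleTimeValid)
        *lastSampleTime = outputTime->mSampleTime;
    if (outputTime->mFlags & kAudioTimeStampHostTimeValid)
        *lastHostNanos = AudioConvertHostTimeToNanos(outputTime->mHostTime);
}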


Perhaps there's a way to implement a simple AudioDevice and pretend that my
emulator is actually a real input device (like a microphone). Presumably
such devices can assert whatever audio streams they like (sample rates,
channels etc.), and you can then put a vari-speed unit between such a device
and the 'faster' output device(?). If this is the best approach, then is
there a code sample that demonstrates a synthetic audio input device (i.e.
no real hardware)?


In the absence of a better idea, I'll start with a separately timed sampling
routine, stuffing a ring buffer that empties into the AudioDeviceIOProc
buffer on request. That should at least get something working - but I have
the feeling that it won't be the 'right' solution for audio generated in
real time like this.
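
Roughly the shape I have in mind for that ring buffer (a single-producer/single-consumer affair; names and sizes are placeholders, and the drop-on-overflow / zero-fill-on-underflow cases are exactly where I expect to need the compensation discussed above):

#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define RING_CAPACITY 8192   // power of two, in samples

typedef struct {
    int16_t     data[RING_CAPACITY];
    atomic_uint writePos;    // advanced only by the emulator's timer routine
    atomic_uint readPos;     // advanced only by the AudioDeviceIOProc
} SampleRing;

// Producer side: the emulator's sampling routine stuffs samples in.
static unsigned RingWrite(SampleRing *r, const int16_t *src, unsigned n)
{
    unsigned w = atomic_load(&r->writePos);
    unsigned rd = atomic_load(&r->readPos);
    unsigned space = RING_CAPACITY - (w - rd);
    if (n > space) n = space;                        // drop on overflow
    for (unsigned i = 0; i < n; i++)
        r->data[(w + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store(&r->writePos, w + n);
    return n;
}

// Consumer side: the IOProc drains it, zero-filling on underflow.
static void RingRead(SampleRing *r, int16_t *dst, unsigned n)
{
    unsigned w = atomic_load(&r->writePos);
    unsigned rd = atomic_load(&r->readPos);
    unsigned avail = w - rd;
    unsigned take = (n < avail) ? n : avail;
    for (unsigned i = 0; i < take; i++)
        dst[i] = r->data[(rd + i) & (RING_CAPACITY - 1)];
    memset(dst + take, 0, (n - take) * sizeof(int16_t));
    atomic_store(&r->readPos, rd + take);
}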


Cheers

Luke