Re: DefaultOutputDevice timestamps
- Subject: Re: DefaultOutputDevice timestamps
- From: email@hidden
- Date: Mon, 22 Oct 2001 04:59:54 -0500 (CDT)
I finally managed to find time to test things out. The code from
Developer/Examples that you mention below does indeed have the timestamp
working. That code uses the AudioUnit framework, whereas I am just using
the AudioDevice (technically an AudioDeviceID in most of the associated
code) returned by

    AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                             &count, (void *) &device);

and driven through calls such as AudioDeviceStart(device, appIOProcSWC),
so I can't really derive a solution from that example without switching
to AudioUnits. That shouldn't be necessary, and in any case the callback
signature expected by an AudioUnit drops the very "outOutputTime" field
I'm trying to use. I just want to know when my output is actually going
out, both so that my output is properly synchronized and so that I can
resynchronize my waveform-generating functions. If an AudioDevice gives
me AudioTimeStamps with the kAudioTimeStampHostTimeValid flag set in the
mFlags field, shouldn't that guarantee that the host time is indeed
valid? I'm obviously still confused about why this isn't working when it
says it should be. Should I be setting a property on the AudioDevice, or
performing some initialization on it after I retrieve it through the
getProperty call above? It may help to know the related calls I make, so
here are the two methods I use to set up the AudioDevice and to start it
calling my callback for output:
- (void) setup
{
    OSStatus err = kAudioHardwareNoError;
    UInt32 count;

    device = kAudioDeviceUnknown;

    // get the default output device for the HAL
    count = sizeof(device);
    err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                   &count, (void *) &device);

    // get the buffer size that the default device uses for IO
    count = sizeof(deviceBufferSize);
    err = AudioDeviceGetProperty(device, 0, false,
                                 kAudioDevicePropertyBufferSize,
                                 &count, &deviceBufferSize);

    // get a description of the data format used by the default device
    count = sizeof(deviceFormat);
    err = AudioDeviceGetProperty(device, 0, false,
                                 kAudioDevicePropertyStreamFormat,
                                 &count, &deviceFormat);
}
- (BOOL)start
{
    OSStatus err = kAudioHardwareNoError;
    SimpleWaveController *def;

    if (isPlaying) return false;

    def = self;
    // set up our device with an IO proc
    err = AudioDeviceAddIOProc(device, appIOProcSWC, (void *) def);
    // start playing sound through the device
    err = AudioDeviceStart(device, appIOProcSWC);
    isPlaying = true;  // set the playing status global to true
    return true;
}
In the above I've eliminated my error-handling code and some other things
that weren't directly related and would only add confusion. Right now I
assume my callback can't be the responsible party, since it only reports
the AudioTimeStamps and writes output data. Anyway, any advice is
appreciated. Oh yes, James McCartney (www.audiosynth.com) wrote the code
on which the above is based.
Thanks,
Ben
On Tue, 9 Oct 2001, Bill Stewart wrote:
> Have a look at the code in
> /Developer/Examples/CoreAudio/Services/DefaultOutput/
>
> There are two pieces of code there - one that uses the
> AudioConverter, and one that expects the output unit to do the
> conversion for you.
>
> If you run the code that uses the AudioConverter directly, you
> should see that the host time field marches happily along - in this
> case it is just passed through from the AudioDevice.
>
> Bill