Re: AudioOutputUnit and timestamps
- Subject: Re: AudioOutputUnit and timestamps
- From: Bill Stewart <email@hidden>
- Date: Sat, 27 Apr 2002 11:42:11 -0700
on 27/4/02 1:29 AM, Kurt Revis wrote:
> I'm using the DefaultAudioOutput AU to play a sound from a file. I
> provide the AU an input callback function, using
> AudioUnitSetProperty(kAudioUnitProperty_SetInputCallback).
>
> It's basically working fine--the sound plays correctly--but now I want
> to use the timestamps that my input callback is given. I don't quite
> understand what they're telling me.
>
> * My sound format is 16 bits per sample, stereo, 44.1k. This means that
> there are 4 bytes per frame. The input callback is given a buffer which
> is 2048 bytes in length, which is thus 512 frames (or 1024 samples).
> However, in the timestamps that I am given, the only valid field is
> mSampleTime, and it increases by only 256 each time my callback is
> called. I would expect 512 (or maybe even 1024, although that's
> stretching it)--why is it different?
This implies that a sample rate conversion is being done for you. Are you
sure that you've set the input format of the output unit? That is, use
AudioUnitSetProperty to set kAudioUnitProperty_StreamDescription on the
input scope (with an elementID of, I would guess, 0).
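A minimal sketch of that call (not from the original mail - outputUnit here
stands for an already-opened default output unit, and the property is
spelled kAudioUnitProperty_StreamFormat in later headers):

#include <AudioUnit/AudioUnit.h>

static OSStatus SetInputFormat(AudioUnit outputUnit)
{
    AudioStreamBasicDescription format;

    format.mSampleRate       = 44100.0;
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                               kLinearPCMFormatFlagIsBigEndian |   /* big-endian assumed for PPC */
                               kLinearPCMFormatFlagIsPacked;
    format.mBytesPerPacket   = 4;   /* 2 bytes * 2 channels */
    format.mFramesPerPacket  = 1;
    format.mBytesPerFrame    = 4;
    format.mChannelsPerFrame = 2;
    format.mBitsPerChannel   = 16;
    format.mReserved         = 0;

    /* set on the input scope of element (bus) 0 */
    return AudioUnitSetProperty(outputUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                0,
                                &format,
                                sizeof(format));
}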
In /Developer/Examples/CoreAudio/Services there are two examples of doing
this: one sets the input stream description and has the output unit do the
conversion for you (it uses an AudioConverter internally); the second
listens for the stream formats of the device and feeds the data to the
output unit using an AudioConverter explicitly.
As well... currently, when any conversion is being done by the output unit,
it isn't passing the host time field through. We have fixed this in Jaguar.
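In the meantime, it's worth checking the flags before trusting any field of
the timestamp. An illustrative fragment (inTimeStamp stands for the
const AudioTimeStamp * your input callback receives):

    if (inTimeStamp->mFlags & kAudioTimeStampSampleTimeValid) {
        Float64 sampleTime = inTimeStamp->mSampleTime;  /* valid today */
        /* ... work in the sample timeline ... */
    }
    if (inTimeStamp->mFlags & kAudioTimeStampHostTimeValid) {
        UInt64 hostTime = inTimeStamp->mHostTime;       /* passed through once the fix is in */
        /* ... schedule against host time ... */
    }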
> * It doesn't seem that I can translate this sample-based timestamp to a
> host time. If I run the timestamp through AudioDeviceTranslateTime(),
> using the correct device, I get a host time which is nowhere near the
> value that AudioGetCurrentHostTime() returns.
>
> (I suppose I understand why--the mSampleTime starts at 0 when the AU is
> started, but the device's sample timebase doesn't. But is there some
> other way to do what I want?)
By doing the conversion yourself, the timestamps you see from the output
unit are those of the device's output stream, as published by the device.
Check the code mentioned above.
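As a rough sketch of that translation (not from the original mail -
outputDevice and deviceSampleTime are illustrative names, and the sample
time has to already be in the device's own timeline for the result to mean
anything):

#include <CoreAudio/CoreAudio.h>

static UInt64 HostTimeForDeviceSampleTime(AudioDeviceID outputDevice,
                                          Float64 deviceSampleTime)
{
    AudioTimeStamp inTime  = { 0 };
    AudioTimeStamp outTime = { 0 };

    inTime.mSampleTime = deviceSampleTime;              /* device-timeline sample time */
    inTime.mFlags      = kAudioTimeStampSampleTimeValid;
    outTime.mFlags     = kAudioTimeStampHostTimeValid;  /* ask for a host time back */

    if (AudioDeviceTranslateTime(outputDevice, &inTime, &outTime) == noErr)
        return outTime.mHostTime;   /* comparable to AudioGetCurrentHostTime() */

    return 0;
}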
> The AudioOutputUnit interface is nice and easy to use, but it seems
> awfully limiting to not be able to find out when a given buffer of data
> will actually get played. (Not to mention, contrary to the whole
> CoreAudio philosophy...)
Which is why we've fixed it:)
>
> Thanks in advance for any information or advice.
>
> Kurt Revis
> email@hidden

Bill
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Thousands of years ago, cats were worshipped as gods. We have never
forgotten this."
__________________________________________________________________________
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.