I asked this some time ago, and didn't get a response. Forgive me if this is really simple...
I'm getting into Core Audio, and one thing is perplexing me. I'm playing a generated sound via the default output Audio Unit. I set a render procedure via kAudioUnitProperty_SetRenderCallback; that callback then calls AudioConverterFillComplexBuffer to convert from the native format I'm using to whatever the output wants to play. This all works pretty well, with one exception: how do I know when the sound has finished playing?
I can attempt to count samples after the call to AudioConverterFillComplexBuffer and translate that into a time, but how do you even determine when the sound started to play? The render procedure takes an AudioTimeStamp as input, but the documentation says that is the current timestamp, which is really useless.
It's important not to terminate playback before all the samples have been heard, and also important not to keep playing too long afterward. How can this be done? I've spent several hours googling and searching through the mailing list archives, to no avail.
The sounds in question can range from a second to several minutes long.
-Norman