do priming and padding frames need to be considered when initializing ScheduledAudioFileRegion for use with AUAudioFilePlayer ?
- Subject: do priming and padding frames need to be considered when initializing ScheduledAudioFileRegion for use with AUAudioFilePlayer ?
- From: Andy Davidson <email@hidden>
- Date: Sun, 09 Sep 2012 13:10:31 -0700
Hi
I recently started playing around with the AUAudioFilePlayer. To keep things simple I did my initial testing with an LPCM file. I have basic play, pause, and stop implemented for multiple regions. Now I want to start testing with compressed formats.
I assume the AUAudioFilePlayer does not pass priming or padding frames through when rendering. Is this correct?
When my user presses "pause" I call:

SCICheckError(AudioUnitGetProperty(_filePlayerAU,
                                   kAudioUnitProperty_CurrentPlayTime,
                                   kAudioUnitScope_Global,
                                   0,
                                   &currentTime,
                                   &size),
Will the returned value equal priming frames + song frames + padding frames? (Do I need to take this into account when configuring the ScheduledAudioFileRegion's mTimeStamp.mSampleTime?)
How should mStartFrame and mFramesToPlay be set? For example, if I wanted to play the entire song, I assume mStartFrame = 0; should mFramesToPlay = packetCount * mFramesPerPacket? Do I need to subtract the padding frames?
If I want to schedule a region to play, should mStartFrame = padding frames + #framesFor(endTimeInSeconds - startTimeInSeconds)?
Thanks in advance
Andy
_______________________________________________
Coreaudio-api mailing list (email@hidden)