Re: AVAudioPlayerNode - Mix between schedule buffers and segments
- Subject: Re: AVAudioPlayerNode - Mix between schedule buffers and segments
- From: Vincent CARLIER <email@hidden>
- Date: Mon, 03 Apr 2017 14:44:35 +0200
More details on how I compute positions in audio files:
First I get the file's processing format. Then, for the start and end of each audio section in my file, I compute a frame position like so:
position = AVAudioFramePosition(time * format.sampleRate)
where time is a value in seconds.
Now I break each audio section into 3 parts:
[ buffer ] - [ segment ] - [ buffer ]
The start and end buffers are 1024 frames long, and I allocate them with that capacity.
Then I compute the segment's start frame: segmentStart = sectionStartPosition + 1024
I do the same for the segment's end frame: segmentEnd = sectionEndPosition - 1024
From that I deduce the segment's length in frames: AVAudioFrameCount(segmentEnd - segmentStart)
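To make the arithmetic concrete, here is a minimal Swift sketch of the scheme above. The function name, the parameter names, and the assumption that the section times arrive in seconds are mine; the engine/player setup and error handling are omitted, so treat this as an illustration rather than a drop-in implementation:

```swift
import AVFoundation

// Sketch: schedule one section as [buffer] - [segment] - [buffer].
// `startTime`/`endTime` are the section boundaries in seconds (assumed).
func scheduleSection(player: AVAudioPlayerNode,
                     file: AVAudioFile,
                     startTime: Double,
                     endTime: Double) throws {
    let format = file.processingFormat
    let edge: AVAudioFrameCount = 1024

    // Convert seconds to frame positions using the processing format's rate.
    let sectionStart = AVAudioFramePosition(startTime * format.sampleRate)
    let sectionEnd   = AVAudioFramePosition(endTime * format.sampleRate)

    guard let head = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: edge),
          let tail = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: edge)
    else { return }

    // Leading buffer: the first 1024 frames of the section.
    file.framePosition = sectionStart
    try file.read(into: head, frameCount: edge)

    // Middle segment: everything between the two edge buffers.
    let segmentStart  = sectionStart + AVAudioFramePosition(edge)
    let segmentEnd    = sectionEnd - AVAudioFramePosition(edge)
    let segmentFrames = AVAudioFrameCount(segmentEnd - segmentStart)

    // Trailing buffer: the last 1024 frames of the section.
    file.framePosition = segmentEnd
    try file.read(into: tail, frameCount: edge)

    player.scheduleBuffer(head, completionHandler: nil)
    player.scheduleSegment(file, startingFrame: segmentStart,
                           frameCount: segmentFrames, at: nil,
                           completionHandler: nil)
    player.scheduleBuffer(tail, completionHandler: nil)
}
```

Note that this assumes each section is longer than 2048 frames; otherwise segmentFrames would underflow and should be guarded against.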
Is that right?
Coreaudio-api mailing list (email@hidden)