What am I trying to do?
The app shows video thumbnails to the user so they can select a certain range of frames from the video file.
I have to apply the ramp slow-mo only to the selected duration (frames) of the video.
Ramp slow-mo: time stretching + pitch shifting, applied dynamically. I have to vary the time stretch and the pitch in a loop, say for some 10 iterations. It is nothing but the behaviour of the English letter "U": I increase the time stretch and pitch and then decrease them again.
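To make the "U" idea concrete, here is a minimal sketch of the kind of rate/pitch curve I mean (the step count, minimum rate and cosine shape are just example choices of mine; the pitch values are in cents, as a time/pitch audio unit would expect):

```swift
import Foundation

// A rough sketch of the "U" curve: the playback rate starts at 1.0,
// dips to `minRate` in the middle of the selection, and comes back to 1.0
// (i.e. the stretch increases and then decreases again).
let steps = 10
let minRate = 0.25                         // deepest slow-mo, half-way through

let rates: [Float] = (0..<steps).map { i in
    let t = Double(i) / Double(steps - 1)  // 0 ... 1 across the selection
    let u = (cos(2 * .pi * t) + 1) / 2     // 1 → 0 → 1, the "U" shape
    return Float(minRate + (1 - minRate) * u)
}

// If the pitch should follow the rate (tape-style slow-mo), 12·log2(rate)
// semitones is the matching offset; expressed here in cents.
let pitches: [Float] = rates.map { Float(1200 * log2(Double($0))) }
```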
How I am trying to do it:
1. Finding the start/end times of the user-selected portion of the video.
2. Splitting the video and audio into separate tracks using an AVFoundation mutable composition (see the sketch after this list).
3. For video: applying time stretching only to the duration between the start and end times. No problem here; it works as I expect.
4. For audio: doing time stretching and pitch shifting dynamically, again only for the selected time range. I am currently using Dirac for this; now I would like to use the iOS SDK itself.
5. Merging the audio and video after the slow-mo into a single file and storing it.
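To make steps 1–3 concrete, this is roughly what I am doing with AVMutableComposition (a simplified, untested sketch; `splitAndStretch`, `selection` and `slowFactor` are just names I picked here):

```swift
import AVFoundation

// Rough sketch of steps 1–3: build separate video and audio compositions
// and stretch only the selected range of the video track.
func splitAndStretch(asset: AVAsset,
                     selection: CMTimeRange,
                     slowFactor: Float64) throws -> (video: AVMutableComposition,
                                                     audio: AVMutableComposition) {
    let videoComp = AVMutableComposition()
    let audioComp = AVMutableComposition()

    guard
        let srcVideo = asset.tracks(withMediaType: .video).first,
        let srcAudio = asset.tracks(withMediaType: .audio).first,
        let videoTrack = videoComp.addMutableTrack(withMediaType: .video,
                                                   preferredTrackID: kCMPersistentTrackID_Invalid),
        let audioTrack = audioComp.addMutableTrack(withMediaType: .audio,
                                                   preferredTrackID: kCMPersistentTrackID_Invalid)
    else { throw NSError(domain: "SlowMo", code: -1, userInfo: nil) }

    let fullRange = CMTimeRange(start: .zero, duration: asset.duration)
    try videoTrack.insertTimeRange(fullRange, of: srcVideo, at: .zero)
    try audioTrack.insertTimeRange(fullRange, of: srcAudio, at: .zero)

    // Video: stretch only the user-selected range (this part already works for me).
    let stretchedDuration = CMTimeMultiplyByFloat64(selection.duration, multiplier: slowFactor)
    videoTrack.scaleTimeRange(selection, toDuration: stretchedDuration)

    // Audio: left untouched here; it is exported to a URL (e.g. with an
    // AVAssetExportSession audio preset) and processed separately in step 4.
    return (videoComp, audioComp)
}
```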
So far, I've found that OpenAL or Audio Units (using AUVarispeed alone, or AUTimePitch together with a time rate) can help.
After splitting the video and audio, I will have the audio URL to proceed further with either OpenAL or Audio Units.
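For step 4 / question 1, the closest iOS-SDK-only route I can see is AVAudioEngine in offline (manual rendering) mode with AVAudioUnitTimePitch, which, unlike AUVarispeed, lets me set rate and pitch independently. Below is an untested sketch that processes the extracted audio in chunks, changing rate and pitch per chunk with the `rates`/`pitches` arrays from the ramp sketch above. It assumes iOS 11+ for manual rendering, and that the file at `inputURL` contains only the selected segment (the untouched parts would be re-attached afterwards); latency/tail handling and format conversion are glossed over.

```swift
import AVFoundation

// Sketch of step 4 without Dirac: offline AVAudioEngine + AVAudioUnitTimePitch,
// ramping rate and pitch chunk by chunk.
func renderRampedAudio(from inputURL: URL, to outputURL: URL,
                       rates: [Float], pitches: [Float]) throws {
    let inputFile = try AVAudioFile(forReading: inputURL)
    let format = inputFile.processingFormat

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()   // rate = time stretch, pitch = cents

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: format)
    engine.connect(timePitch, to: engine.mainMixerNode, format: format)

    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()
    player.play()

    let outputFile = try AVAudioFile(forWriting: outputURL,
                                     settings: inputFile.fileFormat.settings)
    let renderBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                        frameCapacity: engine.manualRenderingMaximumFrameCount)!

    // One input chunk per ramp step.
    let totalFrames = AVAudioFrameCount(inputFile.length)
    let chunkFrames = totalFrames / AVAudioFrameCount(rates.count)

    for (rate, pitch) in zip(rates, pitches) {
        timePitch.rate = rate        // time stretch for this chunk
        timePitch.pitch = pitch      // pitch shift for this chunk, in cents

        guard chunkFrames > 0,
              let chunk = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: chunkFrames)
        else { break }
        try inputFile.read(into: chunk, frameCount: chunkFrames)
        player.scheduleBuffer(chunk, completionHandler: nil)

        // A slower rate produces more output frames for the same input chunk.
        var remaining = AVAudioFrameCount(Float(chunkFrames) / rate)
        while remaining > 0 {
            let frames = min(remaining, renderBuffer.frameCapacity)
            if try engine.renderOffline(frames, to: renderBuffer) != .success { break }
            try outputFile.write(from: renderBuffer)
            remaining -= frames
        }
    }
    engine.stop()
}
```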
My questions:
1. Can anybody help me avoid Dirac (see bullet 4)? In other words, how do I do time stretching and pitch shifting dynamically?
2. How do I apply it only to the selected frames (between the start and end times)?
3. I am not sure how to feed AVFoundation's audio URL into an OpenAL or Audio Unit buffer, and, after the slow-mo effect, how to finally merge the result back with AVFoundation's video reference URL (my rough idea for the merge is sketched below).
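For question 3, this is the rough shape of the final merge I have in mind, assuming the processed audio has already been written to a file (untested; the function name, URLs and error handling are placeholders):

```swift
import AVFoundation

// Sketch of step 5 / question 3: add the processed audio file as a track under
// the stretched video composition and export a single movie.
func mergeAndExport(videoComposition: AVMutableComposition,
                    processedAudioURL: URL,
                    outputURL: URL,
                    completion: @escaping (Error?) -> Void) {
    let audioAsset = AVURLAsset(url: processedAudioURL)

    guard
        let srcAudio = audioAsset.tracks(withMediaType: .audio).first,
        let audioTrack = videoComposition.addMutableTrack(withMediaType: .audio,
                                                          preferredTrackID: kCMPersistentTrackID_Invalid),
        let exporter = AVAssetExportSession(asset: videoComposition,
                                            presetName: AVAssetExportPresetHighestQuality)
    else {
        completion(NSError(domain: "SlowMo", code: -2, userInfo: nil))
        return
    }

    do {
        // The processed audio already has the slow-mo section at its new length,
        // so it lines up with the stretched video and is inserted as-is.
        try audioTrack.insertTimeRange(CMTimeRange(start: .zero, duration: audioAsset.duration),
                                       of: srcAudio, at: .zero)
    } catch {
        completion(error)
        return
    }

    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously {
        completion(exporter.error)
    }
}
```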
Any kind of reference or sample code would help a lot.