I'm writing an audio player for iOS 5 that needs these features:
1. Independent volume control (and mute) for the left and right channels
2. Left and/or right channel fade: when playback is stopped, optionally fade it out over 2 seconds
3. A user-configurable stop point: automatically stop playback when a given point in the file is reached
4. A scrubber
I'm trying to decide between using the new AudioFilePlayer unit (two instances, one panned left and one panned right, run through a mixer) and using a mixer with an input render callback (after loading the audio into memory).

I have the file player version working (wired up roughly as in the sketch below), but I don't know how to get the scrubber working, and there seems to be a bug in the 5.1 OS where stopping the graph and restarting it after some time fails to resume playback. So I'm down to the render callback option, but I'm concerned about memory, since this is iPhone and the files are supplied by the user, so they could be large.
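For reference, here is roughly how the working file-player version is wired up. This is a sketch with all error checking omitted, and the function and variable names are my own:

```
#include <AudioToolbox/AudioToolbox.h>

// Two AUFilePlayer nodes, panned hard left/right on a multichannel
// mixer, feeding RemoteIO. Per-bus mixer volume doubles as the
// independent channel volume/mute control.
static AUGraph createPlayerGraph(AudioUnit *outLeft, AudioUnit *outRight,
                                 AudioUnit *outMixer)
{
    AUGraph graph;
    AUNode leftNode, rightNode, mixerNode, ioNode;

    NewAUGraph(&graph);

    AudioComponentDescription desc = {0};
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    desc.componentType    = kAudioUnitType_Generator;
    desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    AUGraphAddNode(graph, &desc, &leftNode);
    AUGraphAddNode(graph, &desc, &rightNode);

    desc.componentType    = kAudioUnitType_Mixer;
    desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
    AUGraphAddNode(graph, &desc, &mixerNode);

    desc.componentType    = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    AUGraphAddNode(graph, &desc, &ioNode);

    // Each file player feeds its own mixer input bus.
    AUGraphConnectNodeInput(graph, leftNode,  0, mixerNode, 0);
    AUGraphConnectNodeInput(graph, rightNode, 0, mixerNode, 1);
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode,    0);

    AUGraphOpen(graph);
    AUGraphNodeInfo(graph, leftNode,  NULL, outLeft);
    AUGraphNodeInfo(graph, rightNode, NULL, outRight);
    AUGraphNodeInfo(graph, mixerNode, NULL, outMixer);

    // Pan each input hard to one side.
    AudioUnitSetParameter(*outMixer, kMultiChannelMixerParam_Pan,
                          kAudioUnitScope_Input, 0, -1.0f, 0);
    AudioUnitSetParameter(*outMixer, kMultiChannelMixerParam_Pan,
                          kAudioUnitScope_Input, 1,  1.0f, 0);

    AUGraphInitialize(graph);
    return graph;
}
```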
Questions:
1. I read through Chris Adamson and Kevin Avila's book (Learning Core Audio), and it seemed to indicate that you can manually read from a file and feed the data to an Audio Unit, but maybe I'm misunderstanding. Is that possible? Advisable? Would that be the way to alleviate the memory concern on iPhone? (The first sketch after this list shows what I think that would look like.)
2. Is there a way, using the AudioFilePlayer unit, to monitor the playback time and update a scrubber, and then seek to a new playback time if the user moves the scrubber? (The second sketch after this list is my best guess.)
3. Using Tom Zicarelli's AudioGraph sample (https://github.com/tkzic/audiograph): if I turn on the filePlayer input and hit "Play", then "Stop", wait a few seconds, and hit "Play" again, the input level views keep updating but no sound plays. Is this a bug I should file a radar on, or am I misunderstanding something?
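For question 1, this is the sort of thing I imagine the book is describing: a render callback on a mixer input bus that copies frames out of a buffer I manage myself (filled, say, with ExtAudioFileRead on a background thread). PlayerState and everything in it are placeholder names of mine, and I'm assuming the bus is configured for mono Float32 samples:

```
#include <AudioToolbox/AudioToolbox.h>

// Placeholder state: decoded mono Float32 samples plus a playhead.
typedef struct {
    Float32 *samples;      // decoded sample data
    UInt32   totalFrames;  // frames available in the buffer
    UInt32   playhead;     // next frame to hand to the callback
} PlayerState;

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    PlayerState *state = (PlayerState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        if (state->playhead < state->totalFrames) {
            out[i] = state->samples[state->playhead++];
        } else {
            out[i] = 0.0f;  // past the end of the file: render silence
        }
    }
    return noErr;
}

// Attached to one mixer input bus with something like:
//   AURenderCallbackStruct cb = { renderCallback, &playerState };
//   AUGraphSetNodeInputCallback(graph, mixerNode, 0, &cb);
```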
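For question 2, the closest I've pieced together is polling kAudioUnitProperty_CurrentPlayTime for the position and rescheduling the file region to seek, along these lines (function names and parameters are mine; I'm not sure this is the intended approach, which is why I'm asking):

```
#include <AudioToolbox/AudioToolbox.h>

// Position: mSampleTime counts frames rendered since the region was
// started (-1 if not playing), so add the frame the region started at.
Float64 currentSeconds(AudioUnit filePlayer, SInt64 regionStartFrame,
                       Float64 sampleRate)
{
    AudioTimeStamp ts;
    UInt32 size = sizeof(ts);
    AudioUnitGetProperty(filePlayer, kAudioUnitProperty_CurrentPlayTime,
                         kAudioUnitScope_Global, 0, &ts, &size);
    return (regionStartFrame + ts.mSampleTime) / sampleRate;
}

// Seek: reset the unit, reschedule the region from the new frame, and
// set a new start timestamp (-1 = start as soon as possible).
void seekToFrame(AudioUnit filePlayer, AudioFileID file,
                 SInt64 newStartFrame, UInt32 framesToPlay)
{
    AudioUnitReset(filePlayer, kAudioUnitScope_Global, 0);

    ScheduledAudioFileRegion region = {0};
    region.mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile    = file;
    region.mStartFrame   = newStartFrame;
    region.mFramesToPlay = framesToPlay;
    AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));

    AudioTimeStamp start = {0};
    start.mFlags      = kAudioTimeStampSampleTimeValid;
    start.mSampleTime = -1;
    AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &start, sizeof(start));
}
```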