Since the C Audio Session API has been deprecated as of iOS 7, how do we use AVAudioSession together with the RemoteIO audio unit to get live microphone data, and process or read it on the fly in a mic-based application?
From what I understand, AVAudioRecorder only records data to a local file.
I want to get access to the audio buffer on the fly and process it live in the app.
I haven't managed to find any sample code that uses the RemoteIO audio unit with the AVAudioSession API.
Could anyone please point me towards the right resources?
Don't confuse the Audio Session with the APIs that actually allow you to work with the audio data itself. To use our terminology, the Audio Session simply "sets the audio context for your app." It doesn't actually 'play' or 'record' anything.
The C Audio Session APIs are indeed deprecated, but all of the functionality (and more) has been moved into AVAudioSession(.h), and the transition is quite trivial. AVAudioSession has been available for a long time, and over two years ago we told folks to slowly transition away from the C API, using it only when something wasn't yet available in AVAudioSession. With iOS 7, that transition period is now over.
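To illustrate how trivial the transition is, here's a minimal sketch of the session setup a record-capable app needs, written purely against AVAudioSession. The category, buffer duration, and logging shown are just illustrative choices for this example, not requirements:

#import <AVFoundation/AVFoundation.h>

// Replaces the deprecated C calls: AudioSessionInitialize(),
// AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, ...),
// and AudioSessionSetActive(true).
static void SetUpAudioSession(void)
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // Equivalent of the kAudioSessionCategory_PlayAndRecord category.
    if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error])
        NSLog(@"setCategory failed: %@", error);

    // Equivalent of kAudioSessionProperty_PreferredHardwareIOBufferDuration.
    if (![session setPreferredIOBufferDuration:0.005 error:&error])
        NSLog(@"setPreferredIOBufferDuration failed: %@", error);

    // Equivalent of AudioSessionSetActive(true).
    if (![session setActive:YES error:&error])
        NSLog(@"setActive failed: %@", error);
}

Remember, this only sets the context; you'd still pair it with an AVAudioRecorder, AVAudioPlayer, or an audio unit such as the RemoteIO to actually move audio.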
Regarding the RemoteIO: as Daniel mentioned, the aurioTouch2 sample is a place to start. Unfortunately it has lagged behind in being updated, so yes, it still uses the C Audio Session APIs. But if you're trying to learn how to use the RemoteIO, the Audio Session isn't really directly related, so you can ignore the deprecation warnings, check out how the RIO is used, and then take that knowledge and write your own app using AVAudioSession.
A few things to note about aurioTouch2:
1) It's way more complicated than it needs to be, due to some older FFT code, extra audio format conversions, and not yet having been updated for iOS 7. We know, and we'll fix this in the next completely new version.
2) It performs input by calling AudioUnitRender for the input bus from within the output render proc; it doesn't actually use an input proc. So, in terms of the question being asked here, folks wanting to do *just* input need to change the way the unit is set up from what is shown in the sample (see the sketch after this list). Code for this is readily available in the community with a bit of searching.
3) By just commenting out a bit of code in the render proc, you can completely bypass all the extra conversion, FFT, and OpenGL stuff, turning the sample into a simple "thru" box that lets even a beginner experiment with the RIO.
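For point 2, here's a rough sketch of what an input-only RemoteIO setup looks like: enable input on element 1, disable output on element 0, install an input proc, and pull the captured samples with AudioUnitRender from inside that proc. Treat it as a starting point rather than production code; the 16-bit mono 44.1 kHz stream format is just an assumption for the example, and most error checking is stripped for brevity:

#import <AudioUnit/AudioUnit.h>

static AudioUnit gRIOUnit;

// Input proc: fired when the mic has fresh samples. We pull them with
// AudioUnitRender and can then process live mic data on the fly.
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    // ioData is NULL for input procs, so supply our own buffer list.
    // Leaving mData NULL lets the unit provide the buffer for us.
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufList.mBuffers[0].mData = NULL;

    OSStatus err = AudioUnitRender(gRIOUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, &bufList);
    if (err == noErr) {
        SInt16 *samples = (SInt16 *)bufList.mBuffers[0].mData;
        // ... process inNumberFrames samples of live mic data here ...
    }
    return err;
}

static void SetUpInputOnlyRIO(void)
{
    // Find and instantiate the RemoteIO unit.
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstanceNew(comp, &gRIOUnit);

    // Enable input on element 1, disable output on element 0
    // (the opposite of the aurioTouch2 play-thru configuration).
    UInt32 one = 1, zero = 0;
    AudioUnitSetProperty(gRIOUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));
    AudioUnitSetProperty(gRIOUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &zero, sizeof(zero));

    // The format we want delivered to the app: 16-bit mono PCM
    // (44.1 kHz is assumed here for illustration).
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;
    AudioUnitSetProperty(gRIOUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));

    // Install an input proc instead of a render proc.
    AURenderCallbackStruct cb = { InputCallback, NULL };
    AudioUnitSetProperty(gRIOUnit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &cb, sizeof(cb));

    AudioUnitInitialize(gRIOUnit);
    AudioOutputUnitStart(gRIOUnit);
}

The same AudioUnitRender call, made from the output render proc instead (as aurioTouch2 does), is what turns the unit into the "thru" box described in point 3.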
Back to AVAudioSession: the API Reference does list some other samples that have been updated to use AVAudioSession, if you want some example usage of this object.
And I've updated pretty much all of our Q&As that used to mention the old C API so that they now discuss AVAudioSession, with some expanded information added. Just search for AVAudioSession in the iOS reference library.
Finally, don't forget the WWDC videos, where the Audio Session has been covered in depth over the last few years (even if some of those older sessions talk about the C APIs, the concepts are important and still useful). Any suggestions for content that would be helpful to add or update in the Audio Session Programming Guide should be filed as bugs <bugreport.apple.com> for the Documentation folks to sort out.
Hope some of this is helpful,
edward