Re: Protocol for processing audio data from input and writing to file
- Subject: Re: Protocol for processing audio data from input and writing to file
- From: Joey Green <email@hidden>
- Date: Wed, 30 Jun 2010 08:19:30 -0500
Quick question: if I want to do audio effects, do I need to take my CAF audio file and run it through an audio unit (probably a RemoteIO, doing the effect processing in the callback and playing the result out the speaker)? Or can I do it with an Audio Queue somehow? I haven't seen any tutorials where you process the buffers of an Audio Queue. I figure you would do it in the buffer-is-full callback, maybe.
On Tue, Jun 22, 2010 at 6:43 PM, William Stewart
<email@hidden> wrote:
Ah... well, there is but you would use an AudioQueue to do this.
Basically you create an audio queue for input, some buffers, etc., then start it. You start metering, and as the queue feeds you buffers, ignore them (don't write them to a file). When you cross the threshold, start writing them to the file. Keep metering; when you are done, stop the queue, close the file, etc.
The example (aqrecord) shows you a pretty clean way to do this (but it's a lot more code than AVAudioRecorder)... You could file a feature request that we fold something like this into AVAudioRecorder (kind of: start it, but don't write to the file).
On Jun 22, 2010, at 11:56 AM, Joey Green wrote:
I implemented a solution last night using AVAudioRecorder and Player like you suggested, but I didn't mention one part of what I'm trying to do which I'll add here.
I'm trying to do something similar to the Talking Carl iPhone app: the user talks into the microphone, the app processes the recording, and then plays it back out the speaker. So what I forgot to mention is the when-to-start and when-to-stop-recording part of this application. The way I'm figuring that out is through the decibel level: if it's, say, above -20 dB I start recording, and once it goes below that I stop. The only way (that I know of) to capture the decibel level with AVAudioRecorder is its metering ability, which I used. The problem is that you need to be recording to get the metering level. So I have some hacked-up code where recording starts immediately when the app launches, and once the level crosses the threshold I delete the previous recording and start a new one that the app will actually use.
This works, but it's kind of hacky. I'm sure there is a better way for what I'm wanting to do.
On Mon, Jun 21, 2010 at 5:07 PM, William Stewart
<email@hidden> wrote:
If you aren't doing a real-time process, then I would use AVAudioRecorder and Player
On Jun 21, 2010, at 2:49 PM, Joey Green wrote:
> I'm wanting to take in audio data from mic, add some effect to the audio data and then spit the processed audio data back out to the speakers. I have thought of a couple ways to do this, but I'm not sure if any would work.
>
> Also, I think my processing will be too slow to run on the fly on either the input or output side.
>
> Scenario one
> 1. Read input from mic with remoteio AU
> 2. add my effect to the audio data
> 3. save the audio data in a buffer
> 4. once I'm done, read back my buffer and run it through another remoteIO to play through the speaker
>
>
> Scenario two
> 1. Read input from mic using audio queue
> 2. once I'm done, run the audio queue through an effect AU and save it out to another buffer
> 3. once done run that buffer through remoteIO AU or an audio queue
>
>
> Am I thinking of this correctly? Should I be trying something else?
> _______________________________________________
> Do not post admin requests to the list. They will be ignored.
> Coreaudio-api mailing list (email@hidden)
> Help/Unsubscribe/Update your Subscription:
> This email sent to email@hidden