Re: AVFoundation and Core Audio integration for capturing
- Subject: Re: AVFoundation and Core Audio integration for capturing
- From: Orestis Markou <email@hidden>
- Date: Sun, 05 Aug 2012 14:01:08 +0300
For anyone interested in this: the problem described below was caused by insufficient USB isochronous bandwidth. Thankfully, a console message pointed this out. Plugging my sound card into a different USB port resolved the issue (it looks like the original port shared its USB bus with the iSight).
I'd be very interested in hearing whether there are other ways to mix Core Audio and AVFoundation recordings. My proposed method seems to work, but it feels like unnecessary post-processing.
On 4 Aug 2012, at 11:01 PM, Orestis Markou <email@hidden> wrote:
> After some testing, I may have found a solution for syncing the two files: use the first mHostTime in the render notify callback for audio, and the "com.apple.cmio.buffer_attachment.hosttime" attachment value in the first CMSampleBufferRef of the AVCaptureFileOutput delegate callback.
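
In sketch form, that approach might look like the following (a simplified sketch, not lifted from my actual project; the globals, names, and the assumption that the attachment is a CFNumber holding a UInt64 are illustrative):

#include <AudioToolbox/AudioToolbox.h>
#include <CoreAudio/HostTime.h>
#include <CoreMedia/CoreMedia.h>

static UInt64 gFirstAudioHostTime = 0;  // first render host time
static UInt64 gFirstVideoHostTime = 0;  // first video buffer host time

// Installed with AudioUnitAddRenderNotify() on the graph's output unit;
// grabs mHostTime from the first pre-render notification.
static OSStatus SyncRenderNotify(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    if ((*ioActionFlags & kAudioUnitRenderAction_PreRender) &&
        gFirstAudioHostTime == 0 &&
        (inTimeStamp->mFlags & kAudioTimeStampHostTimeValid)) {
        gFirstAudioHostTime = inTimeStamp->mHostTime;
    }
    return noErr;
}

// Called with the first CMSampleBufferRef that the AVCaptureFileOutput
// delegate hands us.
static void NoteFirstVideoBuffer(CMSampleBufferRef buf)
{
    if (gFirstVideoHostTime != 0) return;
    CFNumberRef n = (CFNumberRef)CMGetAttachment(
        buf, CFSTR("com.apple.cmio.buffer_attachment.hosttime"), NULL);
    if (n != NULL)
        CFNumberGetValue(n, kCFNumberSInt64Type, &gFirstVideoHostTime);
}

// The signed offset, in nanoseconds, to shift one file by when muxing.
static SInt64 AVOffsetNanos(void)
{
    return (SInt64)AudioConvertHostTimeToNanos(gFirstVideoHostTime)
         - (SInt64)AudioConvertHostTimeToNanos(gFirstAudioHostTime);
}
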
>
> However, I have stumbled upon a show-stopper: running an AVCaptureSession (even with just the camera, no microphone input/connection) effectively disables my AUGraph. Is this a known limitation, or should I file a bug?
>
> I'm using an external Focusrite Saffire 6 USB sound card and the built-in MacBook Pro iSight camera, on Mac OS X 10.8.
>
> Any help on this would be greatly appreciated.
>
>
> On 4 Aug 2012, at 2:52 PM, Orestis Markou <email@hidden> wrote:
>
>> Hello,
>>
>> I have the following use case, for Mac OS X 10.8:
>>
>> 1. An audio unit graph that mixes live audio input with file-based audio and applies real-time Audio Unit effects. Not very different from a karaoke backing track that you sing along to via a microphone (a rough sketch follows the list).
>>
>> 2. A video camera that records video of the person singing.
>>
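
For context, the graph in item 1 has roughly the following shape (a simplified sketch: the MatrixReverb effect subtype is just a stand-in, error checking is dropped, and the live microphone input path via AUHAL is omitted):

#include <AudioToolbox/AudioToolbox.h>

static AUGraph BuildGraph(void)
{
    AUGraph graph = NULL;
    AUNode playerNode, effectNode, outputNode;

    AudioComponentDescription desc = {0};
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    NewAUGraph(&graph);

    // File-based audio (the backing track).
    desc.componentType    = kAudioUnitType_Generator;
    desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    AUGraphAddNode(graph, &desc, &playerNode);

    // A real-time effect; MatrixReverb stands in for whatever is used.
    desc.componentType    = kAudioUnitType_Effect;
    desc.componentSubType = kAudioUnitSubType_MatrixReverb;
    AUGraphAddNode(graph, &desc, &effectNode);

    // Default output device.
    desc.componentType    = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_DefaultOutput;
    AUGraphAddNode(graph, &desc, &outputNode);

    AUGraphConnectNodeInput(graph, playerNode, 0, effectNode, 0);
    AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0);

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
    return graph;
}
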
>> I need to produce a video file that combines the video and the audio. I have managed to write them to individual files using an AUGraph render notification with the ExtAudioFile APIs, and an AVCaptureSession. I don't need better than 30 ms accuracy, but it needs to be somewhat guaranteed, i.e., no occasional 300 ms offsets because of timing issues.
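
The audio file writing is roughly this (a simplified sketch; gOutputFile is assumed to already be created with ExtAudioFileCreateWithURL and given the graph's stream format via kExtAudioFileProperty_ClientDataFormat):

#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef gOutputFile = NULL;

// Installed with AudioUnitAddRenderNotify() on the graph's output unit;
// post-render, ioData holds the mixed output for this render slice.
static OSStatus FileWriteNotify(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender)
        ExtAudioFileWriteAsync(gOutputFile, inNumberFrames, ioData);
    return noErr;
}

// Setup, once the graph is initialized:
//   AudioUnit outputUnit;
//   AUGraphNodeInfo(graph, outputNode, NULL, &outputUnit);
//   AudioUnitAddRenderNotify(outputUnit, FileWriteNotify, NULL);
//   ExtAudioFileWriteAsync(gOutputFile, 0, NULL);  // prime the async writer
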
>>
>> First question: Can I record the tracks together through a combined API, perhaps by routing the AUGraph's output into the AVCaptureSession?
>>
>> Second question: If there isn't a way to record those tracks together, can I somehow add timing information to them so that I can sync them up manually?
>>
>> I saw various new APIs added to AVFoundation in 10.8, but they seem to be playback-only (e.g. the master clock).
>>
>> If this isn't the correct mailing list to post this, please direct me to an appropriate one.
>>
>> Thanks,
>> Orestis
>