
Re: AVFoundation and Core Audio integration for capturing


  • Subject: Re: AVFoundation and Core Audio integration for capturing
  • From: Orestis Markou <email@hidden>
  • Date: Sat, 04 Aug 2012 23:01:38 +0300

After some testing, I may have found a solution for syncing the two files: use the first mHostTime seen in the audio graph's render notify callback, and the "com.apple.cmio.buffer_attachment.hosttime" attachment value on the first CMSampleBufferRef delivered to the AVCaptureFileOutput delegate callback.
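In code, the idea is roughly this (an untested sketch: firstAudioHostTime and firstVideoHostTime are illustrative globals, and the delegate method is the Mac-only AVCaptureFileOutputDelegate one):

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreAudio/HostTime.h>

static UInt64 firstAudioHostTime = 0; // illustrative globals
static UInt64 firstVideoHostTime = 0;

// Render notify installed on the graph's output unit via
// AudioUnitAddRenderNotify(); remembers the host time of the first buffer.
static OSStatus SyncNotify(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    if ((*ioActionFlags & kAudioUnitRenderAction_PreRender) &&
        firstAudioHostTime == 0 &&
        (inTimeStamp->mFlags & kAudioTimeStampHostTimeValid)) {
        firstAudioHostTime = inTimeStamp->mHostTime;
    }
    return noErr;
}

// In the AVCaptureFileOutputDelegate implementation (OS X only):
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (firstVideoHostTime == 0) {
        // Read the CMIO host time attachment from the first sample buffer.
        CFTypeRef hostTime = CMGetAttachment(sampleBuffer,
            CFSTR("com.apple.cmio.buffer_attachment.hosttime"), NULL);
        if (hostTime) {
            CFNumberGetValue((CFNumberRef)hostTime, kCFNumberSInt64Type,
                             &firstVideoHostTime);
        }
    }
    // The audio/video offset can then be derived with, e.g.,
    // AudioConvertHostTimeToNanos(firstVideoHostTime - firstAudioHostTime).
}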

However, I have stumbled upon a show-stopper: running an AVCaptureSession (even with just the camera, and no microphone input/connection) effectively disables my AUGraph. Is this a known limitation, or should I file a bug?

I'm using an external Focusrite Saffire 6 USB sound card and the built-in MacBook Pro iSight camera, on Mac OS X 10.8.

Any help on this would be greatly appreciated.


On 4 Aug 2012, at 2:52 p.m., Orestis Markou <email@hidden> wrote:

> Hello,
>
> I have the following use case, for Mac OS X 10.8:
>
> 1. An audio unit graph that mixes live audio input with file-based audio and applies real-time audio unit effects. Not very different from a karaoke backing track that you sing along to via a microphone.
>
> 2. A video camera that records video of the person singing.
>
> I need to produce a video file that combines the video and the audio. I have managed to write them to individual files using an AUGraph render notification with the ExtAudioFile APIs for the audio, and an AVCaptureSession for the video (a sketch of that recording callback follows below this quoted message). I don't need better than 30 ms accuracy, but it needs to be reasonably guaranteed, i.e., no occasional 300 ms offsets because of timing issues.
>
> First question: Can I record the tracks together from a combined API? Perhaps by routing the AUGraph into the AVCaptureSession.
>
> Second question: If there isn't a way to record those tracks together, can I somehow add timing information to them so that I can sync them up manually?
>
> I saw various new APIs added to AVFoundation for 10.8, but it seems they are playback only (e.g. the master clock).
>
> If this isn't the correct mailing list to post this, please direct me to an appropriate one.
>
> Thanks,
> Orestis
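
For anyone searching the archives, here is roughly what the render-notify/ExtAudioFile recording from point 1 above looks like (an untested sketch: extFile and outputUnit are illustrative names, and the file and its client data format are assumed to be set up elsewhere with ExtAudioFileCreateWithURL and kExtAudioFileProperty_ClientDataFormat):

#import <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef extFile; // created elsewhere, client format already set

// Render notify installed on the graph's output unit; hands every
// post-render buffer to the async writer so the render thread never blocks.
static OSStatus RecordNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileWriteAsync(extFile, inNumberFrames, ioData);
    }
    return noErr;
}

// Setup, after the graph is initialized:
//   ExtAudioFileWriteAsync(extFile, 0, NULL);  // prime the async writer
//   AudioUnitAddRenderNotify(outputUnit, RecordNotify, NULL);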



