Re: How to capture a video stream
- Subject: Re: How to capture a video stream
- From: Quincey Morris <email@hidden>
- Date: Wed, 05 Dec 2012 21:02:16 -0800
On Dec 5, 2012, at 19:53 , email@hidden wrote:
> Do I need to use CoreMedia to actually get an image from the sampleBuffer?
> Using Xcode, it appears that UIImage is for iOS (this is just a
> supposition). So Xcode changes the code to CIImage.
>
> If this is the wrong direction, please point me in the right direction.
>
> The AVFoundation examples seem to all be based on iOS and not Mac for the
> media streaming. Therefore those examples don't work for what I am trying
> to do.
You're correct that the Mac version of the docs hasn't been updated from the iOS version, but luckily there is (I think) very little difference you need to take into account.
Assuming you have by now got your data capture delegate set up properly, you need to have it do the equivalent of this:
https://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/05_MediaRepresentations.html
under the final heading "Converting a CMSampleBuffer to a UIImage". In that sample code, you should only have to change one line from its iOS version:
UIImage *image = [UIImage imageWithCGImage:cgImage];
to the corresponding Mac version:
NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
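(NSImage has no imageWithCGImage: class method, so you go through initWithCGImage:size:; passing NSZeroSize tells it to use the CGImage's own pixel dimensions.) In case it helps, here's a rough sketch of what the whole delegate helper might look like on the Mac side, following the pattern in that document. The method name imageFromSampleBuffer: is just illustrative, and it assumes you've configured the AVCaptureVideoDataOutput for kCVPixelFormatType_32BGRA; you'll also need CoreMedia and CoreVideo linked in:

- (NSImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get the pixel buffer that backs this sample buffer.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Draw the BGRA pixels into a bitmap context and snapshot it as a CGImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
        colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // The one Mac-specific step: wrap the CGImage in an NSImage.
    NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
    CGImageRelease(cgImage);
    return image;
}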
What happens next depends on what you want to do with the image. If you want to display it, you can simply draw this NSImage object. If you want to write each frame to a separate file, you could use an NSImage method to get the TIFF representation as an NSData object and write that out. If you need to convert the image to a specific format, you will likely have to start creating NSImageRep objects (or a subclass like NSBitmapImageRep), but the details will depend on where you're trying to get to.
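For example, the TIFF route could be as short as this (the paths are just placeholders):

NSData *tiffData = [image TIFFRepresentation];
[tiffData writeToFile:@"/tmp/frame-0001.tiff" atomically:YES];

// Or, for a specific format such as PNG, go through NSBitmapImageRep:
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:tiffData];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:@"/tmp/frame-0001.png" atomically:YES];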
Note that in some scenarios, you could choose to work with the cgImage directly, rather than creating an NSImage (which is, loosely, just going to function as a wrapper around the CGImage, at least to begin with). The advantage of using NSImage is that it usually takes fewer lines of code than CGImage (and it's an Objective-C rather than a C API).
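For what it's worth, if you do stay with the CGImage you can hand it straight to ImageIO and never create an NSImage at all -- something along these lines (the path is a placeholder, "public.png" is the PNG UTI, and you'd add ImageIO.framework):

NSURL *url = [NSURL fileURLWithPath:@"/tmp/frame-0001.png"];
// Use a plain (CFURLRef) cast instead of __bridge if you're not using ARC.
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, CFSTR("public.png"), 1, NULL);
CGImageDestinationAddImage(dest, cgImage, NULL);
CGImageDestinationFinalize(dest);
CFRelease(dest);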
P.S. The intermediate code uses CVPixelBuffer and CGImage objects, not CIImage objects. CIImage objects are something else entirely -- they're image containers to which image filters can be applied -- that is, they're essentially image transformations. If, for example, you wanted to apply a Gaussian blur to every frame, you could use CIImage objects to do this.
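A quick sketch of that, in case it's useful (the filter name and inputRadius key are the standard Core Image ones; the 5.0 is arbitrary):

CIImage *inputImage = [CIImage imageWithCGImage:cgImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:@5.0 forKey:@"inputRadius"];
CIImage *blurredImage = [blur valueForKey:kCIOutputImageKey];
// blurredImage is just a recipe; nothing is rendered until you draw it
// or push it through a CIContext.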