
Re: CV tutorials



Hi Steve,

I found a good, clean example of grabbing frames from CoreVideo and
sending them to OpenGL using PBOs. The project is currently designed
to show how to integrate CoreVideo with OpenCV, but the OpenCV part is
just another step in the pipeline. Maybe you'll find it helpful.

CVOCV
http://blog.buzamoto.com/2008/12/26/cvocv/
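
For reference, the core of the PBO streaming pattern looks roughly like
this. This is my own sketch rather than the project's actual code, and it
assumes a BGRA pixel buffer plus a texture (texID) and pixel unpack
buffer (pboID) created up front:

    // Stream a CVPixelBufferRef into a GL texture through a PBO.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    size_t width    = CVPixelBufferGetWidth(pixelBuffer);
    size_t height   = CVPixelBufferGetHeight(pixelBuffer);
    size_t rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pboID);
    // Orphan the previous storage so the copy doesn't stall the GPU.
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, rowBytes * height, NULL,
                 GL_STREAM_DRAW_ARB);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
    memcpy(dst, CVPixelBufferGetBaseAddress(pixelBuffer), rowBytes * height);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

    // With a PBO bound, the last argument is an offset into the PBO.
    glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(rowBytes / 4));
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID);
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0,
                    (GLsizei)width, (GLsizei)height,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);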


On Fri, Feb 20, 2009 at 5:28 PM, Steve Wart <email@hidden> wrote:
> Sorry, please disregard my previous post. I finally realized that I've been
> going about this all wrong.
>
> First, the display link is completely unnecessary as I am taking frames from
> the capture device, not from a movie source file. It's coming back null
> because it has nothing to give me. I need to remove all references to the
> display link from my project.
>
> Second, I've been confused about why my texture is failing to bind. I'm
> handing it a CVImageBufferRef, and since QTVisualContextCopyImageForTime
> in the movie case also returns a CVImageBufferRef, I assumed the two were
> interchangeable. Looking more closely at the QTKit frame capture example,
> the image buffer is converted into an image as follows:
>
>         NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:
>             [CIImage imageWithCVImageBuffer:imageBuffer]];
>         NSImage *image = [[[NSImage alloc]
>             initWithSize:[imageRep size]] autorelease];
>         [image addRepresentation:imageRep];
>
> Presumably if I just want to create a texture with this, I can use
> +[CIImage imageWithCVImageBuffer:] and forget about creating an NSImage.
> This is actually really cool, since it will let me process the image with
> Core Image instead of the deprecated QuickDraw code I've been puzzling
> over. I need to do some chroma key processing that sets the alpha value
> based on a range of HSV values. Does anyone have suggestions on how to do
> this?
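>
> My current thinking is to try a custom CIKernel along these lines. This
> is untested, the thresholds are placeholders for a green key, there is
> no hue wraparound handling, and the RGB-to-HSV conversion is the usual
> branchless trick:
>
>     static NSString * const kHSVKeySrc = @""
>         "kernel vec4 hsvKey(sampler src, float hMin, float hMax,\n"
>         "                   float sMin, float vMin)\n"
>         "{\n"
>         "    vec4 p = unpremultiply(sample(src, samplerCoord(src)));\n"
>         "    // branchless RGB -> HSV, all components in 0..1\n"
>         "    vec4 K = vec4(0.0, -1.0/3.0, 2.0/3.0, -1.0);\n"
>         "    vec4 q = mix(vec4(p.bg, K.wz), vec4(p.gb, K.xy), step(p.b, p.g));\n"
>         "    vec4 r = mix(vec4(q.xyw, p.r), vec4(p.r, q.yzx), step(q.x, p.r));\n"
>         "    float d = r.x - min(r.w, r.y);\n"
>         "    float e = 1.0e-10;\n"
>         "    vec3 hsv = vec3(abs(r.z + (r.w - r.y) / (6.0*d + e)),\n"
>         "                    d / (r.x + e), r.x);\n"
>         "    // 1.0 inside the key range, 0.0 outside\n"
>         "    float keyed = step(hMin, hsv.x) * step(hsv.x, hMax)\n"
>         "                * step(sMin, hsv.y) * step(vMin, hsv.z);\n"
>         "    return premultiply(vec4(p.rgb, p.a * (1.0 - keyed)));\n"
>         "}";
>
>     // Inside a CIFilter subclass's -outputImage, 10.4-style apply:
>     static CIKernel *kernel = nil;
>     if (kernel == nil)
>         kernel = [[[CIKernel kernelsWithString:kHSVKeySrc]
>                       objectAtIndex:0] retain];
>     CISampler *src = [CISampler samplerWithImage:inputImage];
>     return [self apply:kernel, src,
>         [NSNumber numberWithFloat:0.25f],   // hMin (placeholder)
>         [NSNumber numberWithFloat:0.45f],   // hMax
>         [NSNumber numberWithFloat:0.3f],    // sMin
>         [NSNumber numberWithFloat:0.2f],    // vMin
>         kCIApplyOptionDefinition, [src definition], nil];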
>
> Please let me know if you have any pointers about managing these buffers
> efficiently. In particular, I want to put as much load onto the GPUs as
> possible (we will probably have two when we start shooting), and we have
> 8 cores to play with.
>
> Cheers,
> Steve
>
> On Thu, Feb 19, 2009 at 8:23 PM, Steve Wart <email@hidden> wrote:
>>
>> Thanks vade. I set things up using the QTCaptureVideoPreviewOutput and
>> tried to use the display link to obtain the video data, but I'm a bit
>> puzzled about using QTVisualContextCopyImageForTime to grab my image buffer.
>>
>> The problem is that QTVisualContextIsNewImageAvailable returns false
>> and QTVisualContextCopyImageForTime always hands back a NULL image. Can
>> you explain how to use
>> a CVTimeStamp to capture live video?
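>>
>> In case it helps, this is roughly the shape of what I have (paraphrased,
>> not my exact code). The visual context came from
>> QTOpenGLTextureContextCreate, and the callback's context pointer is that
>> QTVisualContextRef:
>>
>>     static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef link,
>>         const CVTimeStamp *inNow, const CVTimeStamp *inOutputTime,
>>         CVOptionFlags flagsIn, CVOptionFlags *flagsOut, void *context)
>>     {
>>         QTVisualContextRef vc = (QTVisualContextRef)context;
>>         if (QTVisualContextIsNewImageAvailable(vc, inOutputTime)) {
>>             // never gets here -- this is always false for me
>>             CVImageBufferRef image = NULL;
>>             if (QTVisualContextCopyImageForTime(vc, NULL, inOutputTime,
>>                                                 &image) == noErr && image) {
>>                 // ... bind and draw ...
>>                 CVBufferRelease(image);
>>             }
>>             QTVisualContextTask(vc);
>>         }
>>         return kCVReturnSuccess;
>>     }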
>>
>> The video output callback gives us the image without needing a timecode,
>> but it's still not in a format I can bind a texture to. A colleague gave me
>> some code yesterday evening that uses the QuickTime video digitizer
>> components, but it's also using deprecated QuickDraw functions to process
>> the image prior to making a texture (but I guess it would be no big deal to
>> migrate that).
>>
>> Is this something I need to be using the VD library for?
>>
>> Steve
>>
>> On Wed, Feb 18, 2009 at 5:55 PM, vade <email@hidden> wrote:
>>>
>>> Your best and easiest bet is to use the QTCaptureVideoPreviewOutput
>>> object rather than the DecompressedVideoOutput object. Set up a visual
>>> context via QTOpenGLTextureContextCreate() that is shared with your
>>> application's main OpenGL context, then attach it to the
>>> QTCaptureVideoPreviewOutput via its setVisualContext: method. You can
>>> then use QTVisualContextCopyImageForTime and friends in your timer or
>>> CVDisplayLink callback to grab a frame from the preview output as a
>>> CVOpenGLTextureRef, and pass it to GL nice and easy using the
>>> CVOpenGLTexture functions to get the texture ID, target, and clean
>>> (flip-corrected) coordinates. Or, well, in theory :)
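>>>
>>> Something like this, untested and minus error checking (assumes an
>>> NSOpenGLView subclass, with previewOutput being your
>>> QTCaptureVideoPreviewOutput):
>>>
>>>     // One-time setup: a QT texture context shared with the view's
>>>     // OpenGL context, attached to the preview output.
>>>     QTVisualContextRef textureContext = NULL;
>>>     QTOpenGLTextureContextCreate(kCFAllocatorDefault,
>>>         (CGLContextObj)[[self openGLContext] CGLContextObj],
>>>         (CGLPixelFormatObj)[[self pixelFormat] CGLPixelFormatObj],
>>>         NULL, &textureContext);
>>>     [previewOutput setVisualContext:textureContext];
>>>
>>>     // Per frame, in your timer or display link callback.
>>>     // Passing NULL for the timestamp means "the current time".
>>>     CVOpenGLTextureRef frame = NULL;
>>>     if (QTVisualContextIsNewImageAvailable(textureContext, NULL) &&
>>>         QTVisualContextCopyImageForTime(textureContext, NULL, NULL,
>>>                                         &frame) == noErr)
>>>     {
>>>         GLfloat ll[2], lr[2], ur[2], ul[2];
>>>         CVOpenGLTextureGetCleanTexCoords(frame, ll, lr, ur, ul);
>>>         glBindTexture(CVOpenGLTextureGetTarget(frame),
>>>                       CVOpenGLTextureGetName(frame));
>>>         // ... draw a quad using those (flip-corrected) coords ...
>>>         CVOpenGLTextureRelease(frame);
>>>         QTVisualContextTask(textureContext); // required housekeeping
>>>     }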
>>>
>>>
>>> On Feb 18, 2009, at 7:37 PM, Steve Wart wrote:
>>>
>>>> I'm attempting to combine some of the ideas in QTCoreVideo102..202
>>>> series and the Still Motion Capture application in the QTKit Capture
>>>> Programming Guide.
>>>>
>>>> Essentially I want to capture live video from a camera and use it as a
>>>> texture in OpenGL.
>>>>
>>>> Instead of using a timer to capture the frame from the display link
>>>> callback, I've got a thread that grabs a CVImageBufferRef from the
>>>> QTCaptureDecompressedVideoOutput object 30 times a second. It successfully
>>>> stores this into my delegate, but when I try to turn it into a texture,
>>>> CVOpenGLTextureGetTarget() returns a target of 0, causing glBindTexture() to
>>>> fail with GL_INVALID_ENUM.
>>>>
>>>> Do I need to do some additional processing on the captured image in
>>>> order to use it as an OpenGL texture?
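>>>>
>>>> I'm starting to wonder whether the buffer is even the type I think it
>>>> is. Something like this (untested) should tell me:
>>>>
>>>>     // Check what the capture output actually handed us.
>>>>     if (CFGetTypeID(imageBuffer) == CVOpenGLTextureGetTypeID()) {
>>>>         // Already a GL texture; the CVOpenGLTexture calls are valid.
>>>>         glBindTexture(CVOpenGLTextureGetTarget(imageBuffer),
>>>>                       CVOpenGLTextureGetName(imageBuffer));
>>>>     } else if (CFGetTypeID(imageBuffer) == CVPixelBufferGetTypeID()) {
>>>>         // Just a pixel buffer; the pixels have to be uploaded to GL
>>>>         // by hand (glTexSubImage2D, PBOs, etc.).
>>>>     }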
>>>>
>>>> Steve
>>>
>>
>
>
 _______________________________________________
Do not post admin requests to the list. They will be ignored.
QuickTime-API mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden
