Questions on An NSOpenGL Application Design
- Subject: Questions on An NSOpenGL Application Design
- From: "Carmen Cerino Jr." <email@hidden>
- Date: Mon, 29 Sep 2008 22:03:13 -0400
When my application starts up, the user is presented with a settings
window. It contains a view that will be attached to a web camera, and
some widgets to control various filter settings. Once the settings are
tweaked to the user's liking, the window will be closed, but the
camera will still be processing images. In addition to standard
CIFilters, I will also need to read the pixels back in from VRAM to
perform an analysis on the CPU that I have yet to transform into a
CIFilter. The way I plan on designing this application is to have an
NSOpenGLView subclass to display my camera feed, and another class to
control the camera and all of the image processing.
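A minimal sketch of the two-class split described above (all names here are hypothetical, and the capture class is only one possibility for driving the camera):

```objc
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>
#import <QTKit/QTKit.h>

// Displays the current camera frame; knows nothing about capture.
@interface CameraGLView : NSOpenGLView {
    CIImage *currentFrame;
}
- (void)setCurrentFrame:(CIImage *)frame;  // triggers a redraw
@end

// Owns the capture session and all Core Image processing.
@interface CameraController : NSObject {
    IBOutlet CameraGLView *glView;
    CIContext *ciContext;          // could share the view's GL context
    QTCaptureSession *session;     // assumes QTKit-based capture
}
@end
```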
Questions:
1. Should I stick with this design path? Some of the sample code I
have seen puts everything I have split across two classes into the
NSOpenGLView subclass itself, e.g. CIVideoDemoGL.
2. If I leave the code in two separate files, do I need two
OpenGL contexts, one for the view and one to link to a CIContext for
the image filters, or can I just use the one from the NSOpenGLView?
3. When I bring the images back in from the GPU, they will already be
rendered with the CIFilters, so is it worth pushing them back out
to OpenGL for drawing?
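For what it's worth, the approach I was considering for questions 2 and 3 looks roughly like this: build the CIContext on the view's existing OpenGL context rather than creating a second one, and use Core Image's bitmap rendering for the CPU readback. This is only a sketch under those assumptions; `glView`, `ciContext`, and `filteredImage` are placeholders for my own objects:

```objc
// Share the NSOpenGLView's context with Core Image, so filters
// render into the same GL state as the view's drawing.
CGLContextObj cglCtx =
    (CGLContextObj)[[glView openGLContext] CGLContextObj];
CGLPixelFormatObj cglPix =
    (CGLPixelFormatObj)[[glView pixelFormat] CGLPixelFormatObj];
ciContext = [[CIContext contextWithCGLContext:cglCtx
                                  pixelFormat:cglPix
                                      options:nil] retain];

// Read the filtered pixels back for CPU analysis by rendering the
// CIImage into a local buffer instead of (or as well as) the view.
CGRect extent = [filteredImage extent];
size_t rowBytes = (size_t)extent.size.width * 4;  // 4 bytes/pixel
void *bitmap = malloc(rowBytes * (size_t)extent.size.height);
[ciContext render:filteredImage
         toBitmap:bitmap
         rowBytes:rowBytes
           bounds:extent
           format:kCIFormatARGB8
       colorSpace:NULL];
// ... run the CPU-side analysis on `bitmap`, then free(bitmap).
```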
_______________________________________________
Cocoa-dev mailing list (email@hidden)