Help manipulate pixel data with Core Image and Core Video etc
- Subject: Help manipulate pixel data with Core Image and Core Video etc
- From: Erik Buck <email@hidden>
- Date: Fri, 27 May 2005 23:17:07 -0400
Here is my situation:
I have a pre-Tiger framework of classes that handle QuickTime image
and movie import and dynamic conversion to OpenGL textures. This
framework basically allows loading any image format supported by
QuickTime and using it as a texture. It also allows playing
QuickTime movies as dynamic textures using glTexSubImage2D() and
friends.
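(For context, the per-frame texture update in that framework boils down to roughly the following. The function name, texture target, and pixel format here are just illustrative of what my code happens to do, and I am leaving out how the frame's pixels actually get pulled out of QuickTime.)

#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

// Per-frame update, simplified. The texture was created earlier with
// glTexImage2D() at frameWidth x frameHeight.
static void UpdateMovieTexture(GLuint textureName,
                               GLsizei frameWidth, GLsizei frameHeight,
                               const void *frameBaseAddress)
{
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, textureName);
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0,
                    0, 0,                        // replace the whole texture
                    frameWidth, frameHeight,
                    GL_BGRA,                     // layout of the source buffer
                    GL_UNSIGNED_INT_8_8_8_8_REV,
                    frameBaseAddress);           // the new frame's pixels
}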
I think the new Core Image and Core Video capabilities supplant my
pre-Tiger framework and add vastly more features with vastly less
code for me to maintain. Code that uses pre-Tiger QuickTime is
really ugly, too.
However, I have reached my first stumbling block. I have
applications that load color images, convert the images to shades of
gray, and then read the pixel data directly to create elevation maps.
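(The gray conversion and read-out themselves are simple once I have raw pixels; they amount to something like the following, where the RGBA byte order and the luminance weights are just what my code happens to use.)

#include <stdlib.h>

// Collapse 8-bit RGBA pixels into one brightness value per pixel,
// scaled to the 0.0 - 1.0 range. The caller must free() the result.
static float *ElevationMapFromRGBA(const unsigned char *rgba,
                                   size_t width, size_t height)
{
    float *elevations = malloc(width * height * sizeof(float));
    size_t i;

    if (NULL == elevations) return NULL;
    for (i = 0; i < width * height; i++) {
        const unsigned char *p = rgba + (4 * i);

        // Standard luminance weights, assuming the bytes are in R,G,B,A order.
        float gray = 0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2];
        elevations[i] = gray / 255.0f;
    }
    return elevations;
}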
I would like to transition to using the new Core* frameworks instead
of my own. I have even seen references in the CIImage documentation
to elevation maps, but I cannot figure out how to perform the same
operations with Core Image.
What is the trick to doing the following with Core Image et al.?
1) Load an arbitrary image in an arbitrary format from a URL or path
2) Convert the image to an elevation map (an array of float values,
one per pixel, where each value represents that pixel's brightness
from 0.0 to 1.0)
If either step requires multiple sub-steps, that is fine; a sketch of
what I have in mind is below.
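To be concrete, I assume step 1 starts from something like +imageWithContentsOfURL: (please correct me if that is the wrong entry point); it is step 2, getting from a CIImage to a plain array of floats, that I cannot see how to do:

#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h>

// Step 1, as far as I can tell (the path is just an example):
CIImage *image = [CIImage imageWithContentsOfURL:
    [NSURL fileURLWithPath:@"/path/to/terrain.png"]];

// Step 2 is where I am stuck: how do I get from this CIImage to a
// C array of floats, one brightness value (0.0 to 1.0) per pixel?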
Thanks in advance for any pointers to information :)