Re: Help manipulate pixel data with Core Image and Core Video etc



  • Subject: Re: Help manipulate pixel data with Core Image and Core Video etc
  • From: Scott Thompson <email@hidden>
  • Date: Sat, 28 May 2005 09:01:09 -0500


On May 27, 2005, at 10:17 PM, Erik Buck wrote:

Here is my situation:

I have a pre-Tiger framework of classes that handle QuickTime image and movie import and dynamic conversion to OpenGL textures. This framework basically allows loading any image format supported by QuickTime and using it as a texture. It also allows playing QuickTime movies as dynamic textures using glTexSubImage2D() and friends.

I think the new Core Image and Core Video capabilities supplant my pre-Tiger framework and add vastly more features with vastly less code for me to maintain. Code that uses pre-Tiger QuickTime is really ugly too.

However, I have reached my first stumbling block. I have applications that load color images, convert the images to shades of gray, and then read the pixel data directly to create elevation maps. I would like to transition to using the new Core* frameworks instead of my own. I have even seen references in the CIImage documentation about elevation maps, but I cannot figure out how to perform the same operations with Core Image.

What is the trick to doing the following with Core Image et al.?
1) Load an arbitrary image in arbitrary format from a URL or path
2) Convert the image to elevation map (an array of float values where each float value represents the brightness (from 0.0 to 1.0) of one pixel in the image)


If either step requires multiple sub-steps, that is fine.

Thanks in advance for any pointers to information :)

I don't know that Core Image supports what you are trying to do directly. As a general rule of thumb, Core Image is helpful for running calculations and displaying them on the screen, but it is not helpful as a generalized toolkit for manipulating image data. This is because Core Image likes to ship as much of the computation as it can off to a graphics card and, for the most part, getting the results of those calculations back into main memory is "slow".


My feeling, therefore, is that you will probably find it easiest to use Core Graphics (i.e. Quartz 2D), Image I/O, and maybe vImage to solve your problem:

Loading an arbitrary image with Image I/O is almost painfully simple. You create an image source (e.g. with CGImageSourceCreateWithURL) then ask it for the particular CGImage at an index in that image source (e.g. with CGImageSourceCreateImageAtIndex).
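As a sketch of those two calls (the function name LoadImageAtURL is mine, and this assumes Tiger, where Image I/O is reached through the ApplicationServices umbrella framework):

```
// Sketch only: assumes macOS/Tiger with ApplicationServices available.
#include <ApplicationServices/ApplicationServices.h>

CGImageRef LoadImageAtURL(CFURLRef url) {
    // Create an image source from the URL; NULL means default options
    CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
    if (source == NULL)
        return NULL;
    // Most still-image files carry a single image, at index 0
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFRelease(source);
    return image; // caller releases with CGImageRelease
}
```

Image I/O picks the right importer from the file's contents, so the same two calls cover JPEG, TIFF, PNG, and the rest of the formats the system knows about.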

Getting that data into an elevation map may be trickier. I'm not sure if Core Graphics supports drawing into grayscale bitmaps with floating point pixels. You would have to try creating a grayscale, floating point offscreen using CGBitmapContextCreate, telling it you want a grayscale color space (probably the generic grayscale space) and floating point pixels. If that succeeds, then the conversion involves little more than CGContextDrawImage. (While you're at it, send in a bug report asking that Technical QA 1037 be updated for Tiger.)
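A hedged sketch of that experiment (the function name is mine, and whether CGBitmapContextCreate accepts a 32-bit float grayscale combination is exactly the open question; if it returns NULL, the format isn't supported):

```
// Sketch: try a 32-bit-float grayscale offscreen; Core Graphics may
// refuse this combination, in which case ctx comes back NULL.
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

float *DrawToElevationMap(CGImageRef image, size_t w, size_t h) {
    CGColorSpaceRef gray = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
    float *pixels = calloc(w * h, sizeof(float));
    CGContextRef ctx = CGBitmapContextCreate(pixels, w, h,
        32,                      // bits per component (float)
        w * sizeof(float),       // bytes per row, one float per pixel
        gray,
        kCGBitmapFloatComponents);
    CGColorSpaceRelease(gray);
    if (ctx == NULL) { free(pixels); return NULL; }  // format unsupported
    // Drawing the color image into the grayscale context performs the
    // color-to-gray conversion for us.
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);
    CGContextRelease(ctx);
    return pixels; // one float per pixel, nominally 0.0 through 1.0
}
```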

If Core Graphics is unable to handle the transformation, then you might look at using vImage to handle the conversion. I believe the routine vImageConvert_ChunkyToPlanarF is just what you want (since chunky and planar in grayscale colorspaces are one and the same). In order to use it, you're still going to have to get hold of the CGImage's image data. That will involve creating a CGBitmapContext (as before, with the generic grayscale color space, but this time with 8 bits per pixel). You can then draw your color image into that grayscale context (CGContextDrawImage), and you will have a grayscale image buffer that you can use with vImage to create the floating point image.
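The final 8-bit-to-float step is simple enough to illustrate in portable C (the helper name GrayToElevation is hypothetical; this is just the per-pixel scaling that the vImage conversion performs for you, much faster, on a whole buffer):

```c
#include <stddef.h>

/* Illustration of the 8-bit -> float conversion: each grayscale byte
 * becomes a brightness value in [0.0, 1.0], which is exactly the
 * elevation-map representation described above. */
void GrayToElevation(const unsigned char *gray, float *elevation, size_t count)
{
    for (size_t i = 0; i < count; i++)
        elevation[i] = gray[i] / 255.0f;
}
```

In practice you would hand the grayscale buffer from the CGBitmapContext straight to vImage rather than loop by hand, but the arithmetic is the same.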

Scott
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Cocoa-dev mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


References: 
Help manipulate pixel data with Core Image and Core Video etc (From: Erik Buck <email@hidden>)
