NSImage *image = [[NSImage alloc] initWithSize:rect.size];
[image lockFocus];
// Draw the CGImage into the NSImage's focused drawing context.
CGContextDrawImage((CGContextRef)[[NSGraphicsContext currentContext]
    graphicsPort], NSRectToCGRect(rect), imageRef);
[image unlockFocus];
Looks good, but it's not exactly what I'd call a 'conversion'; it's more of a 'duplication'. As I understand it, the CGImage and the NSImage will not share the same pixel memory, so depending on your use case this may be a big problem.
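(On 10.6 and later there is also -initWithCGImage:size:, which at least avoids the explicit draw, though whether it copies the pixels or merely retains the CGImage is, again, not something the API promises. A minimal sketch:

// Assuming 10.6+; whether this copies the pixel data is an
// implementation detail of AppKit.
NSImage *wrapped = [[NSImage alloc] initWithCGImage:imageRef
                                               size:rect.size];
)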
Actually, that concept doesn't really mesh with the idea of a CGImage to begin with. A CGImage represents a rectangular array of pixels, but there is no reason to expect it to have an explicit memory buffer that it might share with an NSImage.
For example, if you create a CGImage using CGImageCreateWithJPEGDataProvider, the resulting CGImage may or may not have a pixel buffer associated with it. That is an implementation detail of CGImage.
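For instance (the file path and variable names here are purely illustrative), an image created like this is backed by the compressed JPEG data, and Quartz is free to decode it lazily, or not at all:

// Illustrative only: the path is made up.
CGDataProviderRef provider = CGDataProviderCreateWithURL(
    (CFURLRef)[NSURL fileURLWithPath:@"/tmp/photo.jpg"]);
CGImageRef jpegImage = CGImageCreateWithJPEGDataProvider(
    provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
// At this point there may be no decoded pixel buffer at all.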
Pure speculation on my part, but I can imagine that in more up-to-date uses of CGImage there is also the possibility that the CGImage (if it has a pixel buffer at all) has moved it to the video card. If that is the case, then an NSImage pointing at the place where the pixel buffer USED to be would be left dangling.
Finally, the image content of an NSImage is mutable (through lockFocus) while that of a CGImage is not.
These are pretty strong reasons for CGImage to keep the pointer to its image buffer to itself.
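To make the mutability point concrete (a sketch, reusing the image and imageRef from the code above):

// NSImage content can be mutated in place via lockFocus...
[image lockFocus];
[[NSColor redColor] set];
NSRectFill(NSMakeRect(0.0, 0.0, 10.0, 10.0));
[image unlockFocus];

// ...while a CGImage cannot; any "edit" yields a new immutable image.
CGImageRef cropped = CGImageCreateWithImageInRect(imageRef,
    CGRectMake(0.0, 0.0, 10.0, 10.0));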
Actually, are the pixel values even guaranteed to be the same, or does the process of 'drawing' (i.e. CGContextDrawImage()) do some colour conversion?
That certainly seems possible.
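As far as I know, Quartz colour-matches the source image into the destination context's colour space whenever the two differ. One way to see for yourself is to draw into a bitmap context whose colour space and layout you control, then inspect the bytes (a sketch; error handling omitted):

size_t w = CGImageGetWidth(imageRef);
size_t h = CGImageGetHeight(imageRef);
CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, srgb,
    kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), imageRef);
unsigned char *pixels = CGBitmapContextGetData(ctx);
// If imageRef's colour space isn't sRGB, Quartz will have
// colour-matched these bytes, so they need not equal the
// source image's pixel values.
CGColorSpaceRelease(srgb);
CGContextRelease(ctx);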