Re: Avoiding color transformations in PNG/UIImage/CGImage ops?
- Subject: Re: Avoiding color transformations in PNG/UIImage/CGImage ops?
- From: Vince DeMarco <email@hidden>
- Date: Fri, 17 Nov 2017 13:36:57 -0800
> On Nov 17, 2017, at 1:28 PM, Rick Mann <email@hidden> wrote:
>
> Nope, I'm definitely looking at pixel data. But I'll try the generic color
> space. I don't know how it chooses device color space when it's created
> absent any particular display-associated context.
Don't use the generic color space; use sRGB for both.
Like this instead:
colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
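In Swift, setting up a bitmap context that way might look roughly like this (a
sketch only; the width, height, and RGBA layout below are placeholders rather
than anything from your code):

import CoreGraphics

let width = 640, height = 480   // placeholder dimensions
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
if let context = CGContext(data: nil,
                           width: width,
                           height: height,
                           bitsPerComponent: 8,
                           bytesPerRow: 0,   // let CoreGraphics pick the row stride
                           space: colorSpace,
                           bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) {
    // Draw and read back using the same sRGB space on both sides,
    // so no color matching happens between the two steps.
}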
Vince
>
>
>> On Nov 17, 2017, at 06:54 , Steve Christensen <email@hidden> wrote:
>>
>> It sounds like you're looking at image file data rather than buffers of
>> pixel data. If so then I wouldn't make the assumption that the encoded bytes
>> in two PNG files will be identical for identical images. Depending on how
>> flexible the file format is, particular parts of the encoded image could be
>> written to different locations in the two files. It seems more reasonable to
>> draw the images to compare into CGBitmapContexts configured identically and
>> then compare the active portions (i.e., width * bytesPerPixel <= bytesPerRow)
>> of the bitmap buffers.
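(A rough Swift sketch of the comparison Steve describes; the function and its
names are illustrative, and it assumes both images are the same size and are
rendered into identically configured 8-bit sRGB RGBA contexts.)

import CoreGraphics

func pixelsMatch(_ a: CGImage, _ b: CGImage) -> Bool {
    let width = a.width, height = a.height
    guard width == b.width, height == b.height else { return false }

    let space = CGColorSpace(name: CGColorSpace.sRGB)!
    let info = CGImageAlphaInfo.premultipliedLast.rawValue

    // Draw an image into a freshly created bitmap context and copy out its bytes.
    func render(_ image: CGImage) -> (bytes: [UInt8], bytesPerRow: Int)? {
        guard let ctx = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: space, bitmapInfo: info),
              let base = ctx.data else { return nil }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        let count = ctx.bytesPerRow * height
        let buf = UnsafeBufferPointer(start: base.assumingMemoryBound(to: UInt8.self),
                                      count: count)
        return (Array(buf), ctx.bytesPerRow)
    }

    guard let ra = render(a), let rb = render(b) else { return false }

    // Compare only the active part of each row (width * bytesPerPixel),
    // skipping any padding the context added to reach bytesPerRow.
    let activeBytes = width * 4
    for row in 0..<height {
        let startA = row * ra.bytesPerRow
        let startB = row * rb.bytesPerRow
        if !ra.bytes[startA..<(startA + activeBytes)]
            .elementsEqual(rb.bytes[startB..<(startB + activeBytes)]) {
            return false
        }
    }
    return true
}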
>>
>> As for color space to use, Apple recommends "that you use calibrated (or
>> generic) color spaces instead of device color spaces. The colors in device
>> color spaces can vary widely from device to device, whereas calibrated color
>> spaces usually result in a reasonably accurate color."
>> (https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/DrawColor/Tasks/UsingColorSpaces.html)
>>
>> Steve
>>
>>
>>> On Nov 16, 2017, at 6:31 PM, Rick Mann <email@hidden> wrote:
>>>
>>> I'm trying to write a unit test for some code I wrote that generates one
>>> image from another. In the main app, the source data comes from OpenCV as
>>> a buffer of 3 byte-per-pixel elements. My code generates a CGImage. In the
>>> unit test, I load a saved version of one of those images from a PNG file to
>>> UIImage, get at the buffer, pass it to my code, and then compare the result
>>> to a saved version of that same output.
>>>
>>> The saved version is a PNG. I load that, and then get the data buffer using
>>>
>>> let fiData = fi.dataProvider?.data as Data?
>>>
>>> I do a similar thing with the generated CGImage. Then I compare the two
>>> buffers, byte by byte. They are similar, but contain differences (sometimes
>>> more than I would expect). But if I save both as PNG and look at them in
>>> Preview they look identical.
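(A minimal sketch of that byte-by-byte comparison; "generated" below is a
hypothetical name for the CGImage produced by the code under test.)

let fiData = fi.dataProvider?.data as Data?
let genData = generated.dataProvider?.data as Data?
if let a = fiData, let b = genData {
    let differing = zip(a, b).filter { $0.0 != $0.1 }.count
    print("byte counts: \(a.count) vs \(b.count), differing bytes: \(differing)")
}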
>>>
>>> My guess is something's happening somewhere with color correction. In my
>>> code, I make a CG(bitmap)Context, specifying device RGB color space (should
>>> that be generic?). I don't really know what happens to the PNGs I save and
>>> load.
>>>
>>> Is there a way to ensure the bytes in the buffer are compressed and
>>> decompressed exactly as written?
>>
>
>
> --
> Rick Mann
> email@hidden
>
>
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden