Re: Fast hash of NSData?
- Subject: Re: Fast hash of NSData?
- From: Kyle Sluder <email@hidden>
- Date: Sun, 01 Dec 2013 09:51:16 -0800
> On Dec 1, 2013, at 8:00 AM, Scott Ribe <email@hidden> wrote:
>
>
>> On Dec 1, 2013, at 8:36 AM, Graham Cox <email@hidden> wrote:
>>
>> Scanning my entire hard drive (excluding hidden files) took several hours. Sure, I had plenty of collisions - but absolutely no false ones; they all turned out to be genuine duplicates of existing files. This is using the FNV-1a 64-bit hash + length approach.
>
> I have a drive sitting here that has a few *million* image files; I'd be willing to bet zero collisions.
That’s all well and good, but why are we still debating this approach when Mike Abdullah has posted a far superior scheme with an infinitely faster hash (the file size), *zero* risk of data loss, and a lower performance penalty than any low-collision hashing algorithm?
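Mike Abdullah's original post isn't quoted here, so the following is only my reading of the scheme: use the file size (already free from the filesystem) as the grouping key, and confirm any size match with a byte-for-byte comparison, so a false duplicate is structurally impossible. A C sketch over in-memory buffers standing in for file contents:

```c
#include <string.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical sketch of the size-first scheme: length mismatch rules the
   pair out immediately; a length match is verified byte-for-byte, so there
   is no hash collision to lose data to. */
static bool same_contents(const unsigned char *a, size_t alen,
                          const unsigned char *b, size_t blen)
{
    if (alen != blen)                 /* different sizes: cannot be duplicates */
        return false;
    return memcmp(a, b, alen) == 0;   /* same size: compare every byte */
}
```

In a real scanner the comparison would stream both files in chunks rather than load them whole, but the guarantee is the same: only genuinely identical files are ever reported as duplicates.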
I still find it unconscionable to handle the user’s data with any scheme that could potentially lose it under normal operation, regardless of how infinitesimal the chance. You just don’t do it, *especially* not silently! Just because git does it doesn’t make it okay.
--Kyle Sluder
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden