Re: Fast hash of NSData?
- Subject: Re: Fast hash of NSData?
- From: Scott Ribe <email@hidden>
- Date: Sun, 01 Dec 2013 16:05:20 -0700
On Dec 1, 2013, at 11:15 AM, Kyle Sluder <email@hidden> wrote:
>> On Dec 1, 2013, at 10:01 AM, Scott Ribe <email@hidden> wrote:
>>
>>> On Dec 1, 2013, at 10:51 AM, Kyle Sluder <email@hidden> wrote:
>>>
>>> I still find it unconscionable to handle the user’s data with any scheme that could potentially lose it under normal operation, regardless of how infinitesimal the chance.
>>
>> Then you can use neither a computer nor a hard drive to handle the user's data ;-)
>
> I get the joke, but I’m not talking about when things break...
It's not a joke. Both computers and hard disks have the property that in normal operation there's a statistically (hopefully) insignificant chance of bits flipping--a chance that can be reduced arbitrarily, depending on engineering trade-offs, but simply cannot be eliminated.
Hard disks, since the days of GMR, read an extremely faint and noisy signal and do lots of signal processing to come up with the most statistically likely set of bits out of that noise. That data is then checked against the checksum, and corrections applied if indicated. In the last generation of 512-byte-block hard disks, the raw read error rate had crept up; 4k disk blocks have longer checksums, which helps. I spent way more time with Google than I should have on this, but was unable to come up with typical raw read error rates for either type of drive. (Back in the "good old days" manufacturers actually put both soft and unrecoverable error rates in their specs--but not anymore; I wonder why...)

But the unrecoverable error rate, after applying the roughly 100-byte checksum, is around 1 in 10^15 bits read for "enterprise" drives, and more like 1 in 10^14 bits read for crapsumer drives--with drives currently packing over 10^13 bits and well on their way to 10^14 bits!
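To put those rates in concrete terms, here's a quick back-of-the-envelope sketch. The per-bit rates are the 10^-15 and 10^-14 figures quoted above; the 4 TB drive size is an illustrative assumption, and errors are treated as independent per bit:

```python
# Expected unrecoverable read errors from reading an entire drive once,
# using the per-bit unrecoverable-error rates quoted above.

def expected_errors(bits_read, per_bit_error_rate):
    """Expected number of unrecoverable errors over bits_read bits,
    assuming errors are independent per bit."""
    return bits_read * per_bit_error_rate

drive_bits = 4e12 * 8  # a hypothetical 4 TB drive, ~3.2e13 bits

enterprise = expected_errors(drive_bits, 1e-15)  # "enterprise" rate
consumer   = expected_errors(drive_bits, 1e-14)  # consumer rate

print(f"enterprise: {enterprise:.3f} expected errors per full read")
print(f"consumer:   {consumer:.3f} expected errors per full read")
```

In other words, a single end-to-end read of a consumer drive at these rates carries a few tenths of an expected unrecoverable error--not a once-in-the-lifetime-of-the-universe event.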
At any point where data is shipped between two different clock domains, there is some finite risk on each transition of getting stuck in a meta-stable state between on & off rather than cleanly flipping--meaning there's a risk, which can never be eliminated, of RAM & CPU errors. As with hard disk soft read errors, information on actual probabilities seems hard to find...
But anyway, no joke--just pointing out that ***EVERYTHING*** in computing is built on this kind of analysis. It's a simple fact: if you can't be comfortable with a risk of data loss that is many, many orders of magnitude less than the risk of humanity being wiped out by an asteroid, then you just have to close your eyes and ignore the physics on which it all rests...
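To make "many orders of magnitude" concrete for the original question of hashing NSData: the standard birthday-bound estimate for an ideal b-bit hash over n distinct items is p ~= n^2 / 2^(b+1). A sketch comparing that to the disk error rates above (the billion-item corpus is a made-up illustration):

```python
def birthday_collision_prob(n_items, hash_bits):
    """Birthday-bound approximation for an ideal hash:
    p ~= n^2 / 2^(bits+1). Valid while p is small."""
    return (n_items ** 2) / 2.0 ** (hash_bits + 1)

# A billion distinct NSData blobs with a 128-bit digest (MD5-sized):
p128 = birthday_collision_prob(1e9, 128)  # on the order of 1e-21
# The same corpus with only a 64-bit hash:
p64 = birthday_collision_prob(1e9, 64)    # a few percent

print(f"128-bit hash, 1e9 items: {p128:.1e}")
print(f" 64-bit hash, 1e9 items: {p64:.1e}")
```

So a 128-bit hash over a billion items collides with probability around 10^-21--several orders of magnitude below the ~10^-15 per-bit unrecoverable-read rate quoted above--while a 64-bit hash at that scale is already in the "few percent" range, which is exactly the kind of trade-off the analysis here is about.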
--
Scott Ribe
email@hidden
http://www.elevated-dev.com/
(303) 722-0567 voice
_______________________________________________
Cocoa-dev mailing list (email@hidden)