Re: write(2) fails for large sizes in 64 bit applications
- Subject: Re: write(2) fails for large sizes in 64 bit applications
- From: Terry Lambert <email@hidden>
- Date: Tue, 02 Jun 2009 11:23:13 -0700
On Jun 2, 2009, at 5:37 AM, Quinn <email@hidden> wrote:
At 14:25 +0200 2/6/09, Robert Homann wrote:
What I am suggesting is to push write() closer to the standard, thus
making porting applications to Mac OS X easier.
A laudable goal. Have you filed a bug about this? If not, please
do. Our kernel engineers will need a bug on file before they take
any action on this.
<http://developer.apple.com/bugreporter/>
Correct. I'd say it more forcefully.
Our change control process will not allow us to make arbitrary changes
or commit time to doing work without some type of bug report or
project attachment. It's very customer-focused. People can complain
about things on mailing lists all they want, but unless someone writes
up a bug, that's all they've done.
An Apple engineer could see the complaint on a mailing list and write
up a bug themselves, but that's going to have a lower priority than if
it came from an outside developer report, particularly if it was
standing in the way of the developer shipping a product. They might
also opportunistically put in a fix while fixing something else
related for which they already had a bug or project mandate. Neither
of these unilateral Apple actions guarantees that the developer will
be satisfied with the fix, since they won't be the ones marking it
verified, and the helpful engineer's understanding of the original
complaint may be imperfect.
Assuming a new bug report, if an Apple engineer wrote this up, it'd be
a P4 "Nice to have"; if a customer wrote it up, it'd be a P3
"Important", unless they could demonstrate that they had already
shipped binary-only product, and the only thing that started
triggering it was a change in data set size reasonably expected to be
encountered by a lot of customers, at which point it would likely
become P2 "Expected".
This particular problem itself, though, is a bad example of something
to argue passionately about, since the cause is understood, it's
easily avoided, the operations in question are practically absurdly
large, and the consequence of making the operations "work" would be
large I/O stalls as the request saturates the disk, whose firmware
doesn't allow the kernel driver to reorder in-flight requests in
favor of later, higher-priority requests (i.e. the hardware itself is
uncooperative with background I/O).
Basically, the application is not playing nice with other applications
or commodity disks.
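A "good citizen" caller can sidestep the failure entirely by capping each
individual write(2) request and looping over the buffer. A minimal sketch
follows; write_chunked is a hypothetical helper and CHUNK_MAX an
illustrative cap, not a limit documented by Apple:

```c
/* Sketch: cap each kernel write request and loop, so one huge buffer
 * never becomes a single enormous write(2) call. write_chunked and
 * CHUNK_MAX are illustrative, not part of any real API. */
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

#define CHUNK_MAX ((size_t)1 << 30)  /* illustrative 1 GiB per-call cap */

ssize_t write_chunked(int fd, const void *buf, size_t nbyte)
{
    const char *p = buf;
    size_t left = nbyte;

    while (left > 0) {
        size_t req = left < CHUNK_MAX ? left : CHUNK_MAX;
        ssize_t n = write(fd, p, req);
        if (n < 0) {
            if (errno == EINTR)
                continue;            /* interrupted by a signal: retry */
            /* report bytes written so far, or -1 if none were */
            return p == (const char *)buf ? -1 : (ssize_t)(nbyte - left);
        }
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)nbyte;
}
```

Smaller caps also keep each request short enough that other applications'
I/O can be scheduled between chunks, which is the "play well with others"
behavior the rest of this message argues for.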
While it's also possible to "solve" the problem by returning short
writes, as has been suggested, this is less satisfying: aesthetically,
from an extent allocation policy perspective (assuming disk writes),
and pragmatically, given long experience of programmers checking write
return values only for errors rather than for size. To avoid the I/O
stall, this clamp would also need to be rather small; the hope is that
the programmer would do this voluntarily, unless their application
needs all the I/O to go at once, in which case we do what they tell us
to do, to the best of our ability. If that's not a requirement, then
this is back to a P3, with a strong hint that it should be a P4, since
it wouldn't be a problem in the first place if the application were a
good citizen, already written to "play well with others" by
voluntarily limiting its own I/O sizes.
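The pragmatic objection above, that callers check write() return values
only for errors and not for size, is easy to illustrate. In this sketch
save_buggy and save_all are hypothetical helpers, not real API; only the
second one survives a kernel that clamps large requests:

```c
/* Sketch: checking write() only for -1 silently drops data whenever
 * the kernel returns a short write. save_buggy/save_all are
 * hypothetical names used for contrast. */
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Buggy: treats any non-negative return as "everything was written". */
int save_buggy(int fd, const char *buf, size_t len)
{
    return write(fd, buf, len) < 0 ? -1 : 0;  /* short write is lost */
}

/* Correct: loops until every byte is written or a real error occurs. */
int save_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;            /* interrupted: retry */
            return -1;
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}
```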
-- Terry
Darwin-kernel mailing list (email@hidden)