Re: write(2) fails for large sizes in 64 bit applications
- Subject: Re: write(2) fails for large sizes in 64 bit applications
- From: Robert Homann <email@hidden>
- Date: Tue, 2 Jun 2009 14:25:20 +0200 (MEST)
On Fri, 29 May 2009, Jeremy Pereira wrote:
Hello!
> I think the behaviour is probably correct in that the OS X kernel is a
> *32 bit* mach-o object, at least it is on my 10.5.7 Core 2 Duo Macbook
> Pro.
Yes, considering that the kernel is a 32 bit one, I think it is OK for
some 32 bit limits to apply internally if the resulting behavior is
still reasonable for 64 bit processes.
The INT_MAX limitation of write() is correct for 32 bit processes, I
would say, and I would expect to see some error when trying to pass
larger values. A 64 bit process, however, should not see an error for
large values, regardless of the underlying kernel implementation.
Instead, write() should write the whole buffer, or as much as the
internal limit allows - in this case INT_MAX bytes - and return the
number of bytes actually written to the caller. This is what I would
have expected, and it is what the standard suggests.
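For illustration, here is a minimal sketch (the helper name is made up,
not taken from any particular code base) of the kind of caller loop the
standard has in mind, where a short return count is handled by simply
continuing:

    /* Keep calling write() until the whole buffer has been written or
       a real error occurs; a short write just advances the pointer. */
    #include <errno.h>
    #include <unistd.h>

    int write_fully(int fd, const void *buf, size_t len)
    {
        const char *p = buf;

        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted, retry */
                return -1;      /* real error */
            }
            p += n;             /* short write: advance and retry */
            len -= (size_t)n;
        }
        return 0;
    }

With write() behaving as described above, such a loop works unmodified
for buffers larger than INT_MAX; with the current behavior it fails on
the first call instead.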
> write(2) is a system call. The implementation is inside the kernel,
> not in any user space library linked with your application.
> Internally, it is going to be subject to 32 bit limits and personally
> I think that is entirely reasonable.
>
> Whilst I admit the kernel could be changed to cope with larger
> buffers, I think it is a bit harsh to criticise it because the largest
> block it can write to a file in one go is a mere 2Gb, especially
> considering some other 32 bit kernels can't run 64 bit code at all.
What I am suggesting is to push write() closer to the standard, thus
making porting applications to Mac OS X easier. It is not hard to
accomplish in this case, so it should be done, no? (I haven't checked,
but read() may exhibit similar behavior, so it should also be adapted
for symmetry.)
In case someone is interested: This whole issue was raised by an
application that computes a large table in a malloc()'ed block (larger
than 2GB), and dumps it to a file using write(). I didn't expect any
problems on OS X since I had no trouble on other systems. Now I have
changed the program to limit the number of bytes written per call.
It doesn't hurt much, but it took me some time to track down the
problem.
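For reference, the workaround amounts to something like the following
sketch (the 1 GiB chunk size is an arbitrary choice of mine; anything
below INT_MAX would do):

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    #define CHUNK ((size_t)1 << 30)   /* 1 GiB per call, < INT_MAX */

    /* Dump a large malloc()'ed buffer by capping each individual
       write() so no single call exceeds the kernel's internal limit. */
    int dump_table(int fd, const void *table, size_t size)
    {
        const char *p = table;

        while (size > 0) {
            size_t want = size < CHUNK ? size : CHUNK;
            ssize_t n = write(fd, p, want);
            if (n < 0) {
                perror("write");
                return -1;
            }
            p += n;
            size -= (size_t)n;
        }
        return 0;
    }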
Best regards,
Robert Homann
--
Windows is not the answer.
Windows is the question.
The answer is "No".