Re: Optimizing writes of big files for specific hardware?
- Subject: Re: Optimizing writes of big files for specific hardware?
- From: Jay Reynolds Freeman <email@hidden>
- Date: Fri, 03 Jul 2009 14:07:56 -0700
On Jul 3, 2009, at 1:20 PM, Greg Guerin wrote:
> [useful comments excised, thank you very much]
I will try lseek followed by a single write at the end.
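
Here is roughly what I have in mind, as a sketch only; the function
name make_backing_file and the error handling are my own, for
illustration. The idea, as I understand it, is to seek to the last
byte of the desired size and write a single zero, so the file
reaches its full length without my writing gigabytes of zeros:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Create (or open) a backing file and extend it to `size`
       bytes by writing one zero byte at the end; assumes
       size >= 1.  Returns the open descriptor, or -1 on error. */
    static int make_backing_file(const char *path, off_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return -1; }
        if (lseek(fd, size - 1, SEEK_SET) == (off_t)-1) {
            perror("lseek"); close(fd); return -1;
        }
        if (write(fd, "", 1) != 1) {   /* the single zero byte */
            perror("write"); close(fd); return -1;
        }
        return fd;   /* caller mmaps it and eventually closes it */
    }

(I gather an ftruncate to the desired length would extend the file
without writing any data at all, which may come to the same thing.)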
> Exactly what problem is solved by initially writing multiple
> gigabytes of zeros to disk?
As for what I am doing, I have a parallel Scheme system (Wraith
Scheme, see the "Software" page of my web site, URL in the .sig),
and I use mmap to obtain a shared Scheme main memory. By
"parallel", I mean separate Unix processes, not threads. I am
setting things up so that a user who wishes to do so can choose
a memory size large enough to drag the application to a screeching
halt from swapping, and the way to do that seems to be for one
process to create a file of the desired size, then have that
process and all the others mmap it. I am not saying that it is
wise to choose such a large Scheme main memory, but some users
may want to do it.
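
For concreteness, the mapping step looks something like the sketch
below; every process, the creator included, maps the same file with
MAP_SHARED, so stores made by one process are visible to the others.
The name map_scheme_heap is again just for illustration:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Map `bytes` of the shared backing file, starting at offset
       0, readable and writable.  MAP_SHARED is the essential flag:
       MAP_PRIVATE would give each process its own copy-on-write
       heap instead of a shared one. */
    void *map_scheme_heap(int fd, size_t bytes)
    {
        void *mem = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        return (mem == MAP_FAILED) ? NULL : mem;
    }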
If there is a better way, I would love to hear about it; I am
by no means an mmap wizard.
-- Jay Reynolds Freeman
---------------------
email@hidden
http://web.mac.com/jay_reynolds_freeman (personal web site)