Re: Optimizing writes of big files for specific hardware?
- Subject: Re: Optimizing writes of big files for specific hardware?
- From: Michael Ash <email@hidden>
- Date: Sat, 4 Jul 2009 22:47:31 -0400
On Fri, Jul 3, 2009 at 5:07 PM, Jay Reynolds Freeman <email@hidden> wrote:
> On Jul 3, 2009, at 1:20 PM, Greg Guerin wrote:
>
>> [useful comments excised, thank you very much]
>
> I will try lseek and write at the end.
>
>> Exactly what problem is solved by initially writing multiple
>> gigabytes of zeros to disk?
>
> As for what I am doing, I have a parallel Scheme system (Wraith
> Scheme, see the "Software" page of my web site, URL in the .sig),
> and I use mmap to obtain a shared Scheme main memory. By
> "parallel", I mean separate Unix processes, not threads. I am
> setting things up so that a user who wishes to do so can choose
> a memory size large enough to drag the application to a screeching
> halt from swapping, and the way to do that seems to be for one
> process to create a file of the desired size, then have that
> process and all the others mmap it. I am not saying that it is
> wise to choose such a large Scheme main memory, but some users
> may want to do it.
>
> If there is a better way, I would love to hear about it; I am
> by no means an mmap wizard.
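For reference, a rough sketch of the file-backed approach described above: one process extends the file to the desired size by seeking past the end and writing a single byte (instead of writing gigabytes of zeros), then every cooperating process maps the same file shared. The path and size below are only placeholders:

/* Rough sketch of the file-backed approach described above, with
 * placeholder path and size. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/scheme_main_memory";  /* placeholder path */
    const off_t size = (off_t)1 << 30;             /* 1 GiB, for example */

    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("open"); return 1; }

    /* Extend the file without writing gigabytes of zeros. */
    if (lseek(fd, size - 1, SEEK_SET) == (off_t)-1) { perror("lseek"); return 1; }
    if (write(fd, "", 1) != 1) { perror("write"); return 1; }

    /* Every process that wants the shared memory does this part. */
    void *mem = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use mem as the shared Scheme main memory ... */

    munmap(mem, (size_t)size);
    close(fd);
    return 0;
}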
That said, you can do shared memory without requiring the entire shared
memory region to be backed by a file on your hard drive. There is a POSIX
API for shared memory, which you can find by googling "POSIX shared
memory" or by looking up the man page for the shm_open function. You can
also do shared memory using Mach calls, but I don't actually know what
those calls are.
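A minimal sketch of the POSIX route, under the same caveats (the object name and size are placeholders; the other processes would shm_open the same name without O_CREAT, skip the ftruncate, and mmap it the same way):

/* Minimal sketch of POSIX shared memory with shm_open; name and size
 * are placeholders. Only the creating process runs the whole thing. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/wraith_heap";   /* placeholder object name */
    const size_t size = (size_t)1 << 30; /* 1 GiB, for example */

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Set the size of the shared memory object. */
    if (ftruncate(fd, (off_t)size) == -1) { perror("ftruncate"); return 1; }

    /* Map it shared; no file on disk backs this region. */
    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use mem as the shared Scheme main memory ... */

    munmap(mem, size);
    close(fd);
    shm_unlink(name);  /* removes the name; memory lives until the last unmap */
    return 0;
}

The man pages for shm_open and shm_unlink cover the details.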
Mike