Re: Optimizing writes of big files for specific hardware?
- Subject: Re: Optimizing writes of big files for specific hardware?
- From: Greg Guerin <email@hidden>
- Date: Fri, 3 Jul 2009 13:20:37 -0700
Jay Reynolds Freeman wrote:
I have an app whose initialization includes writing a huge file
to disk -- think GigaBytes, or even tens of GigaBytes. I am
doing this in the context of setting up a large area of shared
memory with mmap, so the big write has to happen at
initialization, and it is agonizingly slow.
Simply seeking to a large offset in a new or truncated file and
writing there makes every intervening location read back as zero.
Unix-type OSes generally guarantee that the intervening space is
zero-filled.
The optimal thing is to never write zeros at all; the fastest write
is the one you don't do. The next best thing is to write only the
exact number of zeros needed, and only when you need them.
I think you're doing this backwards. You should be looking for ways
to eliminate writing zeros, especially tens of gigabytes of them, not
ways of making the writes faster. No matter what else you do, you
will still be limited by the slowest link in the chain of OS, HD
controller, HD, SATA, memory controller, etc. That could be in the
range of 20-30 MB/sec, or even worse on older machines or
USB-connected HDs. Do the math: at 25 MB/sec, 20 GB of zeros takes
roughly 800 seconds, well over ten minutes.
Exactly what problem is solved by initially writing multiple
gigabytes of zeros to disk? Yes, you've zeroed multiple gigabytes of
a shared file on disk, but exactly why is that necessary? What does
it accomplish, specifically, and why is it gigabytes in size?
-- GG