Re: Reload VM hint?
From: James Bucanek
To: darwin-kernel@lists.apple.com

Michael Smith <mailto:drivers@mu.org> wrote (Wednesday, May 30, 2007 12:36 PM -0700):

> On May 30, 2007, at 7:47 AM, James Bucanek wrote:
>> Is there any way for my application to send a "hint" or command to
>> the VM manager that will cause it to reload the recently paged-out
>> memory pages? I have a supervisory process (a scheduler daemon)
>> that can do this after the process terminates.
>
> No, there isn't. The system has no way of telling the difference
> between pages that were evicted but still relevant and pages that
> were evicted and not relevant; it has to wait for other applications
> to need them.

True, but I was hoping that there was still some way of restoring the
most recently pushed-out pages, on the logic that the cost of
re-reading unneeded pages now would be outweighed by the cost of
stalling to re-read needed pages later.

> Brian's somewhat terse observation might bear a little expansion,
> though I think he may be on the right path. You say your application
> is a backup utility, and that it uses hundreds of megabytes of RAM.
> What do you mean by "uses"? Do you make large allocations inside
> your application's address space? Why?

Hash and lookup tables, mostly. The application is QRecall
(<http://www.qrecall.com/>). It breaks every file down into
individual data blocks, then builds a massive database of those
blocks. Every block of data is first compared against the set of
blocks that have already been captured, so that no duplicate data is
ever added to the backup data set. It works pretty well, except
(apparently) for the huge amount of RAM and CPU resources it needs.

> If you are buffering for a device, you can probably get by with much
> less buffer than you think; certainly not hundreds of megabytes.

No, all of my file buffers are pretty small (in the 1 MB range), and
that's mostly so I can read in a chunk of data and then get several
threads working on the problem.

> As a backup utility, you are probably reading a lot of files; as
> Brian notes, if you apply the F_NOCACHE fcntl to these files
> immediately after opening them, you will avoid evicting other
> applications' pages in favour of file cache pages.

I hadn't thought of that, and it's an excellent suggestion (thanks
also extended to Brian and Matt). The source file data is only read
once, so at least I can keep that from being cached. Come to think of
it, that might improve my overall performance, since reading the
source file blocks also competes with the caching of my database file
blocks.

> HTH; feel free to ask more questions. Your users, incidentally,
> should buy more RAM. 8)

I agree. Maybe I should create a bundle deal where every copy of the
application comes with a RAM upgrade? ;)

--
James Bucanek
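
For readers curious what "compared against the set of blocks that have
already been captured" usually means in practice: a digest of each
block is looked up in an in-memory table. The sketch below is a
generic illustration, not QRecall's design; the names capture_block,
fnv1a, and NBUCKETS are invented here, and it uses a weak FNV-1a hash
where a real tool would use a strong digest (or verify candidate
duplicates with a byte-for-byte compare):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Toy block-level deduplication: hash each block and keep a set
     * of hashes already captured.  A real implementation would use a
     * strong digest and/or a byte compare to rule out collisions. */

    #define NBUCKETS (1u << 16)

    struct block_entry {
        uint64_t hash;
        struct block_entry *next;
    };

    static struct block_entry *buckets[NBUCKETS];

    static uint64_t fnv1a(const void *data, size_t len)
    {
        const unsigned char *p = data;
        uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
        while (len--) {
            h ^= *p++;
            h *= 0x100000001b3ULL;            /* FNV prime */
        }
        return h;
    }

    /* Returns true if the block was new (and records it), false if a
     * block with the same hash was already captured. */
    static bool capture_block(const void *data, size_t len)
    {
        uint64_t h = fnv1a(data, len);
        struct block_entry **slot = &buckets[h & (NBUCKETS - 1)];

        for (struct block_entry *e = *slot; e; e = e->next)
            if (e->hash == h)
                return false;                 /* duplicate: skip it */

        struct block_entry *e = malloc(sizeof *e);
        if (!e)
            abort();                          /* sketch: no error handling */
        e->hash = h;
        e->next = *slot;
        *slot = e;
        /* ...write the block to the backup data set here... */
        return true;
    }

Note that the table itself lives in the process's address space, which
is exactly the kind of hundreds-of-megabytes allocation the thread is
about: it, not the file data, is what gets paged out.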
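
In code, the F_NOCACHE suggestion above amounts to a single fcntl(2)
call made right after open(2). A minimal sketch, assuming a plain
open/read loop; the file path is illustrative and the error handling
is reduced to the essentials:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Path is illustrative only. */
        int fd = open("/path/to/source-file", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Ask the kernel not to cache data read from this file, so a
         * long sequential scan does not evict other applications'
         * pages.  F_NOCACHE is Darwin-specific; issue it immediately
         * after open(2), before any reads populate the buffer cache. */
        if (fcntl(fd, F_NOCACHE, 1) < 0)
            perror("fcntl(F_NOCACHE)");   /* non-fatal: reads still work */

        static char buf[1 << 20];         /* ~1 MB chunks, as in the thread */
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* ...hand the chunk off to worker threads here... */
        }
        if (n < 0)
            perror("read");

        close(fd);
        return 0;
    }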