Response inline...
On 19 Aug 2008, at 12:09 AM, Gerriet M. Denkmann wrote:
Some questions about .hotfiles.btree which are not answered by TN1150 (all quotes are from there).
1. How do I get a .hotfiles.btree? I have several partitions (volumes) "whose size is at least 10GB, and which have journaling enabled."
But: no "metadata zone is established when the volume is mounted", nor is a .hotfiles.btree created, nor does "the clumpSize field of the fork data structure" ever seem to change.
If the volume is not the root volume, mounting it may not proceed to the filesystem calls that actually create the hotfiles btree. See #2 below.
2. The only case where a .hotfiles.btree seems to work is my boot partition. Known bug, feature, or error on my side?
Hotfiles are only supported on the root filesystem. Other filesystems do not get hotfiles support.
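If it helps to confirm from userland which mount is actually the root filesystem, statfs(2) reports it. A minimal sketch (the path argument here is just an example):

#include <stdio.h>
#include <sys/param.h>
#include <sys/mount.h>

/* Report whether the volume containing `path` is the root filesystem,
 * i.e. the only volume that gets hotfiles support. */
int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "/";   /* example path */
    struct statfs sfs;

    if (statfs(path, &sfs) != 0) {
        perror("statfs");
        return 1;
    }

    printf("%s is on %s mounted at %s: %s\n",
           path, sfs.f_mntfromname, sfs.f_mntonname,
           (sfs.f_flags & MNT_ROOTFS) ? "root filesystem"
                                      : "not the root filesystem");
    return 0;
}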
3. When do clumpSizes get updated? Seems like immediately when read from disk, but not when read from some file cache. Correct?
Well, the clumpSize referred to in the Tech Note is actually the clump size that gets written to disk, so it's updated as part of the periodic sync that runs every X seconds to push stuff to disk. It uses the data from the cnode to tell it what to write out, however. That data is updated upon every read, regardless of whether the data came from disk or the buffer cache.
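To make that concrete, here is a small userland sketch of the bookkeeping just described; the structure and function names are hypothetical stand-ins, not the actual kernel data structures:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the in-core and on-disk fork records. */
struct incore_fork {
    uint32_t bytes_read;    /* bumped on every read, cache hit or not */
};

struct ondisk_fork {
    uint32_t clumpSize;     /* the field the Tech Note talks about */
};

/* Read path: account for the bytes regardless of whether they came
 * from disk or the buffer cache. */
static void note_read(struct incore_fork *f, uint32_t bytes)
{
    f->bytes_read += bytes;
}

/* Periodic sync: copy the in-core counter into the on-disk record. */
static void sync_fork(const struct incore_fork *f, struct ondisk_fork *d)
{
    d->clumpSize = f->bytes_read;
}

int main(void)
{
    struct incore_fork in = {0};
    struct ondisk_fork out = {0};

    note_read(&in, 4096);   /* read served from disk */
    note_read(&in, 4096);   /* read served from the buffer cache */
    sync_fork(&in, &out);   /* what the periodic sync would write */

    printf("on-disk clumpSize after sync: %u\n", out.clumpSize);
    return 0;
}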
4. When does the hotfiles.btree get updated? Who or what triggers the update? Is it possible to tell someone in charge to update the hotfiles.btree right now, by sending some signal to some process?
Files are added to and removed from the hotfiles list as vnodes are reclaimed and recycled. When the file goes out of scope, it goes through a bunch of checks: its temperature, who its parent is, the current hotfiles stage, among other things. If it passes all of the checks, it is added to the hotfiles list. There isn't a way to trigger an update of the hotfiles btree from userland via signals.
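As a rough illustration only (the names, stages, and IDs below are invented for the example, not the real kernel identifiers), the reclaim-time decision amounts to something like:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum hotfiles_stage { STAGE_IDLE, STAGE_RECORDING, STAGE_EVALUATION, STAGE_ADOPTION };

struct candidate {
    uint32_t temperature;   /* bytes read / blocks used */
    uint32_t parent_id;     /* directory containing the file */
    bool     already_hot;   /* already resident in the hot file area? */
};

/* Sketch of the reclaim-time check: only add the file to the hotfiles
 * list if recording is active, it is hot enough, and it is not already
 * living under the hot file area. */
static bool should_record(const struct candidate *c,
                          enum hotfiles_stage stage,
                          uint32_t threshold,
                          uint32_t hot_area_dir_id)
{
    if (stage != STAGE_RECORDING)
        return false;
    if (c->temperature < threshold)
        return false;
    if (c->already_hot || c->parent_id == hot_area_dir_id)
        return false;
    return true;
}

int main(void)
{
    struct candidate c = { .temperature = 21504, .parent_id = 2, .already_hot = false };

    /* threshold 24 is the value reported below; 16 is an example dir ID */
    printf("record this file? %s\n",
           should_record(&c, STAGE_RECORDING, 24, 16) ? "yes" : "no");
    return 0;
}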
5. "The fork's temperature can be computed by dividing its clumpSize by its totalBlocks." This formula does not give results even close to the things I see.
temperature = clumpSize * blockSize (4096) / logicalSize gives more plausible results.
But even so, some entries do not make sense.
E.g.:
[ forkType 0 temperature 3458460 fileID 486463 ] = "Users"
[RootFolder nodeName: "Users"]
fileID = 486463 flags: hasThread
Creator: 1919443312 = 'rhap'
HfsType: 1936485995 = 'slnk'
Data Fork:
logicalSize 28 (bytes)
clumpSize 147
totalBlocks 1 (4096 bytes)
With my formula I get a temperature = 21,504 instead of 3458460 in the hotfiles.btree.
Also, anything cooler than 10 000 degrees Celsius (or Kelvin?) has temperatures in the hotfiles.btree which are much too low; obviously not updated in ages.
But threshold (minimum temperature): 24
And timeleft = 0.8 days and never changing.
The formula is basically as described: bytes read / total blocks used by the fork. The only difference is that instead of using the clump value on disk, HFS uses the data from the in-core data structures. The on-disk values could be out of sync with the in-memory ones, though I'm not sure that would explain the discrepancies. How are you obtaining the information above?
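For what it's worth, here is the arithmetic for the "Users" entry quoted above under both readings of the formula, using the 4096-byte allocation block size from your listing; neither reproduces the 3458460 stored in the B-tree:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Values from the "Users" record quoted above. */
    const uint32_t clump_size   = 147;   /* clumpSize from the catalog record */
    const uint32_t total_blocks = 1;     /* allocation blocks used by the data fork */
    const uint32_t block_size   = 4096;  /* volume allocation block size */
    const uint32_t logical_size = 28;    /* logical size of the data fork in bytes */

    /* TN1150 reading: temperature = clumpSize / totalBlocks. */
    printf("clumpSize / totalBlocks             = %u\n",
           clump_size / total_blocks);

    /* Alternative reading from the question:
     * temperature = clumpSize * blockSize / logicalSize. */
    printf("clumpSize * blockSize / logicalSize = %u\n",
           clump_size * block_size / logical_size);

    /* Value actually stored in .hotfiles.btree for this file: 3458460. */
    return 0;
}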
6. "Files whose temperature is less than this value (threshold) will be moved out of the hot file area." When? If the hot file area gets full?
Hot files can be moved if they were previously quiescent and activity then occurs on the file, causing a vnode to be created for it; that implies there is likely some file activity on it. If the file is still deemed hot after the activity finishes, it will be re-added to the hot file area. Alternatively, the coldest entry will be evicted once the area becomes full.