RE: Filling in a struct timespec with the current date?
Terry Lambert wrote on Wednesday, July 29, 2009 2:12 PM <<SNIP>>
> Is there a way to tell what the accuracy is from the mach_timebase_info
> struct? E.g., is the greatest common divisor of numer and denom the
> accuracy of the clock in nanoseconds?
Actually, that's a somewhat interesting question, and its answer is one reason the RT functions haven't been back-filled.
And honestly, the best place to ask the question is probably a site like "Tom's Hardware". But I'll take a shot at a 50,000 ft view explanation.
The answer is that you can treat it as monotonically increasing, fairly high resolution, with variable accuracy under less than 100% load, moderated by thermal and other power management considerations.
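Concretely, the usual pattern for interval timing with mach_absolute_time() looks something like this (a minimal sketch; error checking omitted, and the numer/denom values are whatever mach_timebase_info() reports on your machine):

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);           /* numer/denom scales ticks to ns */

    uint64_t start = mach_absolute_time();
    /* ... the work being timed ... */
    uint64_t end = mach_absolute_time();

    /* Scale elapsed ticks into nanoseconds.  On many Intel Macs
       numer/denom happens to be 1/1, but don't rely on that. */
    uint64_t elapsed_ns = (end - start) * tb.numer / tb.denom;
    printf("elapsed: %llu ns\n", (unsigned long long)elapsed_ns);
    return 0;
}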
Intel chips in general don't have real-time clocks as accurate as even the older PPC systems, which had specific direct support in non-switched support chips.
By contrast, the more accurate clocks on Intel are generally implemented using cycle counters (TSC style), plus a bunch of math to account for entries into and exits from various C-states.
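For illustration only, reading the raw counter yourself looks roughly like this (using the __rdtsc() intrinsic from GCC/Clang, x86 only; note that what you get back is cycles, with none of the compensation described below, which is exactly the problem):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang, x86 only */

int main(void)
{
    /* Read the raw cycle counter twice.  The delta is in CPU cycles,
       not wall time: the effective cycle rate moves with power and
       thermal state, which is why the kernel has to do all the
       compensation work for you. */
    uint64_t t0 = __rdtsc();

    volatile int sink = 0;
    for (int i = 0; i < 1000000; i++)
        sink += i;

    uint64_t t1 = __rdtsc();
    printf("raw TSC delta: %llu cycles\n", (unsigned long long)(t1 - t0));
    return 0;
}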
In addition, the actual counters are not technically monotonic: there is jitter in accuracy relative to the ratio between two MSRs on the CPU, which is used to "adjust" the TSC, e.g. when a lower-resolution event source like the HPET timer wakes the CPU out of a power-management C-state such as "deep C4".
Modern CPUs also support "burst modes", where they will internally overclock within their thermal envelope while under load, and likewise internally reduce voltage and retard the clock under thermal pressure or reduced load.
All of this code is outside the kernel proper; it uses proprietary information from the CPU vendor and is intimately tied to the hardware.
Ultimately, the code is very complicated to do what it does, and what it does, as far as timers go, is model the power states and state-transition latencies precisely enough that what you end up with is a fairly accurate clock at the time you read it. That is minus some accuracy, of course, for the cycle cost of the read instructions and of the math that turns the count into an absolute time number, math which may itself be running at other than the rated clock rate if the CPU is bursting, thermal throttling, or under less than full load.
So the short answer after all that is "mostly accurate, but not as accurate as if there were dedicated high resolution RTC hardware, accessible on a fast bus, backing it up, independent of the CPU itself".
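And since the subject line asks about filling in a struct timespec with the current date: mach_absolute_time() counts ticks since boot, not calendar time, so for the date itself you'd go through gettimeofday(). A minimal sketch (microsecond resolution only, since that's all gettimeofday() gives you):

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

/* Fill a struct timespec with the current wall-clock time.
   gettimeofday() only resolves microseconds, so tv_nsec is
   zero below that. */
static void current_timespec(struct timespec *ts)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    ts->tv_sec  = tv.tv_sec;
    ts->tv_nsec = tv.tv_usec * 1000;
}

int main(void)
{
    struct timespec ts;
    current_timespec(&ts);
    printf("%ld.%09ld\n", (long)ts.tv_sec, (long)ts.tv_nsec);
    return 0;
}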
Most system designers these days claim that the unit of measure of system performance isn't its CPU clock speed, it's BTU and milliwatt-hours.
Thank you for the explanation, I didn't realize it was that complicated on x86 chips. OK, in that case I'll use it, but with the caveat that although it is probably the best clock I have available, its accuracy can be quite off (though no more off than any other clock I have access to).

Thanks,
Cem Karan