Re: Filling in a struct timespec with the current date?
- Subject: Re: Filling in a struct timespec with the current date?
- From: Terry Lambert <email@hidden>
- Date: Wed, 29 Jul 2009 11:12:24 -0700
On Jul 29, 2009, at 5:05 AM, "Karan, Cem (Civ, ARL/CISD)" <email@hidden> wrote:
Terry Lambert wrote on Tuesday, July 28, 2009 2:51 PM
<<SNIP>>
Or if he needs better resolution, mach_absolute_time(), which
is well documented at <http://developer.apple.com>.
<<SNIP>>
OK, the best documentation I've found for this is at
http://developer.apple.com/qa/qa2004/qa1398.html
and I whipped up a quick test program to ensure I knew what I was
doing with it. Now my question concerns accuracy; I know that
mach_absolute_time() resolves to one nanosecond, but what is the
accuracy? Is there a way to tell what the accuracy is from the
mach_timebase_info struct? E.g., is the greatest common divisor of
numer and denom the accuracy of the clock in nanoseconds?
This won't affect my current code, but I'd like to stick it into my
own documentation so whenever I reuse this code I know what I'm
getting into.
Actually, that's a somewhat interesting question, and its answer is
one reason the POSIX realtime (RT) clock functions haven't been back-filled.
And honestly, the best place to ask the question is probably a site
like "Tom's Hardware". But I'll take a shot at a 50,000 ft view
explanation.
The answer is that you can treat it as monotonically increasing and
fairly high resolution, with accuracy that varies under less than 100%
load and is moderated by thermal and other power-management considerations.
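(As a point of reference, the QA1398-style conversion the question
refers to is just: take two readings and scale the difference by the
timebase. A minimal sketch, with the timed work elided; the variable
names are mine, not from the Q&A:

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>   /* mach_absolute_time(), mach_timebase_info() */

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);            /* numer/denom = nanoseconds per tick */

    uint64_t start = mach_absolute_time();
    /* ... the work being timed goes here ... */
    uint64_t end = mach_absolute_time();

    /* Scale elapsed ticks to nanoseconds, per QA1398.  For very long
     * intervals the multiply can overflow; wider math is needed then. */
    uint64_t elapsed_ns = (end - start) * tb.numer / tb.denom;

    printf("timebase %u/%u, elapsed %llu ns\n",
           tb.numer, tb.denom, (unsigned long long)elapsed_ns);
    return 0;
}

On current Intel hardware the timebase usually comes back as 1/1, i.e.
one tick per nanosecond, but that is resolution, not accuracy.)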
Intel chips in general don't have real-time clocks as accurate as even
the older PPC systems did; those had specific, direct support in non-
switched support chips.
By contrast, the more accurate clocks on Intel are generally
implemented using cycle counters (TSC-style), plus a bunch of math to
account for entries into and exits from the various C-states.
In addition, the actual counters are not technically monotonic; that
is, there is jitter in accuracy relative to the ratio between two MSRs
on the CPU, which is used to "adjust" the TSC, e.g. when an event like
the lower-resolution HPET timer wakes the CPU out of a power-management
C-state such as "deep C4".
Modern CPUs also support "burst modes", where they internally
overclock within their thermal envelope while under load; likewise,
they internally reduce voltage and retard the clock under thermal
pressure or reduced load.
All of this code lives outside the kernel proper; it uses proprietary
information from the CPU vendor and is intimately tied to the hardware.
Ultimately, the code is very complicated, and what it does, as far as
timers go, is model the power states and state-transition latencies
with enough precision that what you end up with is a fairly accurate
clock at the moment you read it. That is, of course, minus some
accuracy for the cycle count of the instructions that perform the read
and the math that turns it into an absolute time number, all of which
may be running at a clock rate other than the rated one if the CPU is
bursting, thermally throttling, or under less than full load.
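(If you want to see the cost of the read path for yourself, one crude
sketch is to take back-to-back readings and keep the smallest nonzero
delta. Note that this shows the overhead and observable granularity of
the read, not the accuracy of the clock behind it, which you can't
really judge from inside the box:

#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    /* Smallest nonzero gap between two consecutive reads. */
    uint64_t min_delta = UINT64_MAX;
    for (int i = 0; i < 1000000; i++) {
        uint64_t a = mach_absolute_time();
        uint64_t b = mach_absolute_time();
        if (b - a > 0 && b - a < min_delta)
            min_delta = b - a;
    }

    printf("smallest back-to-back delta: %llu ticks (~%llu ns)\n",
           (unsigned long long)min_delta,
           (unsigned long long)(min_delta * tb.numer / tb.denom));
    return 0;
}
)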
So the short answer, after all that, is "mostly accurate, but not as
accurate as if dedicated high-resolution RTC hardware, accessible on a
fast bus and independent of the CPU itself, were backing it up".
Most system designers these days claim that the unit of measure of
system performance isn't CPU clock speed; it's BTUs and milliwatt-
hours.
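(And to circle back to the subject line: without clock_gettime(), the
usual way to fill a struct timespec with the current wall-clock date is
to go through gettimeofday() and promote the microseconds to
nanoseconds. A minimal sketch, not necessarily what was snipped above,
and only good to microsecond resolution:

#include <stdio.h>
#include <time.h>        /* struct timespec */
#include <sys/time.h>    /* gettimeofday(), struct timeval */

int main(void)
{
    struct timeval tv;
    struct timespec ts;

    gettimeofday(&tv, NULL);          /* current wall-clock time */

    ts.tv_sec  = tv.tv_sec;
    ts.tv_nsec = tv.tv_usec * 1000;   /* microseconds -> nanoseconds */

    printf("%ld.%09ld\n", (long)ts.tv_sec, (long)ts.tv_nsec);
    return 0;
}
)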
-- Terry