Re: mach_absolute_time() vs. sleep()
- Subject: Re: mach_absolute_time() vs. sleep()
- From: Kristopher Matthews <email@hidden>
- Date: Tue, 29 Apr 2008 20:53:51 -0500
Yeah. What puzzles me is that, judging by the NSLog output, it is
sleeping approximately the time I specify - so
uint64_t s = mach_absolute_time();
sleep(6);
double duration = mach_elapsed_time(s, mach_absolute_time());
This does take about six seconds (and I can see that from the NSLog
output as well, with a statement before and after). My issue is that
'duration' here is far less than six seconds - less than half a
second, even though the code definitely took six seconds to execute.
Perhaps there is some other subtle error with my code. I'll keep
digging.
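One thing I'm going to check is whether passing the raw
mach_absolute_time() values through double parameters is dropping
low-order bits before the subtraction. A quick variant that keeps the
timestamps as uint64_t the whole way (untested sketch; the name is just
for illustration):

#include <mach/mach_time.h>

// Untested sketch: same helper, but the timestamps stay uint64_t so the
// raw tick counts are never rounded through a double before subtracting.
double mach_elapsed_time_u64(uint64_t start, uint64_t endTime)
{
    uint64_t diff = endTime - start;
    static double conversion = 0.0;

    if (conversion == 0.0) {
        mach_timebase_info_data_t info;
        if (mach_timebase_info(&info) == KERN_SUCCESS)
            conversion = 1e-9 * (double) info.numer / (double) info.denom;
    }

    return conversion * (double) diff;
}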
Thanks,
Kris
On Apr 29, 2008, at 8:10 PM, Alison Cassidy wrote:
Hi there,
I've had exactly this experience in the past, too, while working on
a streaming server. The sleep() call regularly gets interrupted
early and cannot be relied upon for accurate delays. I used
mach_absolute_time() and mach_wait_until() with some calculations to
convert ticks into microseconds, which gave accurate delays (without
thread suspension). The man pages give a hint as to why it may be
happening:
"The sleep() function suspends execution of the calling thread
until either seconds seconds have elapsed or a signal is delivered
to the thread [...] If the sleep() function returns because the
requested time has elapsed, the value returned will be zero. If the
sleep() function returns due to the delivery of a signal, the value
returned will be the unslept amount (the requested time minus the
time actually slept) in seconds."
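Roughly, the delay code looked something like this (a from-memory
sketch, not the exact code from that project; the function name and
conversion details are mine):

#include <mach/mach_time.h>

// Sketch: convert a microsecond delay into Mach ticks and wait on an
// absolute deadline instead of calling sleep()/usleep().
static void accurate_delay_usec(uint64_t usec)
{
    mach_timebase_info_data_t info;
    mach_timebase_info(&info);

    uint64_t ns    = usec * 1000ULL;               // microseconds -> nanoseconds
    uint64_t ticks = ns * info.denom / info.numer; // nanoseconds -> Mach ticks

    mach_wait_until(mach_absolute_time() + ticks);
}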
-- Allie
On Apr 29, 2008, at 6:01 PM, Kristopher Matthews wrote:
I'm having some strange trouble with these two calls. Example code
follows.
double mach_elapsed_time(double start, double endTime)
{
    uint64_t diff = endTime - start;
    static double conversion = 0.0;

    if (conversion == 0.0) {
        mach_timebase_info_data_t info;
        kern_return_t err = mach_timebase_info(&info);
        if (err == 0)
            conversion = 1e-9 * (double) info.numer / (double) info.denom;
    }

    return conversion * (double) diff;
}
uint64_t s = mach_absolute_time();
NSLog(@"test");
double duration = mach_elapsed_time(s, mach_absolute_time());
At this point, "duration" is a reasonable value in seconds. (About
0.005 IIRC.) This code also works for measuring another block of
code I have that writes several MBs to disk - the time it reports is
in line with the difference between NSLog statements. But this:
uint64_t s = mach_absolute_time();
sleep(6);
double duration = mach_elapsed_time(s, mach_absolute_time());
Produces completely unrealistic results - this specific example
comes in at 0.387 seconds.
Any thoughts? (I know, I know. This is a BS test case. I just
happened upon it and I'm curious why this happens. I have no other
problem with timing in this manner.)
Regards,
Kris