Re: mach_absolute_time() vs. sleep()
From: Kristopher Matthews
To: Darwin-dev@lists.apple.com

Yeah. My curiosity is that, by the NSLog output, it is sleeping approximately the time I specify:

    uint64_t s = mach_absolute_time();
    sleep(6);
    double duration = mach_elapsed_time(s, mach_absolute_time());

This does take about six seconds (and I can see that from the NSLog output as well, having a statement before and after). My issue is that 'duration' here is far less than six seconds - less than half a second - even though the code definitely took six seconds to execute. Perhaps there is some other subtle error in my code. I'll keep digging.

Thanks,
Kris

On Apr 29, 2008, at 8:10 PM, Alison Cassidy wrote:

> Hi there,
>
> I've had exactly this experience in the past, too, while working on a
> streaming server. The sleep() call regularly gets interrupted early and
> cannot be relied upon for accurate delays. I used mach_absolute_time()
> and mach_wait_until(), with some calculations to convert ticks into
> microseconds, to get accurate delays (without thread suspension). The
> man page gives a hint as to why it may be happening:
>
> "The sleep() function suspends execution of the calling thread until
> either seconds seconds have elapsed or a signal is delivered to the
> thread [...] If the sleep() function returns because the requested time
> has elapsed, the value returned will be zero. If the sleep() function
> returns due to the delivery of a signal, the value returned will be the
> unslept amount (the requested time minus the time actually slept) in
> seconds."
>
> -- Allie

On Apr 29, 2008, at 6:01 PM, Kristopher Matthews wrote:

> I'm having some strange trouble with these two calls. Example code
> follows.
>
>     double mach_elapsed_time(double start, double endTime)
>     {
>         uint64_t diff = endTime - start;
>         static double conversion = 0.0;
>
>         if (conversion == 0.0) {
>             mach_timebase_info_data_t info;
>             kern_return_t err = mach_timebase_info(&info);
>             if (err == 0)
>                 conversion = 1e-9 * (double)info.numer / (double)info.denom;
>         }
>
>         return conversion * (double)diff;
>     }
>
>     uint64_t s = mach_absolute_time();
>     NSLog(@"test");
>     double duration = mach_elapsed_time(s, mach_absolute_time());
>
> At this point, "duration" is a reasonable value in seconds. (About
> 0.005, IIRC.) This code also works for measuring another block of code
> I have that writes several MBs to disk - the time it reports is in line
> with the difference between NSLog statements. But this:
>
>     uint64_t s = mach_absolute_time();
>     sleep(6);
>     double duration = mach_elapsed_time(s, mach_absolute_time());
>
> produces completely unrealistic results - this specific example comes
> in at 0.387 seconds. Any thoughts? (I know, I know, this is a BS test
> case. I just happened upon it and I'm curious why this happens. I have
> no other problem with timing in this manner.)
>
> Regards,
> Kris

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-dev mailing list (Darwin-dev@lists.apple.com)