Given the following piece of code:
typedef struct {
    int64_t timeValue;
    uint32_t timeScale;
    uint32_t flags;
} SATime;

SATime SATimeMakeWithTimeInterval(CFTimeInterval timeInterval) {
    int64_t test = timeInterval * 1e6;
    int64_t test2 = timeInterval * 1e6;
    fprintf(stderr, "1. float: %f, integer: %lld, integer2: %lld\n",
            timeInterval * 1e6, test, test2);
    return SATimeMake(timeInterval * 1e6, (uint32_t)1e6);
}
Does anyone have a rational explanation for the output it produces in one of my projects?
1. float: 10000000.000000, integer: 10000000, integer2: 10000000
1. float: -5000000.000000, integer: -5000000, integer2: -5000000
1. float: 0.000000, integer: 0, integer2: 0
1. float: 10000000.000000, integer: 10000000, integer2: 10000000
1. float: 10000000.000000, integer: -9223372036854775808, integer2: 10000000
From time to time, the floating-point-to-integer conversion returns INT64_MIN.
I managed to reliably reproduce this issue in a complex project, but I did not manage to reproduce it in a simple test case (roughly the sketch shown below).
I don't know whether this is relevant, but some of these calls (in particular the one that fails) are performed asynchronously on a serial dispatch_queue, and this code lives in a shared library.
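For context, my standalone attempt was along the lines of the sketch below; it never prints INT64_MIN on my machine. This is only an approximation: SATimeMake is stubbed out here, and the serial-queue loop is my guess at mimicking how the real project drives the shared library.

/*
 * Minimal sketch of the standalone test (does NOT reproduce the problem).
 * SATimeMake is a stub; the dispatch loop only approximates the real call pattern.
 *
 * Build (macOS): clang -fblocks satime_test.c -o satime_test
 */
#include <dispatch/dispatch.h>
#include <stdint.h>
#include <stdio.h>

typedef double CFTimeInterval; /* as defined in CoreFoundation's CFDate.h */

typedef struct { int64_t timeValue; uint32_t timeScale; uint32_t flags; } SATime;

/* Stub standing in for the real SATimeMake from the shared library. */
static SATime SATimeMake(int64_t timeValue, uint32_t timeScale) {
    SATime t = { timeValue, timeScale, 0 };
    return t;
}

/* Same body as the function shown above. */
static SATime SATimeMakeWithTimeInterval(CFTimeInterval timeInterval) {
    int64_t test = timeInterval * 1e6;
    int64_t test2 = timeInterval * 1e6;
    fprintf(stderr, "1. float: %f, integer: %lld, integer2: %lld\n",
            timeInterval * 1e6, test, test2);
    return SATimeMake(timeInterval * 1e6, (uint32_t)1e6);
}

int main(void) {
    dispatch_queue_t queue = dispatch_queue_create("satime.test", DISPATCH_QUEUE_SERIAL);

    /* Hammer the conversion asynchronously, mimicking the real call pattern. */
    for (int i = 0; i < 1000; i++) {
        dispatch_async(queue, ^{
            SATimeMakeWithTimeInterval(10.0);
            SATimeMakeWithTimeInterval(-5.0);
            SATimeMakeWithTimeInterval(0.0);
        });
    }

    /* Drain the serial queue before exiting. */
    dispatch_sync(queue, ^{ });
    return 0;
}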