On Wed, 22 Sep 2004 22:49:35 +0200, Philippe Wicker <email@hidden> wrote:
On a PB 550 MHz, a context switch from one process to another
and back to the first process takes less than 50 microseconds.
First off, while threading your code can't make the drive spin any
faster (you'll get the data when you get it), it can free up a lot of
CPU time for your app (and/or system) to be doing other things rather
than just waiting for the drive media to spin. So your app may appear
faster (or just more responsive) simply because it's not forcing your
users to wait.
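The point above can be sketched in a few lines of POSIX C: hand the file read to a worker thread so the caller (e.g. a UI loop) stays free while the drive does its work. This is a minimal illustration, not code from the thread; the function name, the scratch-file path, and the 128K size are all made up for the demo.

```c
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Argument block handed to the reader thread. */
struct read_job {
    const char *path;
    long bytes_read;
};

/* Worker: read the whole file in 64K chunks, counting the bytes. */
static void *reader_main(void *arg)
{
    struct read_job *job = arg;
    char buf[64 * 1024];
    size_t n;
    FILE *f = fopen(job->path, "rb");

    job->bytes_read = 0;
    if (f == NULL)
        return NULL;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        job->bytes_read += (long)n;
    fclose(f);
    return NULL;
}

/* Create a 128K scratch file, read it on a worker thread while the
 * caller could keep doing other work, then join and return the size. */
long threaded_read_demo(void)
{
    char path[] = "/tmp/ioXXXXXX";
    int fd = mkstemp(path);
    char chunk[1024];
    struct read_job job;
    pthread_t tid;

    memset(chunk, 'x', sizeof chunk);
    for (int i = 0; i < 128; i++)
        write(fd, chunk, sizeof chunk);
    close(fd);

    job.path = path;
    pthread_create(&tid, NULL, reader_main, &job);
    /* ... the main loop could keep servicing events here ... */
    pthread_join(tid, NULL);
    unlink(path);
    return job.bytes_read;
}
```

The read itself is no faster, of course; the gain is only that the thread calling pthread_create isn't blocked while the worker waits on the disk.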
On a Dual 1GHz G4 the switching latency was about 40 mSecs. It's about
mSecs on a G5.
Do you mean 40 milliseconds or 40 microseconds? 40 milliseconds is
an enormous time (in the computer world at least). When I measured
the context-switch time between POSIX threads I found a time of
less than 50 microseconds on my PB550.
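The original benchmark isn't shown, but a measurement like the one described is commonly done by bouncing a byte between two threads over a pair of pipes and dividing the elapsed time by the number of switches. Here is a sketch under that assumption; note it measures pipe read/write overhead together with the switch itself, so it gives an upper bound, and the function name and iteration count are arbitrary.

```c
#include <assert.h>
#include <pthread.h>
#include <sys/time.h>
#include <unistd.h>

#define ROUND_TRIPS 10000

static int ping[2], pong[2];   /* two pipes: main->echo and echo->main */

/* Echo thread: receive a byte on one pipe, answer on the other. */
static void *echo_main(void *arg)
{
    char c;
    (void)arg;
    for (int i = 0; i < ROUND_TRIPS; i++) {
        read(ping[0], &c, 1);
        write(pong[1], &c, 1);
    }
    return NULL;
}

/* Bounce a byte between two threads and return the mean cost of one
 * one-way switch in microseconds (each round trip is two switches). */
double switch_cost_us(void)
{
    pthread_t tid;
    struct timeval t0, t1;
    char c = 'p';

    pipe(ping);
    pipe(pong);
    pthread_create(&tid, NULL, echo_main, NULL);

    gettimeofday(&t0, NULL);
    for (int i = 0; i < ROUND_TRIPS; i++) {
        write(ping[1], &c, 1);
        read(pong[0], &c, 1);
    }
    gettimeofday(&t1, NULL);
    pthread_join(tid, NULL);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6
              + (t1.tv_usec - t0.tv_usec);
    return us / (2.0 * ROUND_TRIPS);
}
```

On any reasonably modern machine this comes out in the microsecond range, which is why a 40-millisecond figure looks suspicious.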
In my bench, the files were not read one after the other, but in an
interleaved manner. That is, I was reading a 64K chunk from the first
file, then a 64K chunk from the second file, etc., and iterating again
over all the files to read the next 64K chunks (it was done this way
because it more or less emulates the way files are actually read in
the real application). Because I was switching between files for each
chunk, I assumed that the disk head would move a lot. Hence the idea
to issue concurrent disk read commands, hoping that this could allow
the system (the driver or the device firmware) to reschedule these
commands in order to minimize head movement. It appears that it
doesn't work.
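For concreteness, the interleaved access pattern described above can be sketched like this: open every file, then loop over them reading one 64K chunk from each per pass until all are exhausted. This is a reconstruction of the stated pattern, not the original bench; the file count, sizes, and paths are invented for the demo.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NFILES 3
#define CHUNK  (64 * 1024)

/* Create NFILES scratch files (file i holds i+1 chunks), then read
 * them the way the bench did: one 64K chunk from file 0, one from
 * file 1, ..., looping until every file hits EOF.  Returns the total
 * number of bytes read. */
long interleaved_read_demo(void)
{
    char paths[NFILES][32];
    FILE *fp[NFILES];
    char buf[CHUNK];
    long total = 0;
    int open_count = NFILES;

    memset(buf, 'x', sizeof buf);
    for (int i = 0; i < NFILES; i++) {
        snprintf(paths[i], sizeof paths[i], "/tmp/ileave%d.tmp", i);
        FILE *w = fopen(paths[i], "wb");
        for (int c = 0; c <= i; c++)
            fwrite(buf, 1, CHUNK, w);
        fclose(w);
        fp[i] = fopen(paths[i], "rb");
    }

    /* Round-robin over the files, one chunk per file per pass.  Each
     * pass forces a seek between files, which is why the poster
     * expected heavy head movement. */
    while (open_count > 0) {
        for (int i = 0; i < NFILES; i++) {
            if (fp[i] == NULL)
                continue;
            size_t n = fread(buf, 1, CHUNK, fp[i]);
            total += (long)n;
            if (n < CHUNK) {               /* EOF: retire this file */
                fclose(fp[i]);
                fp[i] = NULL;
                open_count--;
            }
        }
    }
    for (int i = 0; i < NFILES; i++)
        unlink(paths[i]);
    return total;
}
```

Issuing the same chunks concurrently (one thread per file, or async I/O) is the variant the poster tried, in the hope that the driver or firmware would reorder the queued requests; as he reports, it made no measurable difference.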
Yeah, this is true. You should have a conversation with our file I/O
engineers about optimizing throughput in this situation.
I'd be very happy to have a conversation with them. What is the best
list for this? Any other way?
Schizophrenic Optimization Scientist
Apple Developer Technical Support (DTS)
Mt-smp mailing list (email@hidden)