Re: Performance problems under 10.2
First of all, I don't think the fact that it runs at nice 20 is a problem. I generally run SETI reniced to 20, but even without renicing the effect is the same.

OK, first I searched for this:

  "(This is the last message displayed while the client works. To see
  progress messages between now and completion of processing, operate the
  client with the -verbose option. You can also see the progress of the
  client by looking at the 'prog=' line in the state.sah file."

That's very near the end of the ktrace file; everything after that point is general operation, not startup or anything. I ran the program for about 10 seconds or so, and only 2% of the ktrace file comes after the startup message. The dump was produced with "kdump -t cnisw -R". This is a representative sample from that area:

  5132 setiathome 0.000121 RET   read 2271/0x8df
  5132 setiathome 0.000052 CALL  close(0x4)
  5132 setiathome 0.000056 RET   close 0
  5132 setiathome 1.813266 CALL  open(0xbffff768,0,0x1b6)
  5132 setiathome 0.000198 NAMI  "outfile.sah"
  5132 setiathome 0.000094 RET   open 4
  5132 setiathome 0.000082 CALL  fstat(0x4,0xbffff590)
  5132 setiathome 0.000058 RET   fstat 0
  5132 setiathome 0.000063 CALL  fstat(0x4,0xbffff6a0)
  5132 setiathome 0.000051 RET   fstat 0
  5132 setiathome 0.000050 CALL  lseek(0x4,0,0,0x1)
  5132 setiathome 0.000049 RET   lseek 0
  5132 setiathome 0.000050 CALL  lseek(0x4,0,0,0)
  5132 setiathome 0.000103 RET   lseek 0
  5132 setiathome 0.000060 CALL  read(0x4,0x78050,0x1000)
  5132 setiathome 0.000099 GIO   fd 4 read 2271 bytes
  .....
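To make the traced pattern concrete: the open/fstat/fstat/lseek/lseek/read/close sequence above is what one polling pass over "outfile.sah" looks like at the syscall level. Here is a minimal Python sketch that issues the same sequence via the os module (the function name and structure are my own illustration, not anything from the SETI client source):

```python
import os

def poll_outfile(path="outfile.sah"):
    """Issue the same syscall sequence seen in the kdump sample:
    open, fstat (twice), lseek (twice), read, close -- once per poll."""
    fd = os.open(path, os.O_RDONLY)      # CALL open(...)        -> RET open 4
    st = os.fstat(fd)                    # CALL fstat(fd, ...)
    os.fstat(fd)                         # second fstat, as in the trace
    os.lseek(fd, 0, os.SEEK_CUR)         # CALL lseek(fd,0,SEEK_CUR)
    os.lseek(fd, 0, os.SEEK_SET)         # CALL lseek(fd,0,SEEK_SET)
    data = os.read(fd, 0x1000)           # CALL read(fd, buf, 0x1000)
    os.close(fd)                         # CALL close(fd)
    return st.st_size, data
```

Every call to poll_outfile() costs the same half-dozen syscalls the trace shows recurring about once per second.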
  5132 setiathome 0.000147 RET   read 2271/0x8df
  5132 setiathome 0.000056 CALL  close(0x4)
  5132 setiathome 0.000068 RET   close 0
  5132 setiathome 0.000062 CALL  getrusage(0,0xbffff768)
  5132 setiathome 0.000060 RET   getrusage 0
  5132 setiathome 0.001635 CALL  open(0xbffff718,0,0x1b6)
  5132 setiathome 0.000079 NAMI  "outfile.sah"
  5132 setiathome 0.000064 RET   open 4
  5132 setiathome 0.000057 CALL  fstat(0x4,0xbffff540)
  5132 setiathome 0.000052 RET   fstat 0
  5132 setiathome 0.002187 CALL  fstat(0x4,0xbffff650)
  5132 setiathome 0.000099 RET   fstat 0
  5132 setiathome 0.000052 CALL  lseek(0x4,0,0,0x1)
  5132 setiathome 0.000048 RET   lseek 0
  5132 setiathome 0.000051 CALL  lseek(0x4,0,0,0)
  5132 setiathome 0.000047 RET   lseek 0
  5132 setiathome 0.000050 CALL  read(0x4,0x78050,0x1000)
  5132 setiathome 0.000078 GIO   fd 4 read 2271 bytes
  ......
  5132 setiathome 0.000092 RET   read 2271/0x8df
  5132 setiathome 0.000053 CALL  close(0x4)
  5132 setiathome 0.000058 RET   close 0
  5132 setiathome 1.906561 CALL  open(0xbffff768,0,0x1b6)
  5132 setiathome 0.000205 NAMI  "outfile.sah"
  5132 setiathome 0.000092 RET   open 4
  5132 setiathome 0.000118 CALL  fstat(0x4,0xbffff590)
  5132 setiathome 0.000059 RET   fstat 0
  5132 setiathome 0.000063 CALL  fstat(0x4,0xbffff6a0)
  5132 setiathome 0.000053 RET   fstat 0
  5132 setiathome 0.000052 CALL  lseek(0x4,0,0,0x1)
  5132 setiathome 0.000050 RET   lseek 0
  5132 setiathome 0.000052 CALL  lseek(0x4,0,0,0)
  5132 setiathome 0.000049 RET   lseek 0
  5132 setiathome 0.000053 CALL  read(0x4,0x78050,0x1000)
  5132 setiathome 0.000099 GIO   fd 4 read 2271 bytes

There are a few things to notice here. First, the client is performing lots of superfluous opens and closes of "outfile.sah". But "lots" here is only about one per second, which is completely insignificant. In fact, there are long stretches (up to two seconds) where the program makes no syscalls at all. So it's pretty clear that syscalls aren't the cause of this. Note that while the trace was running, the process was still using 35% or more system time, even under ktrace.
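Even though the once-a-second reopen is insignificant here, the standard fix for that pattern is worth noting: stat the path each poll and only reopen and reread when the modification time changes. A hedged sketch, assuming a polling consumer like the one in the trace (the CachedFile wrapper is my own illustration, not SETI client code):

```python
import os

class CachedFile:
    """Reread a file only when stat() reports a new mtime,
    instead of doing open/read/close on every poll."""

    def __init__(self, path):
        self.path = path
        self.mtime = None   # mtime of the last copy we read
        self.data = b""

    def read(self):
        st = os.stat(self.path)           # one syscall per poll...
        if st.st_mtime != self.mtime:     # ...open/read/close only on change
            with open(self.path, "rb") as f:
                self.data = f.read()
            self.mtime = st.st_mtime
        return self.data
```

One caveat: on filesystems with one-second mtime granularity, two writes within the same second can be missed, so this trades strict freshness for fewer syscalls.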
One thing that could be a problem is this:

  PID   COMMAND      %CPU   TIME     FAULTS  ... BSDSYSCAL ... CSWITCH
  5169  top           0.6%  0:00.55    2075       22527          419
  5168  setiathome   56.4%  1:01.51   13954       17563        29892

I've dropped fields that I don't think are important (generally very low numbers). This is setiathome after running for only a few seconds, maybe 10 or 20. Notice that it has already taken almost 14,000 faults and almost 30,000 context switches, running at nice 20. After four minutes of CPU time (maybe 5 minutes of real time), with the same fields dropped, it looks like this:

  5168  setiathome   71.0%  4:04.97   45376  ...  68898  ...  113043

That's 45,000 faults, roughly 70,000 syscalls, and 113,000 context switches. Now one last bit of information. Here's the top header, not in event mode:

  Processes: 51 total, 4 running, 47 sleeping... 145 threads   12:02:25
  Load Avg: 1.65, 1.44, 1.36   CPU usage: 75.5% user, 24.5% sys, 0.0% idle
  SharedLibs: num = 42, resident = 16.2M code, 1.40M data, 5.38M LinkEdit
  MemRegions: num = 3759, resident = 84.4M + 5.05M private, 62.0M shared
  PhysMem: 41.5M wired, 108M active, 205M inactive, 354M used, 93.5M free
  VM: 1.37G + 34.0M   15734(0) pageins, 14783(0) pageouts

What does this tell us? First of all, there's no shortage of RAM, so there's no reason SETI should be taking that many faults. The page faults are probably the cause of the context switches, and together they're probably the cause of much of the system time. Just a guess. The other interesting thing is that top counts roughly 70,000 syscalls, but the ktrace shows nowhere near that many. ktrace didn't pick up more than about ten syscalls per second, meaning that after 5 minutes of steady-state operation there should have been no more than about 3,000 syscalls, plus whatever happened while the program was starting (roughly the 18,000 top showed shortly after launch). So: 70,000 - 18,000 - 3,000 = 49,000 syscalls unaccounted for. It looks to me like the performance problem comes from two places: the extra syscalls, and the completely unnecessary page faults.
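The accounting in that last paragraph can be written out explicitly. All figures come from the top and ktrace observations above; the 18,000 startup estimate is the syscall count top reported shortly after launch:

```python
# Syscall accounting after ~5 minutes of runtime (figures from the post).
total_syscalls   = 70_000   # BSDSYSCAL column in top after ~5 min
startup_syscalls = 18_000   # roughly what top showed just after startup
traced_rate      = 10       # ktrace saw at most ~10 syscalls/second
runtime_seconds  = 5 * 60

steady_state = traced_rate * runtime_seconds            # ~3,000 syscalls
unaccounted  = total_syscalls - startup_syscalls - steady_state
print(unaccounted)   # prints 49000
```

Those 49,000 syscalls appear in top's counters but have no visible counterpart in the ktrace output, which is the puzzle.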
I might even go so far as to say that fixing both of these might give all programs a significant boost across the board.

Tyler Ward
tjw19@columbia.edu
_______________________________________________
darwin-kernel mailing list | darwin-kernel@lists.apple.com
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/darwin-kernel
Do not post admin requests to the list. They will be ignored.