On Mar 18, 2014, at 08:43, Dallman, John <email@hidden> wrote:
The way conditional breakpoints work is that the breakpoint instruction is put into the code, and the app starts. Each time the breakpoint is hit, it is looked up and found to be a conditional breakpoint; the expression is evaluated and found to be false; the count is incremented; the replaced instruction is run; and then the app is resumed.
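(For concreteness, setting such a breakpoint in lldb looks something like the line below; the file, line, and condition are invented for illustration.)

    (lldb) breakpoint set --file loop.c --line 9 --condition 'i == 900000'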
On Mar 18, 2014, at 08:52, Stefan Haller <email@hidden> wrote:
Whenever the debugger hits the breakpoint, it needs to stop, evaluate the expression, see that it's false, and continue. This works well if the breakpoint isn't hit very often, but if it is in a very tight loop, it's going to be super slow.
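(A minimal example of that pathological case, hypothetical code for illustration only: a conditional breakpoint on the line inside this loop traps a million times to catch one iteration.)

    /* loop.c -- a conditional breakpoint on line 9 traps on all
     * 1,000,000 iterations even though the condition is true once. */
    #include <stdio.h>

    int main(void) {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i;    /* conditional breakpoint here: i == 900000 */
        }
        printf("sum = %ld\n", sum);
        return 0;
    }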
On Mar 18, 2014, at 09:27, Jens Alfke <email@hidden> wrote:
And in general a breakpoint instruction generates a type of CPU trap/exception, the same category as a segfault or divide-by-zero. These are very expensive at the CPU-cycles level, requiring a context switch and saving CPU state to RAM. At that point the debugger has to read the CPU state, figure out which breakpoint the stopped address corresponds to, and so forth. Finally, on the way back to your code, there's another context switch.
This may all be true, but I don’t think it addresses Roland’s complaint. He’s saying that this procedure is slower than the replaced instruction, not even by a factor of (say) 10,000, but by a factor of something like a *million*. Even if there are several context switches involved, are we saying that a context switch takes 200,000 or so CPU cycles? Multitasking wouldn’t be feasible if it were that slow.
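A back-of-envelope calculation (my numbers, purely illustrative) makes the gap concrete:

    replaced instruction:            ~1-5 cycles
    trap + two context switches:     ~10^3-10^4 cycles  (a 10^3-10^4x slowdown)
    observed slowdown of ~10^6x:     ~10^6 cycles per breakpoint hit

Even charging generously for the traps, roughly two orders of magnitude remain unaccounted for; that time must be going to work the debugger does while the process is stopped.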
If the irreducible essence of testing a condition really is this much slower, then obviously there's no solution. However, given how little hard information we have to go on here, it's hard to believe that the essential cost isn't several orders of magnitude lower.
My guess is that the debugger is doing all of the work related to evaluating the condition (including possibly reading and decoding debug information from symbol files) every time the breakpoint location is hit, as if it were executing an 'expr' command each time. If so, this seems likely to be improvable.
All three of the above responders mentioned the "obvious" workaround of putting an if-test in the code and a non-conditional breakpoint inside the if-block. If there's no direct optimization that will reduce the 1,000,000x factor to something more reasonable, then surely the debugger is capable of inserting such an if-test in the code itself, isn't it?
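(For reference, that hand-rolled workaround looks something like this, again hypothetical code; __builtin_debugtrap() is a clang intrinsic, or one can simply set an ordinary, unconditional breakpoint on the body of the if.)

    /* Hand-rolled conditional breakpoint: the i == 900000 test compiles
     * to a compare and branch and runs at full speed; the debugger is
     * entered only when the condition is actually true. */
    #include <stdio.h>

    int main(void) {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            if (i == 900000) {
                __builtin_debugtrap();  /* clang intrinsic; or set a plain
                                           breakpoint on this line instead */
            }
            sum += i;
        }
        printf("sum = %ld\n", sum);
        return 0;
    }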