Kernel Panic, Backtrace, GDB, and Optimizations
- Subject: Kernel Panic, Backtrace, GDB, and Optimizations
- From: Frank Thomas <email@hidden>
- Date: Wed, 21 Sep 2005 16:08:33 -0400
I have been doing some debugging work on an occasional kernel panic we have been having with our kernel extension (IP Filter) installed. Some of the backtraces we have gotten have made perfect sense: you can follow the backtrace back through the code in the proper execution order. A few, however, show call sequences that simply can't happen in the code; for instance, a function appearing as if it were called from another function that never calls it. The ones I have had the best luck backtracing with GDB are the builds compiled with the "Fastest, Smallest" optimization setting.
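By those names I mean the optimization settings in the Xcode build panel, which as far as I can tell map onto the GCC flags roughly like this:

    None               -> -O0   (GCC_OPTIMIZATION_LEVEL = 0)
    Faster             -> -O2   (GCC_OPTIMIZATION_LEVEL = 2)
    Fastest, Smallest  -> -Os   (GCC_OPTIMIZATION_LEVEL = s)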
When compiled with "Faster" optimization, the backtrace didn't make sense at all. Maybe something between the debug symbols and the optimizations is getting screwed up and pointing me at the wrong place in the code? I'm always generating a symbol file from my kext using the base address reported for the kext in the kernel panic, and loading it before I do the backtrace, so I am pretty sure it isn't a case of looking at a bad symbol file.
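In case it clarifies what I'm doing, the steps look roughly like this (the bundle ID, directory, and addresses below are just placeholders, not our real values):

    # Generate a relocated symbol file for the kext, using the load address
    # reported for it in the panic log:
    kextload -n -s /tmp/syms -a com.example.IPFilter@0x32f5000 IPFilter.kext

    # Load those symbols in gdb against the matching kernel image, then map
    # a return address from the panic backtrace to a source line:
    gdb /mach_kernel
    (gdb) add-symbol-file /tmp/syms/com.example.IPFilter.sym
    (gdb) list *0x32f6b40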
I say all that to ask this: what has your experience been with kernel panics and compiler optimizations? Would it be better for me to run with no optimizations during a process like this? Are there optimizations that should simply be avoided when building a kernel extension? I appreciate any feedback you may have.
Sincerely,
Frank Thomas