RE: Custom data formatter for pthread_mutex_t?
- Subject: RE: Custom data formatter for pthread_mutex_t?
- From: Cem Karan <email@hidden>
- Date: Tue, 21 Aug 2007 07:29:21 -0400
On Mon, 20 Aug 2007 06:03:26 -0700, Steve Checkoway wrote:
On Aug 20, 2007, at 5:14 AM, Karan, Cem (Civ, ARL/CISD) wrote:
Hmmm.... OK, I have no experience with custom data formatters, so
please forgive my ignorance. Is it possible to write a custom
formatter that can call pthread_mutex_trylock() from within
Xcode/GDB's context on a lock in the application's context? If so,
then I can let my code run until it hits a breakpoint, know that the
code is stopped, and use trylock() to test the state of the lock.
This would also negate the need for custom macros and all the other
usual tricks, and it would be portable to any pthreads code that
needs to be debugged (the data formatter can live with Xcode, not the
project code).
As I mentioned before, calling pthread_mutex_trylock() isn't a good
solution because if you have a recursive mutex and it gets called on
the same thread, it could report being unlocked when it is really
locked.
As for custom data formatters, I posted to this list the layout of
the pthread_mutex_t data structure, so it is very easy to write a
custom data formatter. For PowerPC, at least, gcc lays out bit fields
sequentially, so the u_int32_t protocol:2, type:2, rfu:12,
lock_count:16; field is laid out with the lock count in bytes 18
and 19 of the __opaque field of the pthread_mutex_t.
I don't know why, but when I view the mutex in Xcode's debugger, the
leading underscores are stripped. That said, one simple custom data
formatter is
locked = {(bool)$VAR.opaque[18]||$VAR.opaque[19]}
It's possible (probable?) that on x86 the bit fields are laid out in
the reverse order so it's likely that you'd want
locked = {(bool)$VAR.opaque[16]||$VAR.opaque[17]}
Of course this isn't portable, but since you're just using it for
debugging purposes, it doesn't seem like that'd be much of a problem.
You can get more fancy with something like
inited = {$VAR.sig==1297437784}; locked = {(bool)$VAR.sig==1297437784&&($VAR.opaque[18]||$VAR.opaque[19])}
where 1297437784 is 'MUTX'. On i386, it might be 1481921869 for
'XTUM', I'm not sure.
--
Steve Checkoway
OK, NOW I get to see this message! :) I'm subscribed to the list in
digest mode, so I tend not to see messages for a long time unless
you CC me directly. Now I understand what Alex was telling me earlier
much better.
On Mon, 20 Aug 2007 17:52:29 +0200, Jonas Maebe wrote:
On 20 Aug 2007, at 17:30, Cem Karan wrote:
As it stands, I have used a similar method to log when I locked a
lock and when I released it (all wrapped in macros). I found that
the running behavior of the debug version and the release version
never quite matched. By using long-running statistical testing
(fuzzing) in my unit tests, I managed to get enough information that
I squashed most (all?) of my bugs, but what I really want is a way
of probing the state of locks that is guaranteed not to modify the
running behavior of the threads.
In this sense, computers are like quantum states (unless you have
external probes on the buses or something similar): observing means
modifying. But it doesn't really matter that much; see below.
The only way for that to happen is for GDB to pause all threads at
the same time, and then to probe the locks individually, which,
when you really get down and dirty, is impossible.
Not only that, it would still not particularly help you. The
scheduler could still reschedule your threads in any order it wants.
How your threads are scheduled can change with any release of the OS,
with a different number of CPUs, with a different input, or with a
change in the load of the machine (which in turn makes it dependent
on the time of day, e.g. cron, or some DNS lookup that activates
lookupd, etc.).
"Freezing the state of the program in release, looking, and then
continuing as if nothing happened" is merely one of the cases you
can test (and it may even differ between tries), and exactly the
same conditions will probably occur during one of the fuzzing tests
(which you need to run anyway) if you run enough of them. There is
no single "release behaviour" you can test.
Agreed completely. However, now that I understand what Steve and
Alex have been driving at, that will do me nicely. My fuzzer is
working fairly well, and with the data formatter I can see if I am
locking before every access to my variables (which is my main concern).
I have a few other questions to ask as well, but I suspect that I'm
crossing into darwin-dev mailing list territory now; if I am, tell me
to shift the thread over there.
Since threads can operate in any order, and since fuzzing is at best
a statistical process that depends on different runs being truly
different (the thread run order needs to change each time the program
runs), does anyone know of a method of increasing the scheduling
jitter in pthreads? I don't mind if it is a non-portable method;
this will only be for overnight debugging runs, to get an idea of
if/when things misbehave.
Thanks,
Cem Karan
Xcode-users mailing list (email@hidden)