Re: java.lang.outofmemory
- Subject: Re: java.lang.outofmemory
- From: Mike Schrag <email@hidden>
- Date: Wed, 8 Mar 2006 00:11:56 -0500
This topic came up in January too -- hopefully nobody minds me
quoting myself :) This is a description of the current behavior in
5.3 that one might be able to gather if one were to, say, decompile
and review the entire process -- not that I would ever do this or
condone it, of course -- but if you DID, you would probably gather
exactly this info, which is pretty well documented (and seems to
behave to-spec) in the 5.2 release notes:
"As of 5.2, the way it works is: The snapshots in EODatabase have a
reference count. Each editing context that fetches an EO increments
the reference count. The EC holds onto that EO via a WeakReference.
When the WeakReference is reclaimed, the snapshot reference count can
decrease (note CAN, not IMMEDIATELY WILL -- the editing context keeps
reference queue which is only processed periodically). When the
count gets to zero, the database forgets the snapshot. If you have
entity caching enabled, then EODatabase ignore reference count (or
keeps it at "1" as a minimum) and it will not go away in a read-only
scenario. If you modify any entity of that type and saveChanges in
your EditingContext, a cached entity's cache will be entirely
flushed. (NB: Keep this in mind, because if you are caching a large
amount of data that is writable, it will NOT be very smart about
updating that cache -- It's blown away with every update and then it
immediately reloads the entire set of objects for that entity at the
next access)
If you have retainsAllRegisteredObjects enabled on your editing
context, it will NOT use WeakReferences. Under this circumstance,
the EO reference count is only decreased when 1) you dispose the
editing context or 2) you forget or invalidate the object.

When you modify an object in an editing context, the editing context
keeps a strong reference to the object until you saveChanges (or
revert, reset, etc.), at which point the strong references are
cleared and the only remaining reference is the WeakReference, like
before.
If you have an undo manager enabled, it will keep a strong reference
to the affected EOs as long as the undo entry is around.

I do wonder if ECs should be using SoftReferences instead of
WeakReferences ... It would seem to be more friendly to the users of
those EOs.

If you are using WO pre-5.2, then none of the WeakReference stuff
applies, and everything is purely done with snapshot reference
counting -- it should behave like retainsAllRegisteredObjects = true
in 5.2."
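
None of that is anything you could confirm without the aforementioned
decompiling, of course, but the WeakReference-plus-ReferenceQueue
pattern the release notes describe is easy to sketch in plain Java.
To be clear, this is NOT the actual EOF source -- every name here
(SnapshotStore, processReclaimedReferences, etc.) is made up purely
to illustrate the mechanism:

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the reference-counting scheme described
// above; not WO source, just the idiom.
public class SnapshotStore {
    private final Map<Object, Integer> refCounts = new HashMap<Object, Integer>();
    private final Map<Object, Object> snapshots = new HashMap<Object, Object>();
    private final ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
    private final Map<Reference<?>, Object> gidsByRef = new HashMap<Reference<?>, Object>();

    // An editing context registers an EO: bump the snapshot's count
    // and hold the EO itself only weakly.
    public WeakReference<Object> register(Object gid, Object eo, Object snapshot) {
        Integer count = refCounts.get(gid);
        refCounts.put(gid, count == null ? 1 : count + 1);
        snapshots.put(gid, snapshot);
        WeakReference<Object> ref = new WeakReference<Object>(eo, queue);
        gidsByRef.put(ref, gid);
        return ref;
    }

    // Called periodically (CAN decrease, not IMMEDIATELY WILL): drain
    // the queue of reclaimed EOs and decrement counts; at zero, the
    // store "forgets" the snapshot.
    public void processReclaimedReferences() {
        Reference<?> ref;
        while ((ref = queue.poll()) != null) {
            Object gid = gidsByRef.remove(ref);
            Integer count = refCounts.get(gid);
            if (count == null) continue;
            if (count - 1 <= 0) {
                refCounts.remove(gid);
                snapshots.remove(gid);
            } else {
                refCounts.put(gid, count - 1);
            }
        }
    }
}

The key point is the CAN vs. IMMEDIATELY WILL distinction: nothing
decrements a count until somebody gets around to draining the queue.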
> [...] thought they should be. Disposing of the editing context
> fixed / appeared to fix this problem.
I ran into this same thing ... I'm guessing I must have been using
pre-5.2, because when this topic came up again prior to January, I
was going to post about it, but thought "I should write a test case
and verify it" -- and I could never reproduce the behavior I saw re:
holding onto EOs. I had a test case trying all sorts of wacky things
with updating, inserting, etc., and they really did free up like the
docs said. Go figure.
> [...] easier to manage than setting the undo manager to null and
> remembering that I need it to process deletions.
A non-null undo manager for deletions is another one I could have
sworn I needed, but I tried that one too and have yet to actually
require it. I didn't test that one nearly as extensively, though, so
maybe there are certain border cases that do require it? If anyone
knows, I'd be curious to find out.
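
If anyone wants to poke at it, a minimal version of the experiment
might look like the following -- deletions against an EC with no undo
manager at all; the fetch/delete logic is left as a placeholder:

import com.webobjects.eocontrol.EOEditingContext;

public class DeletionTest {
    // If deletions really required a non-null undo manager,
    // saveChanges() is where you'd expect it to blow up.
    public void deleteWithoutUndo() {
        EOEditingContext ec = new EOEditingContext();
        ec.setUndoManager(null);  // no undo stack holding strong references
        try {
            // ... fetch your EOs and call ec.deleteObject(eo) on each ...
            ec.saveChanges();
        } finally {
            ec.dispose();
        }
    }
}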
> I make it a practice to call dispose() on an EC if I know that it
> will not be used again. I have not measured how much practical
> benefit this has over allowing the finalizer to call dispose(), but
> it is an easy practice to follow and causes no harm in any event.
The biggest thing is how long it takes the GC to get to your object.
If you call dispose(), you won't hurt anything, and you immediately
decrement the snapshot reference counts. If your EC just falls out
of scope or you set it to null, it's a tossup when the GC will get
around to finalizing your EC and disposing of it for you. I usually
do it when I know I have a long process. Again, it can only help.
However (referring back to a previous message in this thread),
calling System.gc() is usually a bad idea -- all you're doing is
requesting a GC, not ordering one.
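
For what it's worth, the dispose-in-finally pattern I mean looks
roughly like this (just a sketch of the practice, nothing WO-specific
beyond the two EOEditingContext calls):

import com.webobjects.eocontrol.EOEditingContext;

public class LongRunningJob {
    public void run() {
        EOEditingContext ec = new EOEditingContext();
        try {
            // ... fetch, modify, saveChanges() as the job goes along ...
        } finally {
            // Deterministic cleanup: the snapshot reference counts drop
            // now, not whenever the GC happens to finalize the EC.
            ec.dispose();
        }
        // By contrast, System.gc() only *requests* a collection -- the
        // VM is free to ignore it, so it's no substitute for dispose().
    }
}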
> As for invalidating objects, this is something to be avoided. I
> admit that I have been driven to desperation a couple of times and
> used this, but only with regrets and reservations.
Yeah -- I totally agree ... The couple of times I resorted to this, I
regretted it later, because it ended up screwing me. The biggest
thing is that you toss snapshots that people might be in the middle
of editing, and that's going to cause really funky problems. It's a
really heavy-handed way to go.
So re: the original problem -- it's certainly possible that you
really do just require more RAM. At a certain point it's not
actually a memory leak; you just have higher memory requirements than
your VM is giving you. You can only REALLY figure that out with a
profiler (as I've mentioned here before, I use JProfiler, which has
WO support built in and works really well -- they also have a new
universal binary in beta). The undo manager has definitely messed
with me in the past ... If you're doing big import stuff, at least
lower the depth on it, but ideally for inserts turn it off entirely
(set it to null; see the sketch below). 5.2 defaults it to a depth
of 10 (vs. infinite in previous versions -- eek), but if you're doing
huge transactions, you don't want it at all. If you're still having
problems, I would grab the time-limited demo of JProfiler and give
your app a run ... It may be very revealing. JProfiler has saved me
a couple of times.
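
Here's a rough sketch of both options for a big import -- records and
importOneRecord() are placeholders for your own data and insert
logic:

import com.webobjects.eocontrol.EOEditingContext;
import com.webobjects.foundation.NSArray;

public class BigImport {
    public void run(NSArray records) {
        EOEditingContext ec = new EOEditingContext();
        // Either cap the undo depth (5.2's default is 10; earlier
        // versions were unbounded)...
        ec.undoManager().setLevelsOfUndo(10);
        // ...or, better for huge inserts, drop the undo manager entirely.
        ec.setUndoManager(null);
        try {
            for (int i = 0; i < records.count(); i++) {
                importOneRecord(ec, records.objectAtIndex(i));
                if (i > 0 && i % 500 == 0) {
                    ec.saveChanges();  // flush in batches so dirty EOs don't pile up
                }
            }
            ec.saveChanges();
        } finally {
            ec.dispose();
        }
    }

    private void importOneRecord(EOEditingContext ec, Object raw) {
        // hypothetical: build and insert one EO from a raw record
    }
}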
ms