Re: Large Array Clogging
- Subject: Re: Large Array Clogging
- From: Drew Thoeni <email@hidden>
- Date: Sun, 6 Jan 2008 21:11:31 -0500
Memory does seem to be the problem. After investigating with verbose
GC, reconstructing the ec after each save, and expanding the memory
from 512 to 1024 MB, the app ran further into the parents (to about
#600). However, there are 18,000 of them. It seems as if there's some
memory that is not being cleaned out.
At the start of the run, GC happens every six or so parents, parents
are being processed at about six per second, and a typical GC line
reads:
[GC 20622K->14140K(260160K), 0.0961812 secs]
Near failure, GC happens multiple times between parents, the rate has
slowed to under one parent per second, and GC reads:
[Full GC 1040512K->1037278K(1040512K), 5.9161045 secs]
[Full GC 1040511K->1039654K(1040512K), 5.8477059 secs]
[Full GC 1040511K->1040016K(1040512K), 6.6661878 secs]
<snip 10 similar>
[Full GC 1040512K->1040511K(1040512K), 5.9030194 secs]
[Full GC 1040511K->1040511K(1040512K), 5.8454243 secs]
[Full GC 1040512K->1040511K(1040512K), 5.8699809 secs]
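These lines follow the HotSpot `-verbose:gc` format `before->after(total)`: early on, each collection reclaims a healthy slice of the heap, but near failure a full GC of an ~1 GB heap frees almost nothing, which is the signature of retained (still-reachable) objects rather than ordinary churn. A small sketch of how to read these lines programmatically (the class and method names are illustrative, not part of any library):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: interpreting HotSpot -verbose:gc lines of the form
// "[Full GC 1040511K->1040511K(1040512K), 5.8454243 secs]".
public class GCLine {
    static final Pattern P =
        Pattern.compile("\\[(?:Full )?GC (\\d+)K->(\\d+)K\\((\\d+)K\\)");

    // Returns the fraction of the heap still occupied after the collection.
    // Close to 1.0 after a Full GC means almost nothing was reclaimable.
    static double occupancyAfter(String line) {
        Matcher m = P.matcher(line);
        if (!m.find()) throw new IllegalArgumentException("not a GC line: " + line);
        double after = Double.parseDouble(m.group(2));
        double total = Double.parseDouble(m.group(3));
        return after / total;
    }

    public static void main(String[] args) {
        // Healthy line from early in the run: plenty reclaimed.
        System.out.println(occupancyAfter("[GC 20622K->14140K(260160K), 0.0961812 secs]"));
        // Near failure: the heap stays essentially full after a full GC.
        System.out.println(occupancyAfter("[Full GC 1040511K->1040511K(1040512K), 5.8454243 secs]"));
    }
}
```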
I understand that EOF is not that efficient, but does that mean the
memory problem can't be fixed?
Drew
On Jan 6, 2008, at 4:56 PM, Chuck Hill wrote:
On Jan 6, 2008, at 11:53 AM, Drew Thoeni wrote:
I have two tables in a parent-child relationship. Some of the
parents have a few children (say a few dozen) and others have many
children (over 100,000). I'm trying to run through the parents and
pre-calculate some statistics for their children (for example,
average, mode, standard deviation, etc.).
I'd be tempted to take a long, hard look at some of the aggregate
functions in Wonder that generate SQL for you. Batch processing is
not EOF's strongest area due to the overhead of object creation and
garbage collection.
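Even without pushing the aggregates into SQL, the statistics listed above (apart from mode, which needs a frequency map) can be computed in one streaming pass with O(1) state per parent, instead of materializing a 100,000-element array. A sketch using Welford's method, not any EOF or Wonder API:

```java
// One-pass (Welford) mean and standard deviation: constant memory
// per parent, no matter how many children stream through.
public class RunningStats {
    private long n;
    private double mean, m2;   // m2 accumulates squared deviations

    void add(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    double mean() { return mean; }

    // Sample standard deviation (n - 1 denominator).
    double stddev() { return n > 1 ? Math.sqrt(m2 / (n - 1)) : 0.0; }

    public static void main(String[] args) {
        RunningStats s = new RunningStats();
        for (double x : new double[] {2, 4, 4, 4, 5, 5, 7, 9}) s.add(x);
        System.out.println(s.mean() + " " + s.stddev());
    }
}
```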
All this is working except when I reach a specific parent (#457)
the processing just hangs. I ran this through debug and it stalls
at the creation of the array for the children.
However, the system had already processed a similar-sized array (#30
has 118,000 children), and it seems it's stuck in a loop.
Activity monitor says the java process is taking 100%.
Other background: I only save the ec after every 20 updates to the
parent records (to reduce write time), and I use
ec.invalidateAllObjects() right after saving to clear out memory.
Rather than do that, I'd create a new EC and just not retain _any_
references to the objects previously processed, the previous EC
etc. EOF snapshot counting and Java GC should take care of the rest.
Finally, there is nothing wrong with the data in parent/children of
#457 as I can start the process there (or a few parents ahead of
this) and it runs fine.
This seems like a memory problem, but I don't get an out of memory
exception.
Before OutOfMemory, there is memory starvation. I think that is
what is happening to you. Increase the heap size and see if that
delays the problem to a later parent. You could also launch with a
JVM parameter of -verbose:gc to log when GC happens.
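The same starvation signal can also be read from inside the app, which is handy for logging heap pressure per parent alongside `-verbose:gc`. A minimal sketch using only the standard `Runtime` API:

```java
// Minimal in-app heap check, usable alongside -verbose:gc:
// log heapUsedFraction() every N parents to see pressure build.
public class HeapCheck {
    static double heapUsedFraction() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory();
    }

    public static void main(String[] args) {
        System.out.printf("heap used: %.1f%%%n", heapUsedFraction() * 100);
    }
}
```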
The solution is to clean up your handling of objects and editing
contexts.
Chuck
--
Practical WebObjects - for developers who want to increase their
overall knowledge of WebObjects or who are trying to solve specific
problems.
http://www.global-village.net/products/practical_webobjects
_______________________________________________
Webobjects-dev mailing list (email@hidden)