Re: Out of Memory? Invalidate them objects!(?)


Hi Jason,

While I don't know the specifics of your design, it seems like overkill to throw both a separate peer and a nested editing context into the mix. It depends partially on the relationship of this batch loading operation to the rest of the session-based edits that your user may do, but it's quite legitimate to use the Session's default editing context for these batch operations.

If you do decide that your batch loading function should be kept in a separate "work space" from the default editing context--for example, if you're concerned that other unsaved changes may need to linger in the default editing context--then go ahead and create a peer editing context for the batch operation.

However, I can't think of any reason why you would want to create the nested child editing context as well. It's just going to add further overhead associated with another editing context, and doesn't seem to serve any functional purpose--unless I'm missing something in your design.

By default, EOF is going to create an individual database operation to save each inserted EO in your editing context (actually it's two operations: one to select the primary key, and another to insert the row into the database). So deferring the save until you've created a batch of EOs makes sense, but it's a per-row cost worth keeping in mind. If you want to ensure your code saves the EC after every 25 records, simply keep a counter and save your editing context at the appropriate time. There's no benefit in having a nested EC involved.
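The counter-based batching described above can be sketched in plain Java. This is not EOF code: `BatchLoader`, `insertRecord`, and `saveChanges` are hypothetical stand-ins, with `saveChanges` playing the role of `EOEditingContext.saveChanges()` and simply recording each batch size so the behavior is visible.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLoader {
    static final int BATCH_SIZE = 25;

    // Records the size of each batch that gets "saved" -- a stand-in
    // for what EOEditingContext.saveChanges() would commit in EOF.
    final List<Integer> savedBatchSizes = new ArrayList<>();
    int pending = 0;

    void insertRecord(String line) {
        // In EOF this is where you'd create the EO and insert it
        // into the editing context.
        pending++;
        if (pending == BATCH_SIZE) {
            saveChanges();
        }
    }

    void finish() {
        if (pending > 0) {
            saveChanges();  // flush the final partial batch
        }
    }

    void saveChanges() {
        savedBatchSizes.add(pending);
        pending = 0;
    }

    public static void main(String[] args) {
        BatchLoader loader = new BatchLoader();
        for (int i = 0; i < 60; i++) {
            loader.insertRecord("row " + i);
        }
        loader.finish();
        System.out.println(loader.savedBatchSizes);  // [25, 25, 10]
    }
}
```

The point is that a single editing context plus a counter gives you the same "save every 25" behavior the nested-EC design was meant to achieve, with none of the extra overhead.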

In terms of memory management, invalidating the root object store's snapshots is pretty heavy-handed. It's really only necessary when a large change occurs underneath the EOF layer that has wide-reaching effects on your object graph. Otherwise, you're essentially throwing away your application's entire cache and forcing it to refetch everything, which will clearly hurt the performance of your whole app.

So, as a start to addressing your memory issues, I would suggest simplifying your editing context usage: either use the default EC, or create a single peer EC for the bulk operations.

Next, I would take a look at how big your JVM max heap size has been set. The default is 64 MB, I think. On modern server machines you've probably got a lot more RAM available, and you should configure your application to take advantage of that. There is a JVM parameter, -Xmx, that lets you set a larger maximum heap for your app.
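For example (the 512 MB figure is purely illustrative; choose a value that suits your server's RAM, and note that in a WebObjects deployment this flag typically goes into the application's launch script rather than a hand-typed command line):

```shell
# Illustrative only: raise the JVM's maximum heap to 512 MB.
java -Xmx512m MyApplication
```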

Lastly, the editing context's undo manager can add quite a bit of memory overhead, because it keeps a record of every previous change to your EOs. You may want to consider reducing the undo manager's number of undo levels, or disabling undo entirely by calling myEC.setUndoManager(null).
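To see why the undo manager matters for memory, here is a plain-Java sketch. `SimpleUndoManager` is a hypothetical stand-in, not NSUndoManager itself; it just shows how an unbounded undo history pins one entry per edit in memory, while capping the number of levels bounds it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for an undo manager: records one entry per
// change, optionally capped at a fixed number of undo levels.
class SimpleUndoManager {
    private final Deque<String> history = new ArrayDeque<>();
    private final int levelsOfUndo;  // 0 means unlimited

    SimpleUndoManager(int levelsOfUndo) {
        this.levelsOfUndo = levelsOfUndo;
    }

    void registerChange(String change) {
        history.addLast(change);
        // Drop the oldest entries once we exceed the cap.
        while (levelsOfUndo > 0 && history.size() > levelsOfUndo) {
            history.removeFirst();
        }
    }

    int retainedEntries() {
        return history.size();
    }
}

public class UndoMemoryDemo {
    public static void main(String[] args) {
        SimpleUndoManager unlimited = new SimpleUndoManager(0);
        SimpleUndoManager capped = new SimpleUndoManager(10);
        for (int i = 0; i < 40000; i++) {  // one entry per imported record
            unlimited.registerChange("edit " + i);
            capped.registerChange("edit " + i);
        }
        System.out.println(unlimited.retainedEntries()); // 40000
        System.out.println(capped.retainedEntries());    // 10
    }
}
```

With a 40K-row import, an unlimited undo history holds a reference for every single change, which is exactly the kind of slow, creeping growth that ends in an out-of-memory error.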

There's more information about the topic of memory management and EOF on the WODev Wiki: Main?wikiPageAuthor=email@hidden&wikiPage=memoryManagementOfEOs

I hope that helps,


Colin Clark
Dynamic Web and Database Development Lead,
Resource Centre for Academic Technology,
University of Toronto

On Wednesday, September 29, 2004, at 12:12  AM, Jason McInnes wrote:

My app has a batch loading function where a user can
upload a tab-delimited file which is then parsed and
EOs created and saved from the contents.

When I originally wrote the batch loading function, I
was passing the user's
session().defaultEditingContext() to do the
inserting/fetching/saving. That worked fine. I was
able to parse a file with 40K+ records and create all
the appropriate EOs, no problem.

Then I thought it might be a better idea to have the
batch loader object use its own Editing Context and
also use a nested editing context where each
individual record would be parsed and an EO created
in the child context that would then save to the
parent EC (to get the validations), and then every 25
lines the parent EC would itself save to the
underlying objectstore and on into the database.

In my current configuration, for each record a new
child EC is created while the parent EC exists for the
duration of the load. The child EC is locked as soon
as it is created, and when I'm done with the line I set
childEC = null;

So this goes on and on, but the application slows down
and eventually it crashes with an out-of-memory error.

That was a head-scratcher. Eventually after
researching the lists, I tried something that had
worked for someone else. After the save of the
parentEC, I call
parentEC.rootObjectStore().invalidateAllObjects(). My
assumption, based on my limited knowledge, was that
during the batch load, all the EOs I was creating were
hanging around in the underlying ObjectStore rather
than being garbage collected and that eventually the
memory just filled up. This solution did work - no out
of memory error.

So my question is, is the defaultEditingContext doing
something to prevent this memory hogging that I'm just
not aware of? Because, again, I did not have to
invalidateAllObjects in order to get everything to
work when all I did was use the
defaultEditingContext.
=====
Jason McInnes
2 Degrees
Cell: 206.849.3680
Email: email@hidden

Do not post admin requests to the list. They will be ignored.
Webobjects-dev mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden

