Re: optimization/indexing
- Subject: Re: optimization/indexing
- From: Jeff Schmitz <email@hidden>
- Date: Thu, 18 Dec 2008 22:36:49 -0600
I can agree that it's more a "collection" than a "relation", but
what I don't understand is, how is a "collection" implemented
differently than a "relationship" in EOModeler? Somewhere along the
line I got the idea that using the "MutableArray" prototype was bad
juju, so if not that, and not a blob and not a relationship, then what?
Thanks,
Jeff
On Dec 18, 2008, at 10:29 PM, Chuck Hill wrote:
On Dec 18, 2008, at 7:41 PM, Jeff Schmitz wrote:
By "consider a different design," do you mean something like the
below (from the wiki)? Coming from an OO world, I perhaps took the
paradigm too far and chopped up my data into too many tables?
I'd consider taking a step back before that and reconsidering your
OO design. 65 outgoing relationships to the same object seems a
little off, even in straight Java OO. That sounds more like a
collection of objects to me.
e.g. Would denormalizing my 1 --> 65 --> 2 tables into a single
table help?
Maybe, but it might also hurt. You don't want to just blindly
assume something and start slapping your code around.
Or would a better suggestion be to use "blobs" for the 65-->2 part,
so in the end I'd have a 1-->1 relationship to a blob? If I go the
blob route, can I assume you wouldn't want the optimistic locking to
check the blob for changes?
For me, that would be a last resort.
Chuck
A common experience with large and complex object models is that
people model their objects, then do a large fetch and find that
bringing in a large set of EOs can be really slow.
Adapt your model
When you are going to be using a relational-to-object mapping
library (like EOF), you should expect the tool to influence your
requirements, enough that you adapt your model to fit the tool.
If fetching an EO is heavy/slow, then generally the fewer objects
you bring in, the faster your system will perform. So if you
collapse and denormalize many small tables into a few bigger ones,
you will be doing fewer EO loads, and probably less of the fault
and relationship management for all those little fragments, which
can result in performance savings.
You can do this a little by flattening relationships, or by using
views in the database to make things appear flatter than they are;
or you can go right to the model and actually flatten it. Arguments
can be made for each approach, depending on your requirements.
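If restructuring the model isn't practical, EOF's relationship
prefetching attacks the same fault storm from the fetch side. A
minimal sketch, using invented entity and relationship names
("Board", "squares") rather than anyone's actual model:

import com.webobjects.eocontrol.EOEditingContext;
import com.webobjects.eocontrol.EOFetchSpecification;
import com.webobjects.foundation.NSArray;

public class PrefetchExample {
    // "Board" and "squares" are made-up names; substitute your own.
    public static NSArray fetchBoardsWithSquares(EOEditingContext ec) {
        EOFetchSpecification fs = new EOFetchSpecification("Board", null, null);
        // Resolve the to-many relationship up front in one batched
        // query, so iterating squares() later fires no per-object faults.
        fs.setPrefetchingRelationshipKeyPaths(new NSArray("squares"));
        return ec.objectsWithFetchSpecification(fs);
    }
}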
You can go even further and start moving complex data structures
and relationships into blobs that you manage yourself. This
relieves EOF of managing them and often lets you speed things up,
but the cost is more code maintenance on your part; and of course
denormalizing can negatively impact the design, so you want to be
careful about how zealously you go down this path.
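For the blob route, one straightforward way to manage the structure
yourself is plain Java serialization into an NSData attribute that
the model maps to a blob column. A minimal sketch with a made-up
helper class; error handling and versioning of the serialized
classes are glossed over:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import com.webobjects.foundation.NSData;

public class BlobCodec {
    // Serialize any Serializable graph (e.g., a Map holding the 65
    // values) into NSData suitable for a blob-mapped EO attribute.
    public static NSData archive(Serializable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(value);
        out.close();
        return new NSData(bytes.toByteArray());
    }

    // Inverse: rebuild the object graph from the stored blob.
    public static Object unarchive(NSData data)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(data.bytes()));
        return in.readObject();
    }
}

And to Jeff's locking question: you would typically leave such a
blob attribute out of the optimistic-locking comparison in the
model, since comparing blobs on every save is exactly the kind of
overhead you are trying to avoid.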
On Dec 17, 2008, at 12:39 PM, Chuck Hill wrote:
On Dec 17, 2008, at 9:25 AM, Jeff Schmitz wrote:
Yes, now that I think of it, there is one of these "crazy" joins
that's probably coming into play: it joins each of my 7000 rows
with 65 rows in a different table, so that table must have about
450,000 rows. Any good optimization approaches for this type of
one-to-"very many" relationship? A recursive fetch? I can see
this table getting into the many millions of rows real fast.
I'd spend some quality time considering a different design. I
doubt it is doing a crazy join. My money would be on Mike's
prediction of insane amounts of rapidly scrolling SQL.
ERXBatchFetching can be a big help here, properly used.
Chuck
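For reference, the batch fetching Chuck mentions lives in Project
Wonder's ERXBatchFetchUtilities. A hedged sketch, reusing the
invented names from above; note the class sat in the er.extensions
package in older Wonder releases before moving to er.extensions.eof:

import com.webobjects.foundation.NSArray;
import er.extensions.eof.ERXBatchFetchUtilities;

public class BatchFetchExample {
    // "squares" is the invented relationship name from the earlier
    // sketch; dotted key paths such as "squares.owner" also work.
    public static void warmUp(NSArray boards) {
        // One query per key-path hop for the whole array of boards,
        // instead of one fault firing per board.
        ERXBatchFetchUtilities.batchFetch(boards, "squares");
    }
}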
On Tuesday, December 16, 2008, at 11:09 PM, "Chuck Hill" <email@hidden> wrote:
Either some crazy joins with other tables or something you are not
aware of is going on. 7K rows is tiny.
Chuck
On Dec 16, 2008, at 9:07 PM, Jeff Schmitz wrote:
hmm, I'm not doing an insert at all, just a read. Kind of thought
there must be something else too, though (with my limited
experience), but figured indexing would be a good thing to do
regardless, before digging in to debug the real culprit here.
Jeff
On Dec 16, 2008, at 11:01 PM, Mike Schrag wrote:
More than a minute to insert into a 7000-row table?
Do other operations on the same DB take an appropriate amount of
time? If not, I would start looking at DNS or other connectivity
issues. I can't fathom a FB DB sucking at that level.
this was my first thought, too ... something else is going on
here. I suspect that if SQL debug were turned on, you'd see tons of
faulting going on that you didn't realize, and that the insert
itself is not actually the slow thing.
ms
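For anyone wanting to check that theory, SQL debug in a stock
WebObjects app can be enabled from code; a minimal sketch, typically
dropped into the Application constructor:

import com.webobjects.foundation.NSLog;

public class SqlDebug {
    // Turn on EOF's adaptor-level logging so every generated
    // statement, including fault-triggered SELECTs, scrolls by
    // in the console.
    public static void enableSqlLogging() {
        NSLog.allowDebugLoggingForGroups(NSLog.DebugGroupSQLGeneration
                | NSLog.DebugGroupDatabaseAccess
                | NSLog.DebugGroupEnterpriseObjects);
        NSLog.debug.setAllowedDebugLevel(NSLog.DebugLevelInformational);
    }
}

Launching with -EOAdaptorDebugEnabled true should have a similar
effect, if memory serves.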
--
Chuck Hill Senior Consultant / VP Development
Practical WebObjects - for developers who want to increase their
overall knowledge of WebObjects or who are trying to solve specific
problems.
http://www.global-village.net/products/practical_webobjects