Re: [Core Data] Improve save performance ?
- Subject: Re: [Core Data] Improve save performance ?
- From: Eric Morand <email@hidden>
- Date: Tue, 17 Jan 2006 11:20:54 +0100
You already know that committing one object at a time has an overhead of
around 0.5s... so why are you saving one at a time? Change the test to
save at the end to see how long a bulk operation takes.
Why do you have to commit after every change? Why can't you just build
up a set of dirty objects and commit them on a more reasonable
schedule?
I've already explained why I can't risk data loss by saving changes
only when the application quits.
What would be a reasonable schedule? 5 minutes? In 5 minutes, my
software can import thousands of transactions from XML files (for
example). If the Mac fails at 4:59, that is a lot of work for the
user to redo. Well, not a LOT of work, I admit, but I still can't
accept that a power failure could annihilate my users' work. I'm
ready to lose non-submitted work, but not submitted work. It makes
no sense to most users (and they are right) that a modification
already submitted to the database is still at the mercy of a power
failure. That's just as unacceptable as a "Save as..." menu command
that would display a dialog: "Your save command will be written to
disk in 5 minutes because we'd like to group it with other save
commands to compensate for the poor performance of the database
engine. Sorry about this. Pray that nothing bad happens."
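(For illustration only: a middle ground would be to commit once per logical unit of work, per user submit or per imported file, rather than per object or on a timer. A minimal sketch, in which importTransactionsFromXML: and the Transaction entity are hypothetical stand-ins for the real import code:)

- (void)importTransactionsFromXML:(NSArray *)parsedTransactions
{
    NSManagedObjectContext *context = [self managedObjectContext];
    NSEnumerator *enumerator = [parsedTransactions objectEnumerator];
    NSDictionary *values = nil;

    while ((values = [enumerator nextObject]) != nil)
    {
        // Build up dirty objects in memory; nothing touches the disk yet.
        NSManagedObject *transaction =
            [NSEntityDescription insertNewObjectForEntityForName:@"Transaction"
                                          inManagedObjectContext:context];
        [transaction setValuesForKeysWithDictionary:values];
    }

    // One commit (one disk transaction) for the whole batch; submitted
    // work is durable as soon as the import finishes.
    NSError *error = nil;
    if (![context save:&error])
        NSLog(@"Import save failed: %@", error);
}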
Anyway, I'm wondering whether I should move the save command to
another thread. Do you think that could work? Could it create other
problems?
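(A minimal sketch of the threaded idea, assuming the persistentStoreCoordinator accessor from the standard Core Data application template. The big caveat: NSManagedObjectContext is not thread-safe, so the background thread needs its own context sharing the coordinator, and the changes to be saved would have to be made in that context, not the main one. performBackgroundSave is a hypothetical method name:)

- (void)saveInBackground
{
    [NSThread detachNewThreadSelector:@selector(performBackgroundSave)
                             toTarget:self
                           withObject:nil];
}

- (void)performBackgroundSave
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Each thread gets its own context on the shared coordinator.
    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
    [context setPersistentStoreCoordinator:[self persistentStoreCoordinator]];

    // ... recreate or fetch the objects to change in this context ...

    NSError *error = nil;
    if (![context save:&error])
        NSLog(@"Background save failed: %@", error);

    [context release];
    [pool release];
}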
What is the point of this statement? Who has said anything about being
satisfied with anything... this type of statement can easily be read
to imply that anyone but you is an idiot.
I'm sorry about this, that's not what I wanted to imply. I was just
wondering why I couldn't find anything on the web about the poor
saving performance of the SQLite persistent store.
Eric.
On 17 Jan 2006, at 01:30, Shawn Erickson wrote:
On 1/16/06, Eric Morand <email@hidden> wrote:
Anyway, the SQLite persistent store is unusable for me. Here is what
I've been doing to test its performance:
- (IBAction)saveAction:(id)sender
{
    NSManagedObjectContext *context = [self managedObjectContext];
    NSEntityDescription *anEntity = [NSEntityDescription entityForName:@"Account"
                                                inManagedObjectContext:context];
    NSError *error = nil;
    int index = 0;

    while (index < 1000)
    {
        // Insert one new Account object...
        NSManagedObject *anObject = [[NSManagedObject alloc]
                                          initWithEntity:anEntity
                          insertIntoManagedObjectContext:context];
        [anObject release]; // the context retains the inserted object

        // ...and commit it to the store immediately.
        NSLog(@"save");
        if (![context save:&error])
            NSLog(@"save failed: %@", error);
        NSLog(@"end save");

        index++;
    }
}
Well, this process (saving 1000 different objects in the store, one at
a time) took... 7 minutes !!!
OMG OMG (sorry !!! bugs me)
You already know that committing one object at a time has an overhead of
around 0.5s... so why are you saving one at a time? Change the test to
save at the end to see how long a bulk operation takes.
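(A minimal sketch of that modified test, identical to the original except that the save: call moves outside the loop; bulkSaveAction: is just a hypothetical name:)

- (IBAction)bulkSaveAction:(id)sender
{
    NSManagedObjectContext *context = [self managedObjectContext];
    NSEntityDescription *anEntity = [NSEntityDescription entityForName:@"Account"
                                                inManagedObjectContext:context];
    int index = 0;

    while (index < 1000)
    {
        NSManagedObject *anObject = [[NSManagedObject alloc]
                                          initWithEntity:anEntity
                          insertIntoManagedObjectContext:context];
        [anObject release]; // the context retains the inserted object
        index++;
    }

    // One save: a single SQLite transaction for all 1000 inserts.
    NSError *error = nil;
    if (![context save:&error])
        NSLog(@"bulk save failed: %@", error);
}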
Why do you have to commit after every change? Why can't you just build
up a set of dirty objects and commit them on a more reasonable
schedule?
How can one feel satisfied with such subpar performance?
What is the point of this statement? Who has said anything about being
satisfied with anything... this type of statement can easily be read
to imply that anyone but you is an idiot.
If you are concerned about the performance, file a defect with a
reasonable real-world implementation and/or open an Apple Developer
Technical Support incident.
-Shawn
=============================================
Automator Ketchup : http://automatorketchup.blogspot.com