Re: [Core Data] Improve save performance ?
- Subject: Re: [Core Data] Improve save performance ?
- From: Eric Morand <email@hidden>
- Date: Tue, 17 Jan 2006 18:04:35 +0100
> Actually I was just about to ask about that. I have never done
> that, but I assume that it shouldn't be that hard.
Pretty easy actually.
I've detached the save call to another thread and, well, it works
perfectly: now when the user clicks the "submit" button, there
is no pause, since the save is done in the background thread.
That seems to solve all my problems.
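[A minimal sketch of the pattern Eric describes. All names here are hypothetical; `expensiveSave` merely stands in for the slow `-[NSManagedObjectContext save:]` call. In 2006 the hand-off would have been done with `NSThread`'s `detachNewThreadSelector:toTarget:withObject:`; the Grand Central Dispatch call shown here is the later equivalent. One important caveat: `NSManagedObjectContext` is not thread-safe, so the background thread must use its own context rather than the one driving the UI.]

```swift
import Foundation
import Dispatch

// Hypothetical stand-in for the expensive Core Data save call.
func expensiveSave() -> Bool {
    Thread.sleep(forTimeInterval: 0.05)  // simulate the disk round trip
    return true
}

// Hand the save off to a background queue so the caller (e.g. the UI
// thread handling the "submit" click) returns immediately.
func submitInBackground(completion: @escaping (Bool) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        completion(expensiveSave())
    }
}
```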
Eric.
On Jan 17, 2006, at 17:47, Kay Roepke wrote:
> On Jan 17, 2006, at 11:20, Eric Morand wrote:
> > Anyway, I'm wondering whether I should move the save command
> > to another thread. Do you think that could work? Do you think
> > it could create other problems?
> Actually I was just about to ask about that. I have never done
> that, but I assume that it shouldn't be that hard.
> Another approach could be to use an intermediate store. I just
> thought of it, so I'm not sure whether it's a good idea at all,
> and all that ;)
> It would work something like this:
> Requirements:
> 1) You want to commit often and don't want the data to make the
> round trip to your SQLite backing store, for performance reasons.
> 2) You have backing stores that are strong at writing but weak at
> retrieving data.
> Thus you commit the data to this permanent store first, rest
> assured that your data is safe, and then push those objects
> into the slower backing store asynchronously at certain intervals
> (be they time-based or count-based, or both).
> Of course, you would not be able to see those objects in the SQLite
> store until they are copied over, but you might be able to
> combine the stores, or do two fetches. After all, the intermediate
> store will only hold a few objects at any given time.
> This way you wouldn't even lose data if the SQL backend is down,
> and could keep working.
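[The write-behind scheme Kay sketches above could look something like this. Everything here is hypothetical: strings stand in for managed objects, an in-memory array stands in for the fast intermediate store, and a simple count threshold stands in for the time-based or count-based flush interval.]

```swift
import Foundation

// Toy write-behind buffer: commits land in a fast "intermediate store"
// at once, and are pushed to the slow backing store in batches once a
// count threshold is reached (a real version could also flush on a timer).
final class WriteBehindBuffer {
    private var pending: [String] = []          // fast intermediate store
    private(set) var backingStore: [String] = [] // slow SQLite-style store
    private let flushThreshold: Int

    init(flushThreshold: Int) {
        self.flushThreshold = flushThreshold
    }

    func commit(_ record: String) {
        pending.append(record)  // cheap commit; data is now "safe"
        if pending.count >= flushThreshold {
            flush()
        }
    }

    func flush() {
        // The expensive push into the backing store, done in batches.
        backingStore.append(contentsOf: pending)
        pending.removeAll()
    }

    // As noted above, a fetch must combine both stores (or do two
    // fetches) until everything has been copied over.
    func fetchAll() -> [String] {
        return backingStore + pending
    }
}
```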
> One possible show-stopper could be object IDs, if there are
> automatic IDs involved. (This is a common problem I run into
> when the insert/update load on master databases in a replication
> setup gets too big...) In that case you would need to
> generate the keys yourself and make sure they're unique across the
> system.
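[One common way to generate keys yourself, as Kay suggests, is to mint a UUID on the client instead of relying on the store's auto-increment IDs; a UUID is unique across machines, so a record keeps its identity when it is copied from the intermediate store into the SQL backend. The function name below is hypothetical.]

```swift
import Foundation

// Client-side key generation: no two calls collide, even across
// machines, so the ID survives the move between stores.
func makeRecordID() -> String {
    return UUID().uuidString
}
```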
> Again, I don't know how applicable this is to Core Data, but it
> might give someone an idea, that's for sure.
> Comments? Critique?
> - k
> =============================================
> Automator Ketchup : http://automatorketchup.blogspot.com
Cocoa-dev mailing list (email@hidden)