On Apr 15, 2009, at 17:32, Lachlan Deck wrote:

On 16/04/2009, at 1:51 AM, Stamenkovic Florijan wrote:
It would also open up a can of worms. With normal connections you can get an optimistic locking failure and present it immediately to the user. In its absence you'd either not be able to use any locking at all (and thus the last update wins, however old it may have been queued) or have the user deal with failed updates at some later date.
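Concretely, the immediate check looks something like this rough sketch (plain JDBC, with a made-up "person" table carrying a "version" column; none of these names come from an actual schema):

import java.sql.*;

public class OptimisticUpdate {
    // Returns false when another client changed the row first, i.e. an
    // optimistic locking failure that can be shown to the user right away.
    static boolean updateName(Connection con, long id, int expectedVersion,
                              String newName) throws SQLException {
        String sql = "UPDATE person SET name = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            return ps.executeUpdate() == 1; // 0 rows -> someone else won
        }
    }
}

With a live connection the false result reaches the user while they still remember the edit; with a queue it surfaces hours or days later, if at all.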
Problems:
- Will all clients connect again at the same time? No.
- How would you determine the correct order in which queued updates get applied? You can't.
- How would you deal with failures? Is the user still there to deal with them?
- Do they have to deal with every off-line update manually?
Even if the app became read-only when the connection dropped, you'd still have to send all changes simultaneously to the client's cached db and to the remote. I'm replicating data between clients and their web site. I can guarantee this won't be fun to implement, nor will it be out of your hair in six months.
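The dual write itself is trivial; what makes it painful is the failure case. A minimal sketch, again with hypothetical names and assuming both stores speak JDBC:

import java.sql.*;

public class DualWrite {
    // Applies one change to the local cached db and to the remote store.
    static void saveName(Connection localDb, Connection remoteDb,
                         long id, String newName) throws SQLException {
        String sql = "UPDATE person SET name = ? WHERE id = ?";
        for (Connection con : new Connection[] { localDb, remoteDb }) {
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, newName);
                ps.setLong(2, id);
                ps.executeUpdate();
            }
        }
        // If the remote write fails after the local one succeeded, the two
        // stores are already out of sync -- which is exactly the part that
        // won't be out of your hair in six months.
    }
}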
Yes, I see what you're saying. Managing update conflicts might be a killer. I originally thought of this in terms of replicating data from one persistence system to another, and didn't even consider using it in a full-blown multi-user, concurrent-update scenario. But when John posted his question, I thought: "hey, that idea I had could handle this!" I guess I have a tendency to run my big mouth before thinking things through... In my defense, I did put a qualification in my original reply; I said:
"The only problem is: while this *might* be possible, I bet that to bring it to a functional, usable implementation, it would take (me, alone) at least half a year of focused work."
;)
F