Re: Optimizing Core Data for large time series
- Subject: Re: Optimizing Core Data for large time series
- From: Peter Passaro <email@hidden>
- Date: Thu, 10 May 2007 14:40:41 +0100
I wanted to report back after a little more experimentation. I have
found the optimum BLOB size for my application to be about 1 MB;
import and data access speed are both reasonable at that size. What
concerns me now is that the storage overhead of placing these BLOBs
in an SQLite persistent store seems to be pretty hefty: for each
1 MB BLOB placed in the store, roughly 20 MB is added to the SQLite
file. Can anybody give me some direction as to where all that
overhead comes from? Is this typical for NSData objects stored as
BLOBs in SQLite stores?
On 8 May 2007, at 18:19, Kaelin Colclasure wrote:
Another option you might consider is an entity which chunks
together several samples in an NSData field, but which is still
stored as part of the Core Data store. Each stream would contain
one or more chunk entities. This should yield better performance
without requiring you to roll your own infrastructure for managing
BLOB data in separate files.
HTH,
-- Kaelin
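For reference, the chunking approach Kaelin describes might be sketched roughly as below. Swift is used purely for illustration (the thread predates it), and the 1 MB chunk size comes from Peter's figures; the idea of one `Chunk` entity per blob, with an ordered relationship back to its stream, is my reading of the suggestion, not something specified in the thread.

```swift
import Foundation

// Rough sketch of the chunking idea: pack samples into ~1 MB Data
// blobs, each destined to become the binary attribute of one "Chunk"
// entity in the Core Data store (entity name is hypothetical).
let chunkSize = 1 << 20  // 1 MiB, the size Peter found workable

/// Split a large sample buffer into fixed-size pieces.
func chunks(of data: Data, size: Int = chunkSize) -> [Data] {
    stride(from: 0, to: data.count, by: size).map { offset in
        data.subdata(in: offset ..< min(offset + size, data.count))
    }
}

// Example: 2.5 MiB of raw samples splits into three chunks
// (1 MiB, 1 MiB, and 0.5 MiB).
let samples = Data(count: 2_621_440)
let pieces = chunks(of: samples)
```

Each element of `pieces` would then be assigned to a separate chunk entity's NSData attribute, so no single managed object carries the whole stream and faulting stays cheap.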
Peter Passaro
_______________________________________________
Cocoa-dev mailing list (email@hidden)