Re: Optimizing Core Data for large time series
- Subject: Re: Optimizing Core Data for large time series
- From: Kaelin Colclasure <email@hidden>
- Date: Tue, 8 May 2007 10:19:24 -0700
On May 8, 2007, at 4:37 AM, Peter Passaro wrote:
- Converting the DataPoints into BLOBs (removing them from the data
model) and keeping them in binary files referenced by a new entity,
DataChunk, with attributes fileURL, timeBegin, timeEnd, and numPoints.
This creates other issues: I might take a performance hit accessing
individual time points for processing, especially for non-sequential
groups of points. But opening a file and moving a file pointer should
be faster than fetching (am I right on this?)
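For reference, random access in that file-based scheme boils down to a
seek plus a fixed-size read. Here is a minimal sketch in Swift, assuming
raw Double samples packed into a flat binary file; the function name and
error handling are illustrative, not from your mail:

import Foundation

// Hypothetical sketch: read the sample at `index` from a flat binary
// file of packed Doubles, as the file-based DataChunk scheme would need.
func readPoint(at index: Int, from fileURL: URL) throws -> Double {
    let stride = MemoryLayout<Double>.stride          // 8 bytes per sample
    let handle = try FileHandle(forReadingFrom: fileURL)
    defer { try? handle.close() }

    // Move the file pointer straight to the requested sample...
    try handle.seek(toOffset: UInt64(index * stride))

    // ...and read exactly one sample's worth of bytes.
    guard let data = try handle.read(upToCount: stride),
          data.count == stride else {
        throw CocoaError(.fileReadCorruptFile)
    }
    return data.withUnsafeBytes { $0.loadUnaligned(as: Double.self) }
}

That is fast per point, but you pay a file open (or must cache handles)
per chunk, and you have to keep the external files and the store in sync
yourself.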
Another option you might consider is an entity that chunks several
samples together into a single NSData attribute but is still stored as
part of the Core Data store. Each stream would then contain one or more
chunk entities. This should yield better performance without requiring
you to roll your own infrastructure for managing BLOB data in separate
files.
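A minimal sketch of that design in Swift, assuming a DataChunk entity
with attributes timeBegin (Double), timeEnd (Double), numPoints
(Integer 64), and samples (Binary Data); the model and function names
are illustrative:

import CoreData
import Foundation

// Hypothetical sketch: pack a run of samples into one DataChunk managed
// object, storing the raw bytes in a Binary Data attribute ("samples")
// that lives inside the normal Core Data store.
func insertChunk(samples: [Double],
                 timeBegin: TimeInterval,
                 timeEnd: TimeInterval,
                 into context: NSManagedObjectContext) -> NSManagedObject {
    let chunk = NSEntityDescription.insertNewObject(forEntityName: "DataChunk",
                                                    into: context)
    chunk.setValue(timeBegin, forKey: "timeBegin")
    chunk.setValue(timeEnd, forKey: "timeEnd")
    chunk.setValue(Int64(samples.count), forKey: "numPoints")
    // One blob per chunk instead of one row per DataPoint.
    chunk.setValue(samples.withUnsafeBufferPointer { Data(buffer: $0) },
                   forKey: "samples")
    return chunk
}

// Unpack a chunk's blob back into an array of samples.
func samples(of chunk: NSManagedObject) -> [Double] {
    guard let data = chunk.value(forKey: "samples") as? Data else { return [] }
    return data.withUnsafeBytes { Array($0.bindMemory(to: Double.self)) }
}

Fetching a time window then becomes one predicate over timeBegin/timeEnd
that returns a handful of chunks, rather than a fetch of thousands of
individual DataPoint rows.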
HTH,
-- Kaelin