In answer to your question, each record is a "GenericRecord" which
contains an array of fields and a numfields count (which varies by
record). The table is a "GenericTable" that has an ivar that holds an
array of GenericRecords and a Description object that describes the
makeup of the fields. There's not much extra.
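The layout described above might look roughly like this; a hypothetical sketch, since the actual ivar names and types aren't shown in the message (the `Description` class is the poster's own):

```objc
// Hypothetical sketch of the structure described above; field types
// and ivar names are assumptions, not the poster's actual code.
@interface GenericRecord : NSObject {
    NSArray    *fields;    // one object per column value
    NSUInteger  numFields; // varies by record
}
@end

@interface GenericTable : NSObject {
    NSMutableArray *records;     // array of GenericRecords
    Description    *description; // describes the makeup of the fields
}
@end
```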
Have you considered using Core Data? For generic table work like
you describe, it will handle this drudgery for you. Our objects work
extremely well with NSSortDescriptor, so the comparison function
shouldn't require any significant memory pressure (i.e., we won't
needlessly generate temporary or autoreleased objects).
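A sorted fetch with NSSortDescriptor might look like the sketch below; the entity name `"Record"` and the `name` attribute are assumptions for illustration (memory management omitted for brevity):

```objc
// Minimal sketch: fetch all Record objects sorted by "name".
// "Record" and "name" are hypothetical entity/attribute names.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"Record"
                               inManagedObjectContext:context]];
NSSortDescriptor *byName =
    [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
[request setSortDescriptors:[NSArray arrayWithObject:byName]];

NSError *error = nil;
NSArray *sorted = [context executeFetchRequest:request error:&error];
```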
We've done a lot of performance work for 10.5, and it should compare
favorably to anything being done with dictionaries or what you
describe. For example, our description objects (NSEntityDescription)
cache a perfect minimal hash algorithm for each different layout, and
we dynamically generate classes and perform some codegen for accessor
methods.
Generally on Leopard, Core Data is faster than fetching dictionaries
(even in a pure CF tool with raw C access to SQLite), in as much or
less memory. Core Data is much faster working with thousands of
objects on a multicore machine (like x2 to x4). If you have a Shark
sample that says otherwise, please file a bug report with Apple.
Or put another way, the overhead in Core Data is less than that of
working with raw dictionaries.
Also, on 10.5, Cocoa Bindings has an option on the NSArrayController,
when in entity mode, for "Use Lazy Fetching", which automatically
does pretty much what Mike describes below. It identifies all the
objects that match the predicate, but only pulls them back in batches
as they're needed.
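The rough programmatic equivalent of that lazy fetching is a batched fetch request; a sketch, again assuming a hypothetical `"Record"` entity:

```objc
// Batched fetching: the full sorted result set is identified up front,
// but row data is faulted in 100 objects at a time as rows are touched.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"Record"
                               inManagedObjectContext:context]];
[request setFetchBatchSize:100]; // pull object data in batches of 100

NSError *error = nil;
NSArray *results = [context executeFetchRequest:request error:&error];
// [results count] is available immediately; the objects themselves
// are materialized batch-by-batch as elements are accessed.
```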
With a table view and a lazily fetching array controller, you should
see something taking less than half your current memory, and running
more than 4x faster than your current approach (based on the
experience of others transitioning, working with about 50,000 objects).
With the array controller's lazy fetching, it's not hard to get
decent scalability with a million rows using Core Data. If you're
willing to put some effort into optimizing, Core Data can scale to
10^7 rows on Leopard.
Core Data doesn't fit well as a solution if you need client-server
access, if your schema is completely free-form (i.e., each 'row' has
an arbitrarily different set of 'columns'), or if you require a
cross-platform solution (Windows, Linux, etc.).
So the issue is how to get the data sorted so that I can archive it
in a useful order, at which point I can use your strategy of loading
bits at a time....
On Mar 13, 2008, at 3:21 PM, Mike Engber wrote:
Lately I've been working with large table views, hundreds of
thousands of records.
My approach has been to take advantage of the fact that the data
source only has to provide rows in sorted order - and not all of
the rows at once, just the ones that are requested - generally the
rows currently visible.
I do not keep all of my items in a sorted NSArray. They're stored
in my own data structure.
So, I guess the question is - what is the data structure you're
using for your records, and do you really need to keep them in a big
NSArray? A big NSArray of NSObjects has a fair amount of overhead -
something you may want to avoid if you can.
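The on-demand approach Mike describes maps directly onto the NSTableView data source protocol; a sketch, where `myStore` and its `recordAtIndex:`/`count` methods are hypothetical stand-ins for a custom data structure that can hand back row N in sorted order:

```objc
// Data source sketch: only the rows the table actually asks for
// (generally the visible ones) are materialized.
- (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView {
    return [myStore count]; // total row count, known cheaply
}

- (id)tableView:(NSTableView *)tableView
    objectValueForTableColumn:(NSTableColumn *)column
                          row:(NSInteger)row {
    // Called per visible row: fetch just this record, in sort order.
    id record = [myStore recordAtIndex:row];
    return [record valueForKey:[column identifier]];
}
```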
Cocoa-dev mailing list (email@hidden)