Re: Splitting big database tables to speed up?
- Subject: Re: Splitting big database tables to speed up?
- From: OC <email@hidden>
- Date: Sat, 09 Jul 2016 12:13:21 +0200
Mark,
On 9. 7. 2016, at 5:23, "Morris, Mark" <email@hidden> wrote:
> Just a thought. What about dynamically creating a new EOEntity for each group of T_RECORD records represented by a T_TABLE record?
I sort of fear this might bring more problems than it would solve. For one, it would mean each DBTable EO would have its "records" relationship pointing to a different target DBRecordXX entity; I am not sure that can be modelled at all.
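If I understand the idea right, it would mean something like the following at runtime (a very rough sketch only; the model name "MyModel", the naming scheme, and especially the property-list round-trip as a way of copying the entity are all my assumptions, not tested code):

import com.webobjects.eoaccess.EOEntity;
import com.webobjects.eoaccess.EOModel;
import com.webobjects.eoaccess.EOModelGroup;
import com.webobjects.foundation.NSMutableDictionary;

public class DynamicRecordEntities {

    // Builds a per-table clone of the DBRecord entity whose externalName
    // points at T_RECORD_<pk>. "MyModel" and "DBRecord" are assumed names.
    public static EOEntity entityForTable(Number tablePk) {
        EOModel model = EOModelGroup.defaultGroup().modelNamed("MyModel");
        EOEntity template = model.entityNamed("DBRecord");

        // EOEntity has no public copy(); round-tripping through its
        // property-list form is one conceivable way to duplicate it.
        NSMutableDictionary plist = new NSMutableDictionary();
        template.encodeIntoPropertyList(plist);

        EOEntity clone = new EOEntity(plist, model);
        clone.awakeWithPropertyList(plist);
        clone.setName("DBRecord_" + tablePk);
        clone.setExternalName("T_RECORD_" + tablePk);
        model.addEntity(clone);
        return clone;
    }
}

Even if that worked, the "records" relationship of DBTable would still be modelled against the one original DBRecord entity, which is exactly the part I do not see how to get around.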
> Then you could add a qualifier to the EOEntity that would restrict it to the correct subset, and it would be included automatically in the SQL.
There are two rather separate things.
First, my “qualifying relationship” task: here it would help somewhat, but only if the different target entities were session-based. In that case, though, I would not need separate entities for different tables; I would need separate entities for different sessions. Inside one session, all tables need to be filtered the same way.
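For illustration, the restricting qualifier itself is easy enough to set up (a sketch; the entity name and the "tableID" attribute are assumptions about the model):

import com.webobjects.eoaccess.EOEntity;
import com.webobjects.eoaccess.EOModelGroup;
import com.webobjects.eocontrol.EOQualifier;
import com.webobjects.foundation.NSArray;

public class RestrictingQualifierExample {

    // Narrows every fetch against the given entity to one table's records;
    // EOF folds the restricting qualifier into the generated WHERE clause.
    public static void restrictToTable(String entityName, Number tableId) {
        EOEntity entity = EOModelGroup.defaultGroup().entityNamed(entityName);
        EOQualifier q = EOQualifier.qualifierWithQualifierFormat(
                "tableID = %@", new NSArray(tableId));
        entity.setRestrictingQualifier(q);
    }
}

The catch is precisely the one above: the entity -- and thus its restricting qualifier -- lives in the globally shared model group, not in a session, so every session would see the same filter.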
Second, the issue of this thread, “splitting big database tables to speed up”: here it would not help at all. If I access my records through the "records" relationship, the filtering _already is_ done at the SQL level, the generated SQL looking like "SELECT ... FROM T_RECORD WHERE table_id = XXX". The point of this thread was whether it is somehow possible to dodge even this filtering (which is very fast, but on the other hand, the T_RECORD table is huge) by tricking EOF into replacing it with "SELECT ... FROM T_RECORD_XXX".
Everything else would stay unchanged -- in particular, there would still be one DBRecord entity, modelled the same way -- just with much faster fetches (and faster joins in other scenarios, and so forth).
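The closest thing I can imagine is hijacking the SQL generation itself, e.g., a custom expression class which rewrites the FROM clause (again a sketch only; how the current table id reaches the expression -- the thread-local below -- is an assumption of mine):

import com.webobjects.eoaccess.EOEntity;
import com.webobjects.jdbcadaptor.JDBCExpression;

// Sketch: rewrite the FROM clause so that fetches against DBRecord hit a
// per-table T_RECORD_<id> instead of the huge shared T_RECORD.
public class PerTableExpression extends JDBCExpression {

    // How the "current" table id gets here is an open question;
    // a thread-local is just one conceivable channel.
    public static final ThreadLocal<Number> currentTableId = new ThreadLocal<Number>();

    public PerTableExpression(EOEntity entity) {
        super(entity);
    }

    @Override
    public String tableListWithRootEntity(EOEntity entity) {
        String tables = super.tableListWithRootEntity(entity);
        Number id = currentTableId.get();
        if (id != null && "DBRecord".equals(entity.name())) {
            tables = tables.replace("T_RECORD", "T_RECORD_" + id);
        }
        return tables;
    }
}

One would still have to register the class with the adaptor (with the JDBC adaptor presumably by subclassing the plug-in and overriding defaultExpressionClass()), and the "WHERE table_id = XXX" restriction generated from the relationship would remain unless the fetch were rebuilt without it -- so this is a thought experiment rather than a recipe.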
Thanks a lot,
OC
>> On Jul 8, 2016, at 6:10 AM, OC <email@hidden> wrote:
>>
>> Actually, I wonder...
>>
>> On 8. 7. 2016, at 10:11, OC <email@hidden> wrote:
>>> Alas, my DBTables contain _lots_ of DBRecords, and thus the above implementation would get terribly slow
>>
>> ... my setup is pretty standard, i.e.
>>
>> - DBTables are represented by rows in one DB table, say, T_TABLE
>> - all DBRecords of all tables are represented by rows in another DB table, T_RECORD, with a foreign key into T_TABLE.
>>
>> Therefore, whenever a DBTable's records are used in any way, the huge T_RECORD table *always* has to be filtered by the appropriate foreign key. Of course there is an index on that column, but since the table contains zillions of records, it is still somewhat on the slow side.
>>
>> Conceptually, it would make much better sense to have a separate database table for the records of each T_TABLE row: almost all operations need just the records of one table; there is very little or nothing to do over records of several different tables at once. At the database level, I can pretty well imagine something like many tables named T_RECORD_X, where X would be, e.g., the primary key of the corresponding T_TABLE row. Nevertheless, I can see absolutely no reasonable way of modelling such a thing in EOF. (Note that _tables_ -- not just rows -- would have to be added to/removed from the schema on the fly, as users create/delete tables. Table creation/deletion _can_ be slow, though; we need table processing to be fast.)
>>
>> Are there some well-tested tricks for such cases?
>>
>> Thanks again,
>> OC