Re: SQLite performance issues.
- Subject: Re: SQLite performance issues.
- From: Ruslan Zasukhin <email@hidden>
- Date: Tue, 29 Nov 2005 22:17:51 +0200
- Thread-topic: SQLite performance issues.
On 11/29/05 9:38 PM, "Bill Bumgarner" <email@hidden> wrote:
>>>> But, SQLite can also scale; up to 16 terabyte databases with
>>>> billions of tables, billions of columns per table, and billions of
>>>> rows per table.
>>>
>>> Bill, this is only a technical specification.
>>> It means that SQLite uses
>>>
>>> ulong -- as the counter of tables,
>>> ulong -- as the counter of fields,
>>> ulong -- as the counter of records.
>
> That is incorrect. Keep in mind that SQLite is optimized for
> embedded use. Such waste of bits would be catastrophically offensive
> to those folks using SQLite on embedded devices with less than 64MB
> of combined flash and RAM.
:-) Valentina was born in times when computers had 32MB of RAM.
> SQLite adapts the size of said counters and indices based upon the
> size of the dataset. Keys and indices will be 1 byte for very small
> datasets (less than 128 items).
You say this as if it were a miracle.
All normal DBs have many data types.
Note that while SQLite 2 stored numeric types as strings, SQLite 3 has made a
step toward normal DBs, and now stores an integer as an integer.
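That change can be checked directly: SQLite 3 reports the storage class of each stored value through its typeof() function. A minimal sketch using Python's built-in sqlite3 module (table and column names are made up for illustration):

```python
import sqlite3

# SQLite 3 keeps values in typed storage classes; typeof() reports them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER, s TEXT)")
con.execute("INSERT INTO t VALUES (42, '42')")
print(con.execute("SELECT typeof(n), typeof(s) FROM t").fetchone())
# -> ('integer', 'text'): the integer is stored as an integer, not a string
```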
Valentina, for example, has a Boolean field which eats one bit per record, on
disk and in RAM. I hope you know that other DBs such as Oracle, MS SQL, and
MySQL do not use a bitmap to store the bits of Boolean fields. So if you have
one Boolean field in a table, you lose 7 bits per record; if you have two
Boolean fields, you lose 6 bits per record. Valentina always uses one bit per
record.
So believe me, Valentina has much more sophisticated storage schemas for
information.
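The one-bit-per-record idea described above can be sketched as a simple bitmap (an illustration of the technique only, not Valentina's actual code):

```python
# A bitmap packs one Boolean per record into a single bit, so a million
# records cost ~125 KB instead of ~1 MB of one-byte Booleans.
class Bitmap:
    def __init__(self, n_records):
        self.bits = bytearray((n_records + 7) // 8)

    def set(self, rec, value):
        byte, mask = rec // 8, 1 << (rec % 8)
        if value:
            self.bits[byte] |= mask
        else:
            self.bits[byte] &= ~mask

    def get(self, rec):
        return bool(self.bits[rec // 8] & (1 << (rec % 8)))

bm = Bitmap(1_000_000)
bm.set(123_456, True)
print(bm.get(123_456), bm.get(123_457), len(bm.bits))
# -> True False 125000
```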
Or another example: Valentina uses zip compression for TEXT fields. This lets
you save disk space while still having an index on such a field.
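The idea of keeping a compressed value searchable can be sketched like this (a hypothetical layout, not Valentina's actual implementation): store the zlib-compressed TEXT in the data file, and build the index over a separate, uncompressed key.

```python
import zlib

# Sketch: the stored TEXT is compressed; the index maps an uncompressed
# key (here, a text prefix) to the record, so a lookup decompresses only
# the matching row instead of every row.
texts = {1: "some long article text " * 50,
         2: "another long article " * 50}

stored = {rid: zlib.compress(t.encode()) for rid, t in texts.items()}
index = {t[:16]: rid for rid, t in texts.items()}   # index on a 16-char prefix

rid = index["some long articl"]                     # indexed lookup ...
text = zlib.decompress(stored[rid]).decode()        # ... decompress only the hit
print(len(stored[rid]), "<", len(texts[rid]))       # compressed is far smaller
```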
>>> But this technical specification does NOT mean in any way that you really
>>> can make such huge DBs. You would wait hours and days for answers. Who needs
>>> that?
>
> That has not proven to be true in practice. There are many people
> who are using SQLite quite successfully across multi-gigabyte databases.
Can you point to at least one? And the parameters of his DBs and system?
And what kinds of queries does he run? And how much RAM do those systems have?
The point is that if you run a query such as:
select * from T where fld = value
then even FileMaker will be fast, because only one record is found.
But try to make a query which returns a million records out of 10 million.
Or sort that found million by a few fields. Or make joins, and sorts of
joins (GROUP BY), the same way.
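The contrast between the two query shapes can be seen in SQLite itself; a small sketch with Python's sqlite3 module (table and field names are made up, and the row count is scaled down from the 10-million-row example):

```python
import sqlite3, random, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (fld INTEGER, other REAL)")
con.executemany("INSERT INTO T VALUES (?, ?)",
                ((random.randrange(100_000), random.random())
                 for _ in range(200_000)))
con.execute("CREATE INDEX i ON T(fld)")

t0 = time.perf_counter()
con.execute("SELECT * FROM T WHERE fld = 42").fetchall()  # indexed point lookup
t1 = time.perf_counter()
con.execute("SELECT * FROM T ORDER BY other").fetchall()  # sort + return everything
t2 = time.perf_counter()
print("lookup %.4fs  sort+fetch %.4fs" % (t1 - t0, t2 - t1))
```

The point lookup touches a handful of rows via the index; the sort must materialize the whole table, which is where the post says the slowness appears.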
Bill, I wonder why you try to prove that SQLite is terribly fast, when at
least 2 Cocoa developers have seen its slowness even for relatively small
DBs?
------------------
Another fun thing I love about SQLite -- the benches on their site against MySQL.
It seems they benchmark on 1000 records.
People !!!
Are we benchmarking disk-based DBs?!
If yes, then excuse me: to see the real performance of a disk-based DBMS you must
test it on a db which is AT LEAST 2-3 times bigger than RAM. Period.
>>>> As per the performance claims, I would love to see the code used to
>>>> do the test. Please post or send 'em to me in a private email.
>>>
>>> Those were REALbasic projects.
>>> Is that okay?
>
> Sure. That may be the source of the performance issue, too. I have
> no idea how efficient the RealBasic interface might be. In any case,
> that it is RealBasic makes it a bit outside of the realm of cocoa-
> dev. There are also
Bill, this is not an RB issue. We have done similar benches in Director and
Revolution also. The results are comparable.
Try a simple test yourself:
* make a db with one table and, say, 5-10 fields.
* make a loop to insert a million dummy records.
The db will be about 100MB.
Then play with queries such as DISTINCT on all fields, or on a field whose
range is 1..255.
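The test described above can be sketched with Python's sqlite3 module (the schema and field names are invented; the post suggests a million rows for ~100MB on disk, but a smaller N lets you try it quickly, and `:memory:` can be swapped for a file path to measure on-disk behaviour):

```python
import sqlite3, random

N = 100_000  # scale up toward 1_000_000 to reproduce the post's setup

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (f1 INTEGER, f2 INTEGER, f3 REAL, f4 TEXT, f5 INTEGER)")
rows = ((random.randrange(1, 256),        # field with range 1..255, as in the post
         random.randrange(10**6),
         random.random(),
         "dummy%d" % random.randrange(1000),
         random.randrange(2))
        for _ in range(N))
con.executemany("INSERT INTO t VALUES (?,?,?,?,?)", rows)
con.commit()

# DISTINCT on one low-cardinality field is cheap ...
small = con.execute("SELECT COUNT(*) FROM (SELECT DISTINCT f1 FROM t)").fetchone()[0]
# ... while DISTINCT over all fields forces SQLite to sort/deduplicate every row.
big = con.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT f1, f2, f3, f4, f5 FROM t)").fetchone()[0]
print(small, big)
```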
Also, the worst thing I have seen in that bench is that SQLite builds the
result in RAM. So to answer, it had eaten 600MB of RAM on my dual G5 with 1GB.
It is obvious that SQLite is fast only while the db is smaller than RAM and
the results of the query are smaller still.
Make a db of 10 million records to get 1GB, and see whether the query time has
grown at least linearly, by 10 times. I think you will see that it grows by
more than 10 times.
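Whether query time grows linearly with database size can be checked with a small scaling sketch (sizes scaled down from the 10-million-row example; an in-memory db is used here, so this measures query cost growth, not the RAM-exhaustion effect the post describes):

```python
import sqlite3, random, time

def query_time(n):
    """Build an n-row table and time a full sort (a worst-case style query)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (v INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)",
                    ((random.randrange(n),) for _ in range(n)))
    t0 = time.perf_counter()
    con.execute("SELECT v FROM t ORDER BY v").fetchall()
    return time.perf_counter() - t0

small, big = query_time(20_000), query_time(200_000)
print("10x more rows -> %.1fx slower" % (big / small))
```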
--
Best regards,
Ruslan Zasukhin
VP Engineering and New Technology
Paradigma Software, Inc
Valentina - Joining Worlds of Information
http://www.paradigmasoft.com
[I feel the need: the need for speed]
Cocoa-dev mailing list (email@hidden)