Re: Tip: Your next XRaid may be an SSD
- Subject: Re: Tip: Your next XRaid may be an SSD
- From: Guido Neitzer <email@hidden>
- Date: Mon, 15 Dec 2008 23:09:23 -0700
On 15.12.2008, at 22:24, Pierce T. Wetter III wrote:
> First, realize that it's not a perfect apples-to-apples comparison,
> because I didn't run the SQL test on our production XRaid, but
> rather on my local hard disk. (Because, well, the XRaid is
> busy. :-) ) If you look at the XBench stats for Random Read/
> Writes, you can see that the RAID does pretty well: it's 28 times
> faster on writes than a single HDD, so that would mean the SSD is
> only 4x faster instead of 93x for that stat. But still, 4x is
> nothing to sneeze at.
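For reference, the 4x figure follows from the two XBench ratios quoted above. A quick sanity check (the numbers are the approximate scores from the message, nothing more):

```python
# Ratios quoted from the XBench random-write comparison above
# (approximate, relative to a single HDD).
ssd_vs_hdd = 93.0   # SSD random-write score vs. a single HDD
raid_vs_hdd = 28.0  # XRaid random-write score vs. a single HDD

# Implied SSD advantage over the RAID rather than over a single disk:
ssd_vs_raid = ssd_vs_hdd / raid_vs_hdd
print(f"SSD vs. RAID: ~{ssd_vs_raid:.1f}x")  # ~3.3x, i.e. roughly the cited 4x
```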
It's definitely a pretty good result, that's absolutely true. As I
said, it would be interesting to see what the actual limiting factor
for the HD setup is. Whether it's writing to the transaction log or
updating indexes (and therefore reading a lot), or something else - it
might be worth taking a deeper look.
> A lot of the SQL is basically saving a bunch of log file messages.
> So that data can't be cached.
But it's also not something where disks are really slow. The XBench
test in that respect probably doesn't properly reflect what's really
going on. But that's another issue.
> Also it was a freshly started database in all cases, so the row
> and disk caches probably weren't stabilized yet.
I see.
> So yeah, there are a whole bunch of things that databases do to
> take advantage of the 80/20 rule. But remember, this is the
> performance of a single drive, and it's kicking butt.
Yep, in this case. As I pointed out, some things are definitely
impressive, while for others it needs to be determined why the impact
is so high - something seems to be using the drives inefficiently
here, and SSDs by their very nature handle that much better than
spinning disks.

What I mainly wanted to say is that a 40x improvement is very
suspicious. Something is fundamentally wrong, and the SSDs can play to
their strength: random access. But as I said - that shouldn't hit the
drives as badly as it does in your case.
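To make the "random access" point concrete, here is a minimal sketch of the two access patterns being discussed: reading the same 4 KiB blocks of a file sequentially versus in a shuffled order. It is only an illustration of the pattern, not a real benchmark - on a file this small everything lands in the OS page cache after the first pass, so an honest disk benchmark would need uncached I/O and a much larger working set:

```python
import os
import random
import tempfile
import time

BLOCK = 4096    # 4 KiB, a typical database page size
BLOCKS = 2048   # ~8 MiB scratch file, kept small on purpose

# Create a scratch file to read from.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def read_blocks(offsets):
    """Read one BLOCK-sized chunk at each offset; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
scattered = random.sample(sequential, len(sequential))  # same blocks, shuffled

t_seq = read_blocks(sequential)
t_rand = read_blocks(scattered)
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s")

os.remove(path)
```

On a spinning disk with a cold cache the shuffled pass pays a seek per block; on an SSD the two passes cost nearly the same, which is exactly the advantage under discussion.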
I'd be very interested to see whether other databases are as strongly
affected as FrontBase in these tests. When you look at the results
here:

http://www.bigdbahead.com/?p=37

the pure read performance is very impressive. At a 50% read/write mix
it's not so impressive anymore. These results are older and the Intel
SSDs are faster as far as I know, but again: lots of the queries
should be handled by caches. FrontBase can be tweaked in just about
every aspect of caching individual things, but building indexes and
inserting into indexed tables still sucks.
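The cost of maintaining an index during inserts is easy to demonstrate in any database. A small sketch using SQLite as a stand-in (FrontBase isn't assumed to be available; the table and column names are made up for the example, and an in-memory database is used so the test isolates index-maintenance overhead rather than disk behavior):

```python
import random
import sqlite3
import time

def time_inserts(indexed, n=20000):
    """Insert n rows with random keys; return elapsed seconds."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE log (k INTEGER, msg TEXT)")
    if indexed:
        # Every insert now also updates the B-tree index on k.
        con.execute("CREATE INDEX log_k ON log (k)")
    rows = [(random.randrange(10**9), "message") for _ in range(n)]
    start = time.perf_counter()
    con.executemany("INSERT INTO log (k, msg) VALUES (?, ?)", rows)
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

plain = time_inserts(indexed=False)
indexed = time_inserts(indexed=True)
print(f"no index: {plain:.3f}s  with index: {indexed:.3f}s")
```

With random keys, every insert touches an effectively random index page; on disk those are exactly the scattered writes that punish spinning drives and that SSDs absorb - which is why an indexed-insert-heavy workload can show such a dramatic SSD speedup without the database itself being any faster.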
Do you happen to know where the majority of the time is spent in your
scenario? I'm asking because I know a couple of areas where FrontBase
lacks optimizations, and it would be interesting to know whether you're
hitting those and whether SSDs then handle them better (though I still
think that would be the wrong solution for them ...).
cug
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Webobjects-dev mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden