Re: [Fed-Talk] A suggestion for X-SERVEs
- Subject: Re: [Fed-Talk] A suggestion for X-SERVEs
- From: Michael Kluskens <email@hidden>
- Date: Thu, 17 Feb 2005 09:39:24 -0500
On Feb 17, 2005, at 12:32 AM, Michael Pike wrote:
After cutting my hands to hell, a lot (and I mean A LOT) of foul
language, and wondering how the same people who developed the
beautiful PowerBook could come up with the X-SERVE rack mounts...
You haven't tried to mount an Xserve in an SGI Origin rack. We're
limited on space, and adding a rack just for one Xserve and an Xserve
RAID would squeeze out something else, like a workstation work area.
Given that our SGI Origin had an opening with just enough vertical
space to hold the two items, we were highly motivated.
Apple ships the Xserve with a set of rail components that should fit
almost anything, but strangely enough the rails in an SGI Origin rack
are spaced precisely so that making Apple's rails work was nearly
impossible. It can be done, but only if you really have the skills.
The next step will be to move the Xserve & RAID to an SGI Altix rack.
Fortunately that rack is only half loaded, but I haven't measured it
yet to see how hard the move will be. Our SGI Origin is on its way out
the door once all codes are validated on the Altix. Basically it's a
space and heat issue: we don't have the space for both machines, we
can't cool both machines except during the winter, and the performance
difference is just too great (we run machines past the official
lifetimes of computers, i.e., 5 years).
How do G4's & G5's compare with SGI Origins and Altixes?
I've only run one set of benchmarks, using a single Fortran 90 FDTD
(Finite-Difference Time-Domain) code with no AltiVec hand
optimizations. For a 910 MB calculation:
- 5.1 hrs on a single processor of an SGI Origin with 300 MHz R12K
  MIPS (upgraded 1998)
- 4.6 hrs on a single processor of a dual 1 GHz G4 MDD (purchased 2002)
- 2.7 hrs on an SGI Fuel 600 MHz R12K
- 1.6 hrs on a single processor of a dual 2 GHz G5
- 1.3 hrs on a 1.8 GHz P4 Dell/Linux
- 0.44 hrs on a single processor of a dual-processor SGI Altix 1.4 GHz
  Itanium2 (purchased 2004)
I also have a dual-processor 1.4 GHz Opteron cluster, but I made the
mistake of running the tests on the master node rather than a client
node.
Not the performance comparison you expected? That's because FDTD
continually moves massive amounts of data to and from main memory. In
one test comparing identically spec'd Dell PCs there was a factor of 2
or 3 difference; the key item appears to be that I had ordered one
machine with the best memory possible while the other had been
purchased with the standard memory.
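To see why memory bandwidth dominates, here is a minimal 1-D FDTD
update sketch in Fortran 90 (hypothetical names and sizes, not our
production code): each time step streams entire field arrays through
memory while doing only a couple of floating-point operations per
element, so runtime tracks memory speed rather than CPU clock.

    ! Minimal 1-D FDTD sketch (hypothetical, not our code). The update
    ! loops read and write whole arrays every time step with few flops
    ! per element, which is why the method is memory-bandwidth bound.
    program fdtd_sketch
      implicit none
      integer, parameter :: n = 10000000, nsteps = 100
      real, allocatable :: ex(:), hy(:)
      real, parameter :: ce = 0.5, ch = 0.5  ! normalized update constants
      integer :: i, t

      allocate(ex(n), hy(n))
      ex = 0.0
      hy = 0.0
      ex(n/2) = 1.0                          ! initial excitation

      do t = 1, nsteps
         do i = 2, n                         ! E-field update
            ex(i) = ex(i) + ce * (hy(i) - hy(i-1))
         end do
         do i = 1, n - 1                     ! H-field update
            hy(i) = hy(i) + ch * (ex(i+1) - ex(i))
         end do
      end do

      print *, 'ex at midpoint =', ex(n/2)
    end program fdtd_sketch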
So why do we have an Xserve and RAID? Because we needed a backup
system, and many different tape technologies have failed us in the
past, so in the opinion of my boss and me, hard disk backups were the
way to go. I convinced my boss to get the Xserve on the assumption
that the likelihood of hardware failure was much lower than with
equivalent generic PC hardware and their RAIDs. The backup system is
composed of scripts using rsync with the -b option.
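As a sketch of the idea (the host and paths here are made up, and
--backup-dir is just one common companion to -b, not necessarily what
our scripts use):

    #!/bin/sh
    # Hypothetical backup sketch; host and paths are invented.
    # -a copies recursively, preserving times and permissions; -b
    # renames a file that is about to be overwritten instead of
    # destroying it; --backup-dir collects those renamed files under
    # a dated directory on the backup volume.
    DATE=`date +%Y-%m-%d`
    rsync -a -b --backup-dir="/Volumes/BackupRAID/old/$DATE" \
          user@workstation:/Users/ /Volumes/BackupRAID/current/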
I expect the G4's and G5's to do much better when I run benchmarks
using other codes that are not so memory intensive. A dual-processor
SGI Altix cost us 4 times the price of a similarly configured dual G5
Xserve ($20,000 vs. $4,300), but the vast majority of our codes take
advantage of the NUMA memory of the SGIs. In other words, none of our
codes are MPI based yet, only one is OpenMP based, and the others use
automatic parallelization, so only on the SGI NUMA machines can our
codes run on 8 to 32 processors (there apparently are other machines
that could do this, but they have not made it in our door, other than
the DEC machines, which are no longer made). By all reports the
Xserves are a good choice for MPI-based codes, and we're moving in
that direction, but code development costs far more than even high-end
SGI computers (in the 8 to 32 CPU range). On the other hand, I've seen
organizations buy a $200K code that runs on a Windows PC, and that's
per computer. Even for normal desktop usage the computer is only part
of the total cost; software and maintenance charges can wipe out any
hardware savings, real or imaginary (imaginary hardware savings are
those created by buying cheap computers that break more than average,
incurring maintenance charges that are not true maintenance).
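For the curious, the shared-memory style I mean looks roughly like
this minimal OpenMP sketch in Fortran (a hypothetical loop, not one of
our codes); directive-based parallelism like this only scales across
processors that all see one memory space, which is what the NUMA SGIs
provide:

    ! Minimal OpenMP sketch (hypothetical, not one of our codes): the
    ! directive splits the loop across processors that share the same
    ! memory, the model our NUMA machines support for 8-32 CPUs.
    ! Compiled without OpenMP, the directives are plain comments.
    program omp_sketch
      implicit none
      integer, parameter :: n = 1000000
      real, allocatable :: a(:)
      real :: total
      integer :: i

      allocate(a(n))
      do i = 1, n
         a(i) = 1.0 / real(i)
      end do

      total = 0.0
    !$omp parallel do reduction(+:total)
      do i = 1, n
         total = total + a(i)
      end do
    !$omp end parallel do

      print *, 'sum =', total
    end program omp_sketch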
Should Apple create machines with a larger number of CPUs that access
the same memory space, I might eventually be able to show my bosses
benchmarks with our codes where Apple wins on cost and performance.
But I expect to have at least one internal code moved to MPI before
Apple gets into that market, and then I'll see what numbers come up.
Also, personally, I can't see how Apple could make money in that
segment of the market; look at SGI and their stock price.
I was very happy to see the G5's that could handle 8 GB. Very few
people, even long-time Mac users (or writers for Mac magazines),
remember that Apple's first machine that could hold 1.5 GB was
released in 1995 (the PowerMac 9500); every so often I'll see claims
about some later machine being first (maybe officially first, since
the 9500 could only officially handle 768 MB). Apple has a long
history of releasing machines that can hold more RAM than the
currently shipping "low-end" workstations from companies like SGI and
DEC (remember them? workstations limited to 24 MB when the Quadra 700
could hold 68 MB).
Hopefully I didn't go too far afield with this message.
Michael