[Beowulf] Small form computers as cluster nodes - any comments about the Shuttle brand ?
Tony Travis
a.travis at abdn.ac.uk
Sun Aug 9 04:30:11 PDT 2009
Joe Landman wrote:
> David Ramirez wrote:
>> Due to space constraints I am considering implementing an 8-node (+
>> master) HPC cluster project using small form factor computers.
>> Knowing that Shuttle is a reputable brand, with several years in the
>> market, I wonder if any of you out there have already used them in
>> clusters and what your experience has been (performance, reliability
>> etc.)
>
> The down sides (from watching others do this)
>
> 1) no ECC RAM. You will get bit-flips. ECC protects you (to a degree)
> against some bit-flippage. If you can get ECC memory (and turn on ECC
> support in the BIOS), by all means do so.
Hello, Joe and David.
I agree about the ECC RAM, but I used to have six IWill dual Opteron
246 SFF computers in a Beowulf cluster, and they do have ECC memory.
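If you want to check that ECC is actually active on a node (not just
that ECC DIMMs are fitted), the Linux EDAC driver exposes per-memory-
controller error counters under sysfs. A minimal sketch, assuming an
EDAC module (e.g. amd64_edac or k8_edac for Opterons) is loaded; the
paths are the standard EDAC ones:

#!/usr/bin/env python
# Report ECC error counts from the Linux EDAC sysfs interface
# (one mcN directory appears per memory controller).
import glob
import os

EDAC = "/sys/devices/system/edac/mc"

if not os.path.isdir(EDAC):
    print("No EDAC directory: ECC error reporting is not active here")
else:
    for mc in sorted(glob.glob(os.path.join(EDAC, "mc[0-9]*"))):
        with open(os.path.join(mc, "ce_count")) as f:
            ce = f.read().strip()
        with open(os.path.join(mc, "ue_count")) as f:
            ue = f.read().strip()
        print("%s: %s corrected, %s uncorrected errors"
              % (os.path.basename(mc), ce, ue))

A steadily climbing ce_count is your early warning that a DIMM is on
its way out.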
> 2) power. One customer from a while ago did this, and found that the
> power supplies in the units could not sustain a machine running the
> processor, memory, disk and network at nearly full load for many
> hours. You have to make sure your entire computing infrastructure
> (in the box) fits in *under* the power budget from the supply. This
> may be easier these days using "gamer" rigs whose supplies are sized
> to handle GPU cards, but keep this in mind anyway.
Absolutely, and that is why I said "I used to have" above!
The IWills have custom PSUs that are badly overloaded, and when they
die it's very expensive to get them repaired. Eventually, they can't
be repaired at all: I've had several returned now as "beyond economic
repair", and I've decided to retire the IWills.
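Joe's power-budget point is worth working through on paper before you
buy anything. A back-of-envelope sketch; every wattage below is an
illustrative guess, so substitute figures from the datasheets for
your actual parts:

#!/usr/bin/env python
# Back-of-envelope node power budget vs. PSU rating.
# All wattages are illustrative guesses, not measurements.
parts = {
    "CPUs (2 x 55W Opteron HE)": 2 * 55,
    "Memory (4 DIMMs)":          4 * 5,
    "Disk":                      10,
    "Motherboard + NICs":        30,
    "Fans":                      10,
}

load = sum(parts.values())
psu_rating = 250               # typical SFF supply
derated = 0.8 * psu_rating     # don't run a PSU flat out 24x7

print("Estimated sustained load: %d W" % load)
print("PSU rating: %d W (80%% derated: %d W)" % (psu_rating, derated))
if load > derated:
    print("Over budget: expect the kind of PSU deaths described above")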
It's a pity, because the IWills are nice machines. However, even with
55W Opteron 248HEs fitted the IWill Zmaxdp can't keep the CPUs cool
under load unless they have their extremely noisy fans running at
full speed. I've kept one Zmaxd2 for desktop use with dual Opteron
246HEs and it's fine, unless you make it work hard ;-)
> 3) networks. Sadly, the NICs on the hobby machines aren't usually up
> to the quality of those on server systems. You might not get PXE
> capability (though these days, I haven't seen many boards without it).
Well, the IWills are/were server-grade machines with gigabit NICs and
they do PXE boot.
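For what it's worth, PXE-booting compute nodes needs very little
server-side setup. A minimal sketch using dnsmasq as a combined
DHCP/TFTP server on the cluster's private subnet; the interface name,
address range and paths are illustrative, not a recommendation:

# /etc/dnsmasq.conf -- minimal PXE boot service
interface=eth1
dhcp-range=192.168.1.100,192.168.1.108,12h
# hand out the syslinux PXE loader via the built-in TFTP server
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp

Put pxelinux.0 and its pxelinux.cfg/ directory under /srv/tftp and
the nodes can pull their kernels over the network with no local disk
involved.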
> Just evaluate your options carefully with the specs in hand. You will
> have design tradeoffs due to the space constraint; just keep your
> goals in mind as you evaluate them.
I really would avoid SFF systems as compute nodes: I've just used Tyan
ATX FF S3970 motherboards in pedestal cases on industrial shelving.
Bear in mind that a standard Shuttle case is only about 50% the size
of an ATX case, so you could get four ATX cases into the space
occupied by your eight Shuttle SFF computers...
Bye,
Tony.
--
Dr. A.J.Travis, University of Aberdeen, Rowett Institute of Nutrition
and Health, Greenburn Road, Bucksburn, Aberdeen AB21 9SB, Scotland, UK
tel +44(0)1224 712751, fax +44(0)1224 716687, http://www.rowett.ac.uk
mailto:a.travis at abdn.ac.uk, http://bioinformatics.rri.sari.ac.uk/~ajt