[Beowulf] newbie's dilemma
Don R. Baker
donb at eps.mcgill.ca
Thu Mar 2 05:34:18 PST 2006
Hello again,
O.K. So Solution #3 -- 32 desktops from HP or Dell -- is eliminated,
because I cannot afford to upgrade the air conditioning unit in the
available room, and I cannot afford an onsite service contract to cover
repair costs.
In response to RGB's request for more information:
"It might be more helpful if you gave us your
budget and your software constraints (e.g. how much memory per CPU or
core do you need). I'm assuming embarrassingly parallel MC (which is
what I do) so the network is basically irrelevant."
Here are my budgetary constraints and my needs:
My budget is ~US$25,000, with the possibility of "liberating" another
$5,000 from another grant or from my university. My Monte Carlo
simulations deal with percolation problems, Potts models, and fiber
bundle models, some of which require in excess of 512 MB of memory; I am
trying to buy machines with at least that much memory per core, and
ideally twice that amount. The network is irrelevant for these
simulations, but based on my reading I think I should go for gigabit
Ethernet.
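
For the record, here is the per-node budget arithmetic I am working
from, as a small Python sketch. The node counts are taken from the
three solutions quoted below; nothing here is real vendor pricing, just
the budget divided up:

    # Back-of-the-envelope: what the budget buys per node for each option.
    BUDGET = 25000        # US$ available now
    STRETCH = 5000        # US$ that might be "liberated" from another grant

    solutions = [
        ("#1 personal cluster",     8),   # node counts from the options below
        ("#2 Athlon workstations", 16),
        ("#3 HP/Dell desktops",    32),
    ]

    for name, nodes in solutions:
        print("%-25s $%5.0f/node ($%5.0f with the extra $5,000)"
              % (name, BUDGET / nodes, (BUDGET + STRETCH) / nodes))
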
Thank you all for your thoughtful responses. I am finding them very
helpful.
Wishing you the best,
Don
On Wed, 2006-03-01 at 18:09, Robert G. Brown wrote:
> On Tue, 28 Feb 2006, Don R. Baker wrote:
>
> > for 8 years, but consider myself to still be a beginner. I have a room
> > with four 15-amp circuits and a 20,000 BTU air conditioning unit installed
> > that I can use for the next 2 years, but after that I may need to find
> > another home for the system.
>
> Let's see. 20 kBTU is a bit more than 1.5 tons of AC; call it the
> ability to remove 5800 Watts total. 4 x 15 x 120 is 7200 Watts peak,
> or about 5000 Watts RMS. In my opinion this is going to leave you a bit
> light on AC if you run the circuits fully loaded, and don't forget warm
> bodies (60 W) and built-in light bulbs etc. on other circuits (maybe
> several hundred W more). You have to not only remove the heat as fast
> as it comes in but get ahead some, correct for heat that infiltrates
> through the walls, and get the room temperature down below 20C (68F) if
> at all possible. 15-16C is more like it -- cold enough to be just
> uncomfortable.
>
> If you limit what you run per circuit to roughly 1000 Watts, that is
> 4000 watts and gives you a bit of margin. Or get a bigger AC -- a 2 ton
> AC is still pretty cheap and would probably manage fully loaded
> circuits. Just a thought.
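
(Interjecting: writing that arithmetic out as a quick Python sketch
made it concrete for me. The 80% continuous-load derating and the
360 W allowance for a warm body plus lighting are rough assumptions;
the other figures are RGB's, above.)

    # Sanity check of the cooling-vs-electrical-load arithmetic above.
    BTU_PER_HR_TO_W = 0.29307            # standard conversion factor

    ac_w      = 20000 * BTU_PER_HR_TO_W  # ~5861 W of heat-removal capacity
    peak_w    = 4 * 15 * 120             # 4 circuits x 15 A x 120 V = 7200 W
    derated_w = 0.8 * peak_w             # 80% continuous-load rule of thumb
                                         # (RGB rounds further, to ~5000 W)
    other_w   = 60 + 300                 # warm body + lights, rough guess

    print("AC capacity:        %6.0f W" % ac_w)
    print("Derated full load:  %6.0f W" % derated_w)
    print("Headroom, full:     %6.0f W" % (ac_w - derated_w - other_w))  # negative!
    print("Headroom at 4000 W: %6.0f W" % (ac_w - 4000 - other_w))       # ~1500 W
    # The negative headroom at full load is why fully loaded circuits leave
    # you "light on AC"; capping each circuit near 1000 W restores a margin.
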
>
> > My dilemma is that for my budget I can buy one of the following
> > solutions:
> >
> > Solution #1
> > A custom-built "personal cluster" with 8 dual-core processors, either
> > Xeons or Opterons (16 cores and 16 GB of memory), with all the software
> > installed, ready to go.
> >
> > Solution #2
> > I can buy 16 workstations, each with a dual-core Athlon 64 X2 4400+
> > processor (32 cores and 32 GB of memory), upon which I will probably
> > install either Warewulf or Oscar.
> >
> >
> > Solution #3
> > I can buy 32 HP or Dell "mass market" desktops running dual-core chips
> > (64 cores and 64 GB memory) upon which I will probably install either
> > Warewulf or Oscar. (Note that I read the discussion this past November
> > on "cheap PCs this christmas")
> >
> >
> > Obviously, I get more computing power in the last two solutions, but at
> > what cost in terms of time and upkeep? Once the system is up and
> > running, I can dedicate about 5 hours per week (probably no more) and
> > ~CAD$500 per year to maintenance.
>
> I personally would reject #3 out of hand, unless you buy three-year
> onsite service contracts on the Dells (spending nodes as required).
> I don't think Dell does Opterons, either; ditto HP.
>
> Solutions #1 or #2 are both reasonable, although I'm not sure where your
> numbers are coming from. It might be more helpful if you gave us your
> budget and your software constraints (e.g. how much memory per CPU or
> core do you need). I'm assuming embarrassingly parallel MC (which is
> what I do) so the network is basically irrelevant.
>
> > Do any of you have some sage advice? Have any of you used a "personal
> > cluster"? Any thoughts you may have will be very much appreciated.
> > Thank you all for your time.
>
> Sure, a bunch of us (myself included) have personal clusters, although
> yours is going to be bigger than mine -- I never have more than about
> 10 nodes, because at that point my house starts to melt in the
> summertime (and the nodes start to cost roughly $1000/year just to
> run). Remember, power costs a ballpark of $1/watt/year to generate the
> heat AND remove it (within a factor of two), so if you DO fill your
> room to capacity with 4000 watts running 24x7, plan to spend around
> $4000/year just to run it and keep it cool.
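
(That rule of thumb as one small Python function; the $1/watt/year rate
is RGB's ballpark, stated good to within a factor of two:)

    def annual_running_cost(load_watts, dollars_per_watt_year=1.0):
        """Ballpark yearly cost to power a 24x7 load AND remove its heat."""
        return load_watts * dollars_per_watt_year

    print("$%.0f/year" % annual_running_cost(4000))   # the fully loaded room
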
>
> > Wishing you the best from a cool Montreal,
>
> Although there is that -- I suppose in the wintertime you could just
> open a window and snow-cool it... but that at most knocks it down to
> $3000, because most of the money is for the power, not the cooling:-).
>
> From this point of view, getting fewer, faster nodes (e.g. 8 dual-CPU,
> dual-core nodes from e.g. Penguin or ASL, for 32 processor cores) is
> likely to be a net savings in power, in money paying for power, in
> breakage (high-quality nodes are less likely to break), and in your
> time doing both soft and hard maintenance. I'd really try to keep the
> system count down, as home clusters can eat enough time and money to
> destroy personal relationships with loved ones...
>
> They don't have to come preinstalled with linux, though. Oh, they may
> BE preinstalled (often with SuSE), but I'd advise reinstalling CentOS
> or FC (see the archives for the pros and cons of each). That way you
> get an indefinite free update stream and full yum-ability. SuSE does
> yum (thanks to Joe Landman of this list, who might ALSO sell you
> prebuilt nodes), but it ain't necessarily pretty...
>
> rgb
>
> >
> > Don
> >
--
"Melting rocks today for a better tomorrow . . . "
Don R. Baker, Professor of Geochemistry, Earth and Planetary Sciences,
McGill University, 3450 rue University, Montreal, QC Canada H3A 2A7
514-398-7485