consumption of each node. We have an Athlon cluster (1.33 GHz), and those
things are very power-hungry. We underestimated how much they draw and had
to install an extra circuit in addition to the ones we had planned for...
Perhaps there is yet another efficiency measure - GFlops/kW or something
like that.
So for the I-cluster that would have been:
75 GFlops / (210 * 50 W / 1000) = 7.14 GFlops/kW
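In case anyone wants to run the same figure of merit for their own machines,
here is a minimal Python sketch; the 210-node and 50 W-per-node figures are
just the I-cluster numbers assumed above, so substitute your own:

    def gflops_per_kw(linpack_gflops, num_nodes, watts_per_node):
        # Sustained Linpack GFlops divided by total power draw in kW.
        return linpack_gflops / (num_nodes * watts_per_node / 1000.0)

    # I-cluster numbers quoted above: 75 GFlops over 210 nodes at ~50 W each
    print(gflops_per_kw(75.0, 210, 50.0))  # prints roughly 7.14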
Roger, out of curiosity, could you do a similar calculation?
Thanks,
Scott Shealy
----- Original Message -----
From: "Roger L. Smith" <roger at ERC.MsState.Edu>
To: "RICHARD,BRUNO (HP-France,ex1)" <bruno_richard at hp.com>
Cc: "'Scott Shealy'" <sshealy at asgnet.psc.sc.edu>; <beowulf at beowulf.org>
Sent: Thursday, September 27, 2001 2:38 PM
Subject: RE: Paper showing Linpack scalability of mainstream clusters
>
> Bruno,
>
> Hmmm, I thought that some of the others were purely Ethernet-based, but
> after doing some quick research, I guess I'll stand corrected.
>
> If you intend "mainstream" to mean only single-processor desktop-type
> machines, then I'll completely concede your point.
>
> As of the last Top 500 list, our cluster was listed as 158th, and it is
> entirely Ethernet-based, using a single 100 Mb/s interconnect to each node
> and GigE interconnects between switches. However, our nodes are 1U with
> dual processors, so I guess maybe it doesn't fit the definition of
> mainstream that you stated. It doesn't predate the I-cluster, but it does
> at least tie with it. :-)
>
> -Roger
>
> On Thu, 27 Sep 2001, RICHARD,BRUNO (HP-France,ex1) wrote:
>
> > Hi Roger,
> >
> > What we mean by "mainstream" is that these could be your grandma's
> > machine, unmodified except for the software.
> > Actually, I-Cluster *is* the first cluster of this type to enter the
> > TOP500. Some PC-based clusters are already registered there of course,
> > but they cannot be called "mainstream": most require specific
> > (non-mainstream at all!) connectivity such as Myrinet, SCI, Quadrics...
> > Some are based on PCs equipped with several LAN boards (not mainstream
> > either).
> > If you restrict to off-the-shelf monoprocessor machines (excluding
> > non-mainstream Alpha, MIPS and such) interconnected through standard
> > Ether100, no such cluster had ever entered the TOP500 list, and
> > I-Cluster is still the only one there. Let me know if you think
> > otherwise.
> >
> > Regards, -bruno
> > _____________________________________________
> > Bruno RICHARD - Research Program Manager
> > HP Laboratories
> > 38053 Grenoble Cedex 9 - FRANCE
> > Phone: +33 (4) 76 14 15 38
> > bruno_richard at hp.com
> >
> >
> > -----Original Message-----
> > From: Roger L. Smith [mailto:roger at ERC.MsState.Edu]
> > Sent: Thursday, September 27, 2001 15:33
> > To: RICHARD,BRUNO (HP-France,ex1)
> > Cc: 'Scott Shealy'; beowulf at beowulf.org
> > Subject: RE: Paper showing Linpack scalability of mainstream clusters
> >
> >
> > On Tue, 18 Sep 2001, RICHARD,BRUNO (HP-France,ex1) wrote:
> >
> > > Sorry Scott, I sent you a wrong reference. The actual link is
> > > http://www.hpl.hp.com/techreports/2001/HPL-2001-206.html. Enjoy,
> > > -bruno
> >
> >
> > I'd be REALLY interested in hearing how you justify the following
> > statement in the paper:
> >
> > "Being the first ones to enter the TOP500 using only mainstream hardware
> > (standard PCs, standard Ethernet connectivity)...".
> >
> >
> > _\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_
> > | Roger L. Smith                         Phone: 662-325-3625            |
> > | Systems Administrator                  FAX:   662-325-7692            |
> > | roger at ERC.MsState.Edu          http://WWW.ERC.MsState.Edu/~roger    |
> > | Mississippi State University                                          |
> > |_______________________Engineering Research Center_____________________|
> >
>
>
> _\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_\|/_
> | Roger L. Smith                         Phone: 662-325-3625            |
> | Systems Administrator                  FAX:   662-325-7692            |
> | roger at ERC.MsState.Edu          http://WWW.ERC.MsState.Edu/~roger    |
> | Mississippi State University                                          |
> |_______________________Engineering Research Center_____________________|