[Beowulf] CCL:Question regarding Mac G5 performance (fwd from mmccallum at pacific.edu)
Michael Huntingdon
hunting at ix.netcom.com
Wed May 19 23:05:12 PDT 2004
I've spent some time sifting through the attached numbers. Though not every
benchmark maps directly to the hp Itanium 2, the trend looks consistently
balanced. For clusters, the hp rx2600 scales down to the dual-CPU rx1600:
fewer PCI-X slots but comparable performance, a choice of HP-UX, linux, or
OpenVMS, in-box upgrades, excellent services, and an education price under
$3,000.
I'm at a loss as to why there is not a great deal more conversation around
these systems. hptc clusters should lend themselves to easy additions and
upgrades. An investment in Pentium, SPARC, Athlon, or Opteron yesterday
should extend easily to G5, Itanium 2, or future processors as the cluster
grows.
From the attachments, it looks to me as though the Itanium 2 numbers are
(at least) compelling, and they come with a long-term commitment to the
hardware technology as well as the operating systems.
Cheers
Michael
At 03:13 PM 5/19/2004, Bill Broadley wrote:
> > I had done some price/performance comparisons, and I found dual
> > G5s to be at or near the best in price/performance, especially if
> > things are recompiled with the IBM compilers (an 8-10% speed increase
> > over the pre-compiled, apple-gcc versions of NAMD2, using standard gcc
> > for charmm). I would expect that things like gaussian03 run very well
> > (I believe gaussian uses the IBM compilers for macOS). For MD, the
> > speedup seems to be due to the on-chip square root evaluation.
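> >
> > (Roughly the kind of loop I mean -- a minimal sketch, not NAMD's
> > actual code; the function name and build flags below are illustrative
> > assumptions:)
> >
> >   /* Nonbonded distance kernel: one sqrt per atom pair, so hardware
> >    * sqrt (as on the G5/PPC970) pays off.  Built with something like
> >    *   xlc -O3 -qarch=ppc970 distances.c -lm
> >    * versus plain gcc -O2. */
> >   #include <math.h>
> >
> >   void pair_distances(int n, const double *x, const double *y,
> >                       const double *z, double *r)  /* r holds n*n */
> >   {
> >       int i, j;
> >       for (i = 0; i < n; i++)
> >           for (j = 0; j < n; j++) {
> >               double dx = x[i] - x[j];
> >               double dy = y[i] - y[j];
> >               double dz = z[i] - z[j];
> >               r[i*n + j] = sqrt(dx*dx + dy*dy + dz*dz);
> >           }
> >   }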
>
>Any actual numbers would be very useful, ideally with the compilers,
>compiler options, motherboard, and similar. Did you compare 1 job per
>node against 1 job per node, or 2 jobs per node against 2 jobs per node?
>4 or 8 DIMMs? Which kind of memory, PC2700 or PC3200?
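>
>For example, a report along these lines (every value below is a
>placeholder, not a measurement):
>
>  code:      NAMD2, standard benchmark input
>  compiler:  IBM xlc/xlf version and flags
>  machine:   dual G5, clock speed, motherboard rev
>  memory:    4 DIMMs of PC3200
>  run:       1 job per node, wall-clock seconds
>
>would make results comparable across sites.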
>
> > The built-in Gigabit enet is attractive, also, as charmm and NAMD scale
> > very well with gigabit, and it makes myrinet less price-effective (when
> > used on any platform, wintel included, see
> > http://biobos.nih.gov/apps/charmm/charmmdoc/Bench/c30b1.html for
> > example). I decided that dual G5 xserve cluster nodes with gigabit
>
>I come to a different conclusion based on those graphs. In the first
>graph myrinet improves by a factor of 2.6 (250 -> 95 seconds) from 2
>processors to 8, whereas gigE improves by only 20% (255 -> 210). In the
>second graph gigE gets SLOWER from 2 to 8 processors. Do you think in
>either case the 8-node (let alone 16) gigE cluster would have better
>price/performance than a myrinet cluster?
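>
>(Spelling out the arithmetic from the first graph: myrinet goes from
>250 s on 2 processors to 95 s on 8, i.e. 250/95 = 2.6x the speed on 4x
>the CPUs, roughly 66% parallel efficiency; gigE goes from 255 s to
>210 s, i.e. 255/210 = 1.2x on 4x the CPUs, roughly 30% efficiency.)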
>
>Seems like for applications like those shown on that page you
>shouldn't really bother with a gigE cluster over 2 nodes; not many
>people would be willing to pay a factor of 4 more for 20% or even
>negative scaling.
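>
>(Back-of-the-envelope, assuming equal per-node cost and ignoring the
>switch: 8 gigE nodes cost 4x what 2 nodes do but deliver only 1.2x the
>throughput, so price/performance gets worse by a factor of 4/1.2, or
>about 3.3.)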
>
> > switches were much more cost-effective for me than any other processor,
>
>Cost effective = price/performance? Can you make any numbers available?
>
> > especially any high-bandwidth specialty comm method (apple's gigabit
> > has a pretty low latency also).
>
>Oh? Can you share the apple gigabit numbers? What is "pretty low"?
>
> > Additional considerations for us were the BSD environment, which is
> > more secure than windows, and the OS is arguably more stable and supported
>
>I'd agree with more stable and supported for use as a desktop; I'd
>disagree with stable and supported as a computational node. OSX is the
>new player on the block in this space. Do you really think you would
>get a good response from calling apple's tech support line when the
>scheduler or network stack isn't performing to your expectations?
>
>Certainly a very reasonable thing.
>
> > It is my impression that opterons, PIVs, G5s all have their advantages,
>
>Agreed, thus the value in sharing actual performance results for
>specific codes in specific environments.
>
>--
>Bill Broadley
>Computational Science and Engineering
>UC Davis
>_______________________________________________
>Beowulf mailing list, Beowulf at beowulf.org
>To change your subscription (digest mode or unsubscribe) visit
>http://www.beowulf.org/mailman/listinfo/beowulf
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Benchmark.daresbury.pdf
Type: application/pdf
Size: 337664 bytes
Desc: not available
URL: <http://www.beowulf.org/pipermail/beowulf/attachments/20040519/5461552c/attachment.pdf>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: compchem.pdf
Type: application/pdf
Size: 1454990 bytes
Desc: not available
URL: <http://www.beowulf.org/pipermail/beowulf/attachments/20040519/5461552c/attachment-0001.pdf>
-------------- next part --------------
*********************************************************************
Systems Performance Consultants
Michael Huntingdon
Higher Education Technology Office (408) 294-6811
131-A Stony Circle, Suite 500 Cell (707) 478-0226
Santa Rosa, CA 95401 fax (707) 577-7419
Web: http://www.spcnet.com
hunting at ix.netcom.com
*********************************************************************