[Beowulf] Three notes from ISC 2006
Patrick Geoffray
patrick at myri.com
Wed Jun 28 11:23:32 PDT 2006
Hi Kevin,
Kevin Ball wrote:
> Patrick,
>
>>
>> From your flawed white papers, you compared your own results against
>> numbers picked from the web, using older interconnects with unknown
>> software versions.
>
> I have spent many hours searching to try to find application results
> with newer Myrinet and Mellanox interconnects. I would be thrilled (and
> I suspect others might as well, but I'm only speaking for myself) if you
> would take these white papers as a challenge and publish application
> results with the latest and greatest hardware and software.
Believe it or not, I really want to do that. I don't think it's
appropriate to compare results from other vendors, though: in Europe,
comparative advertising is forbidden (i.e., "soap X washes whiter than
brand Y"), and I completely agree with the rationale. However, there is
nothing wrong with publishing application numbers versus plain Ethernet,
for example, and letting people put curves side by side if they want. Or
submitting to application-specific websites like Fluent's. I will do
that as soon as I get a decent-sized cluster (I have a lot of small ones
with various CPUs/chipsets, but my 64-node cluster is getting old). Time
is also a big problem right now, but I will have more manpower in a
couple of months. Time is really the expensive resource.
Most integrators have their own testbeds and they do comparisons, but
you will never get those results, and even if you could, you could not
publish them.
Recently, I have been thinking about something that you may like. With
motherboards with 4 good PCIe slots coming on the market (driven by SLI
and such), it could be doable to have a reasonably sized machine, let's
say 64 nodes, with 4 different interconnects in it. If Intel or AMD (or
anyone of good will) would donate the nodes, the interconnect vendors
would donate NICs + switches + cables, and an academic or governmental
entity would volunteer to host it, you could have a testbed accessible
to people for benchmarking. The deal would be: you can use the testbed,
but you have to make your benchmark code available to everyone, the code
will be run on all interconnects, and the results will be public.
What do you think of that?
Patrick
--
Patrick Geoffray
Myricom, Inc.
http://www.myri.com