[Beowulf] torus versus (fat) tree topologies
Chris Sideroff
cnsidero at syr.edu
Mon Nov 15 19:03:00 PST 2004
Mark Hahn wrote:
> incidentally, in the SCI testing you've done,
> were you using realistic problem sizes? I'm just
> pondering the volume/surface-area argument again,
> and it really seems like reasonably high-performance,
> high-memory nodes would *not* ever turn out to be
> latency-dependent for CFD. just because they have
> lots of inside-node work to do. if your testing was
> on toy-sized problems, the nodes would be constantly
> starving for work, and thus primarily latency-sensitive...
>
> regards, mark hahn.
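On the volume/surface-area point, here is a quick illustrative sketch (Python; it assumes an idealized cubic partition with a one-cell-deep halo, which none of my grids actually are) of how the share of boundary cells - i.e. the part that has to be communicated - shrinks as the per-node partition grows:

def halo_fraction(n):
    """Fraction of an n x n x n partition's cells lying on its surface,
    assuming a one-cell-deep halo (idealized cubic partition only)."""
    interior = max(n - 2, 0) ** 3
    return 1.0 - interior / n ** 3

for n in (10, 25, 50, 100):   # roughly 1K to 1M cells per node
    print(f"{n**3:>9,} cells/node -> {halo_fraction(n):.1%} on the surface")

So the bigger the chunk of the problem each node owns, the less the interconnect latency ought to matter, which is exactly the argument.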
I tested three different sizes of my own problems:
Small:  ~250K quad cells, 3D coupled explicit solver, 6 equations (1 cont., 3 mom., 1 energy, 2 turb)
Medium: ~1.5M tet cells, 3D segregated solver, 8 equations (1 cont., 3 mom., 1 energy, 4 turb)
Large:  ~8.0M mixed (quad, tet, prism) cells, 3D coupled implicit solver, 6 equations (1 cont., 3 mom., 1 energy, 2 turb)
This covered a fairly wide variety of grid sizes, grid topologies,
solver types and equations. The latter two I would not call 'toy' size,
and they are probably considerably larger than typical 'engineering' size
problems - by 'engineering' size I mean simulations you can run in a
couple of days on a single PC. The last case especially: I would _not_
consider 8 million cells small, and on top of that the coupled implicit
solver (matrix inversion) is a memory hog. That case would not even
load on fewer than 16 processors because it had about a 20GB memory
requirement.
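For a rough sense of scale from those numbers (assuming the memory divides more or less evenly across processes, which it only approximately does):

# Back-of-the-envelope figures for the large case, taken from the
# numbers above; the per-cell value is a crude average, not a measurement.
cells     = 8.0e6    # mixed quad/tet/prism cells
total_mem = 20e9     # ~20 GB total with the coupled implicit solver
min_procs = 16       # smallest partitioning that would load

print(f"~{total_mem / cells:.0f} bytes per cell")            # ~2500 B/cell
print(f"~{total_mem / min_procs / 1e9:.2f} GB per process")  # ~1.25 GB/process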
Chris Sideroff