Version of gmake for clusters...
Bogdan Costescu
bogdan.costescu at iwr.uni-heidelberg.de
Thu Apr 12 05:24:29 PDT 2001
On Thu, 12 Apr 2001, Andreas Boklund wrote:
> The setup was 1-10 Machines running a mosix patched Redhat 6.2
> installation. The sourcetree was placed on a NFS volume that the
> head node shared with the rest of the nodes. All computers had
> Pentium III 500 processors, 128MB RAM and they were interconnected by
> Fast Ethernet, through a Cisco switch.
As somebody on this very list repeats quite often: "It all depends on your
application!" 8-)
Having fought parallel compiles recently (of the MD program CHARMM), I
can share some thoughts:
- as Josip pointed out, if the compilation is CPU bound, parallel compiles
are a big win. A recent experience on a supercomputer revealed that the
F90 compiler might take up to 45 seconds for a 30-50K source file (this is
with lots of optimizations enabled). Of course, in this case even an RS232
serial link is fast enough for transferring the file 8-) (a quick
back-of-the-envelope check follows below).
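To put numbers on that claim, here is a quick check using only the figures
from the anecdote above (45 seconds of compile time, the 50K upper end of
the file size, and 115200 bps as an assumed RS232 line speed); treat it as
an illustration, not a measurement:

    # ~10 bits per byte on a serial line (start/stop bits included)
    $ echo "scale=1; 50 * 1024 * 10 / 115200" | bc
    4.4

So shipping the file costs roughly 4-5 seconds against ~45 seconds of
compilation, i.e. the link is nowhere near the bottleneck.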
- if your application consists of several directories with little or
no dependency between them, you can often get much better results by
sending each directory (with its own Makefile) to a separate node (this is
a kind of "parallelize the outer loop" style). This also solves the problem
of serial operations: for a Makefile consisting of entries like:
x.o: x.c x.h
        cc -c x.c
        ar -rucv y.a x.o
(IOW, all objects from one directory are put together in a library)
it's obvious that several concurrent builds from the same Makefile might
step on each other's toes in the "ar" phase. Any auto-parallelizing
compiler would refuse to parallelize this for "data dependency"-like
reasons; it seems that (at least some) make variants are not so smart...
The drawback is that if the directories have very different sizes, the
load balancing will be poor. A rough sketch of this scheme follows below.
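As a minimal sketch of the "one directory per node" idea (host names,
paths and the choice of rsh are just assumptions for illustration; the
source tree is still read over NFS here):

    #!/bin/sh
    # Build each subdirectory on its own node, in parallel.
    NODES="node1 node2 node3"
    DIRS="dynamics energy io"
    set -- $NODES
    for d in $DIRS; do
        n=$1; shift                         # pair directory with next node
        rsh "$n" "cd /nfs/src/$d && make" &
    done
    wait                                    # wait for all remote builds
    # the per-directory libraries (y.a etc.) are then linked on the master

This assumes as many nodes as directories; with more directories than
nodes you would need a simple queue instead of the one-to-one pairing.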
- NFS is generally a poor solution for parallel builds. If the nodes have
local disks or very large memory, one might get a good speed-up by doing
something like:
1. take a set of files, together with their dependencies and the appropriate
Makefile entries, and copy them to a node's local disk (or ramdisk) through
a TCP pipe (not NFS).
2. on each node, compile _all_ of its files locally
3. send the results back to the master node, again through a TCP pipe.
(as a TCP pipe, 'tar | ttcp' works fine).
You get network congestion in the initial phase and, if the files are well
distributed for load balancing, also in the final phase; that's why I
suggest the TCP pipe - NFS copes quite poorly with servicing lots of
clients simultaneously. You might also think of a tree-based file
distribution... A rough sketch of the scatter/compile/gather cycle follows
below.
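Here is what the three steps might look like with 'tar | ttcp' (everything
below is only a sketch with made-up host names and paths; the exact ttcp
options vary between versions, so check the local man page - and remember
that the receiver has to be started before the transmitter):

    # on the compile node: receive its share of the tree into local scratch
    # (without -s, ttcp -r writes the received stream to stdout)
    cd /scratch && ttcp -r | tar xpf -

    # on the master: push that node's files through the TCP pipe
    tar cf - dir1 dir2 | ttcp -t node1

    # on the compile node: build locally
    cd /scratch && make

    # on the master: start a receiver for the objects coming back
    cd /nfs/build && ttcp -r | tar xpf -

    # on the compile node: send the objects back through the pipe
    tar cf - `find . -name '*.o'` | ttcp -t master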
If you have very large memory on the nodes, ramdisks can also be used (a
small sketch follows below); relying on file caching (e.g. from NFS) is not
as good, because the compile process itself uses lots of memory and can
shrink the cache. If/when NBD works properly, you might also do the scatter
& gather phases as a copy between NBD and the node's local hard disk or
ramdisk.
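For the ramdisk case, a possible setup on each node might be (size, mount
point and the use of tmpfs are just an example; tmpfs needs a 2.4 kernel,
older setups would format /dev/ram instead):

    # create a RAM-backed build area on the node
    mkdir -p /ramdisk
    mount -t tmpfs -o size=64m tmpfs /ramdisk
    # ... receive the files, build, send the objects back (as above) ...
    umount /ramdisk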
[ I tried this once with ramdisks (we have mostly diskless nodes),
distributing the files manually through 'tar | ttcp', and it worked quite
well. But as we use our clusters for production runs and not for
development, I didn't pursue it further... ]
Sincerely,
Bogdan Costescu
IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: Bogdan.Costescu at IWR.Uni-Heidelberg.De