[Beowulf] Contents of Compute Nodes Images vs. Login Node Images
John Hanks
griznog at gmail.com
Thu Oct 25 12:32:06 PDT 2018
Hi Ryan,
On the cluster I currently shepherd we run everything plus the kitchen sink
(~3,000 RPMs today) on all nodes, since users can start VNC as a job and use
nodes as remote desktops. Nodes are provisioned with warewulf as if they were
stateless/diskless, with any local SSD becoming swap and any SAS/SATA disks
going into a zpool that gets used for local scratch. Having everything
(including *-devel) in the image greatly reduces the number of dependencies
we have to build in our NFS-mounted, modules-controlled software stack, and
most things users want to do Just Work(tm). The node image is quite large and
requires at least 16 GB of RAM on a node to boot, but delivering it over
10 GbE is a marginal time cost relative to the rest of POST and booting. My
experience has been that a feature-rich image makes all other aspects of
cluster use and maintenance a lot smoother.
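The local disk setup at provision time boils down to something like the
sketch below (device names and the pool layout are illustrative, not our
exact scripts):

    mkswap /dev/nvme0n1 && swapon /dev/nvme0n1   # local SSD -> swap
    zpool create -f scratch /dev/sda /dev/sdb    # SAS/SATA -> zpool
    zfs set mountpoint=/scratch scratch          # serves as local scratch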
Years ago when I started doing this, people told me I was crazy to download
that huge image at each boot. Now they tell me I'm crazy because I don't
want to download a big container image with a full OS so I can use Ubuntu
bash on a CentOS node in my job. I'm not yet convinced I'm the crazy one...
If, however, an org has a bunch of app support staff, extra sysadmins, and
project managers to justify, then a stripped-down node image and the
resulting extra-complex, dependency-filled software stack, the myriad
containers, etc., can be just the ticket for a PowerPoint slide presented
up the ladder to justify that next FTE opening. For me it boils down to:
what do I enjoy doing? X as a module, or compiling multiple versions of Qt
or GTK from source to support something I'm building as a software module,
does not make my list of fun times... '[yum|apt-get] install' FTW.
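For reference, building that kind of everything-image is mostly just
pointing yum at a chroot and then wrapping it up for warewulf; roughly
(paths and group names here are illustrative):

    yum --installroot=/var/chroots/compute -y groupinstall \
        "Development Tools" "X Window System"
    yum --installroot=/var/chroots/compute -y install '*-devel'
    wwvnfs --chroot /var/chroots/compute   # package the chroot as a VNFS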
Best,
griznog
On Tue, Oct 23, 2018 at 10:49 AM Ryan Novosielski <novosirj at rutgers.edu>
wrote:
> We’ve always had separate images for the login nodes and compute nodes
> with Warewulf. We’re getting some complaints that there’s not enough stuff
> in the compute node images, and that we should just boot compute nodes to
> the login node image (this is problematic for other reasons, but that’s
> another story — the general consensus is they want all the same software).
> I know this happens relatively frequently (generally after a new service
> has been provided, so that perhaps some new libraries are needed that
> weren’t previously present) at our site, but nevertheless there’s been
> pressure to throw the whole kitchen sink in there. Was curious what other
> sites were doing.
>
> > On Oct 23, 2018, at 1:43 PM, Prentice Bisbal via Beowulf
> > <beowulf at beowulf.org> wrote:
> >
> > Ryan,
> >
> > When I was at IAS, I pared down what was on the compute nodes
> > tremendously. I went through the comps.xml file practically
> > line-by-line and reduced the number of packages installed on the
> > compute nodes to only about 500 RPMs. I can't remember all the
> > details, but I remember omitting the following groups of packages
> > (a kickstart sketch follows the list):
> >
> > 1. Anything related to desktop environments, graphics, etc.
> > 2. -devel packages
> > 3. Any RPMs for wireless or Bluetooth support.
> > 4. Any kind of service that wasn't strictly needed by the compute nodes.
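> >
> > Concretely, most of that trimming lived in the kickstart %packages
> > section; from memory it looked roughly like this (group and package
> > names are a sketch, not the actual file):
> >
> >     %packages --nobase
> >     @core
> >     -@x11
> >     -@gnome-desktop
> >     -bluez*
> >     -wireless-tools
> >     %end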
> >
> > In this case, the users' desktops mounted the same home and project
> > directories and the shared application directory (/usr/local), so the
> > users had all the GUI, post-processing, and devel packages they needed
> > right on their desktops, and the cluster was used purely for running
> > non-interactive batch jobs. In fact, there was no way for a user to
> > even get an interactive session on the cluster. IAS was a small
> > environment where I had complete control over the desktops and the
> > cluster, so I was able to do this. I would do it all again just like
> > that, given a similar environment.
> >
> > I'm currently managing a cluster with PU, and PU only puts the -devel
> > packages, etc. on the login nodes so users can compile their apps
> > there.
> >
> > So yes, this is still being done.
> >
> > There are definitely benefits to providing specialized package lists
> > like this:
> >
> > 1. On the IAS cluster, a kickstart installation, including
> > configuration with the post-install script, was very quick - I think
> > it was 5 minutes at most.
> > 2. You generally want as few services running on your compute nodes
> > as possible, and the easiest way to keep a service from running is to
> > not install it in the first place.
> > 3. Less software installed = a smaller attack surface for security
> > exploits. (A quick way to audit both is sketched below.)
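> >
> > For instance (assuming a systemd-based distro):
> >
> >     rpm -qa | wc -l                     # count of installed packages
> >     systemctl list-units --type=service --state=running  # live services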
> >
> > Does this mean you are moving away from Warewulf, or are you creating
> > different Warewulf images for login vs. compute nodes?
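> >
> > (If the latter, with Warewulf 3 I'd expect the mechanics to be roughly
> > one chroot per node class, each built into its own VNFS; the paths and
> > node name below are illustrative:)
> >
> >     wwvnfs --chroot /var/chroots/login        # build the login VNFS
> >     wwvnfs --chroot /var/chroots/compute      # build the compute VNFS
> >     wwsh provision set node01 --vnfs=compute  # assign an image per node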
> >
> >
> > Prentice
> >
> > On 10/23/2018 12:15 PM, Ryan Novosielski wrote:
> >> Hi there,
> >>
> >> I realize this may not apply to all cluster setups, but I'm curious
> >> what other sites do with regard to software (specifically
> >> distribution packages, not a shared software tree that might be
> >> remote-mounted) for their login nodes vs. their compute nodes.
> >> Conventional wisdom has it that sites generally place pared-down node
> >> images on compute nodes, containing only the runtime. I'm curious to
> >> see whether that's still true, or if there are people doing something
> >> else entirely.
> >>
> >> Thanks.
> >>
> >> --
> >> ____
> >> || \\UTGERS,      |---------------------------*O*---------------------------
> >> ||_// the State   |     Ryan Novosielski - novosirj at rutgers.edu
> >> || \\ University  | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
> >> ||  \\    of NJ   | Office of Advanced Research Computing - MSB C630, Newark
> >>      `'
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>