[Beowulf] Which distro for the cluster?
Leif Nixon
nixon at nsc.liu.se
Tue Jan 9 08:41:20 PST 2007
Joe Landman <landman at scalableinformatics.com> writes:
> Login nodes are not and should not be administrative nodes. That is, do
> not trust login nodes for non-end-user accounts. This is a nice idea,
> and sadly, not implemented in most practice. Rocks and other cluster
> distros happily enable end user login to the cluster administrative
> node. A login node is like a compute node, though bad-people (tm) can
> get to it. Which means you should trust it less. If you fuse this with
> an admin node, then you increase your risk.
Full agreement here.
>> Trying to estimate the risk of somebody exploiting a particular
>> vulnerability can be very hard.
>
> No. Follow the cert/secunia/... lists. See what is being exploited in
> the wild. Won't be perfect, but if it is not being exploited (not that
> cert et al are perfect or reliable ahead of time), or is very hard if
> not impossible to exploit (e.g. the cache based back channel attack on
> SMP systems), then your risk is low. Risk is inversely proportional to
> the ease of exploit. The easier it is to exploit, the higher the risk.
OK, let's say it's just hard for me, then. 8^) I think there are
obvious high-risk vulnerabilities (remote exploits in sshd) and
obvious low-risk ones (like your example), and then a sea of in-betweens.
>>>> I don't get this. What's the point of having a "secure" frontend if
>>>> the systems behind it are insecure? OK, there's one big point -
>>>> hopefully you can buy some time - but other than that?
>>> It's the model of how you use the machine. If you lock all the doors
>>> tight with impenetrable seals, and the attacker goes through the weaker
>>> windows, those impenetrable seals haven't done much for you.
>>
>> Exactly. But you seem to propose to seal the login node tight, but
>> leave the windows on the compute nodes ajar.
>
> Nope, the analogy is incorrect and inaccurate, as is your
> characterization of what I am writing.
Sorry, not intentional. Reading the rest of your post it seems we are
mostly in agreement.
I might have been reading too much into something you wrote a bit
earlier in the thread:
| You have a perfectly valid reason to upgrade threat facing nodes. Keep
| them as minimal and as up-to-date as possible. The non-threat facing
| nodes, this makes far less sense. If you are doing single factor
| authentication, and have enabled passwordless access within the
| cluster: ssh keys or certificates or ssh-agent based, once a machine
| that holds these has been compromised, the game is over.
I interpreted this as saying "compute node vulnerabilities aren't that
important as long as the login node is secure", and this is what I've
been arguing against. Basically, there *aren't* any non-threat facing
nodes. But I guess I'm misunderstanding you.
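For what it's worth, one way to blunt the stolen-key scenario above is to
give intra-cluster hops a dedicated, restricted key rather than reusing
the user's main key. A rough sketch, assuming OpenSSH's authorized_keys
options (the key name and the 10.1.0.0/16 network are made-up examples):

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Dedicated key for intra-cluster hops only -- not the user's main key.
ssh-keygen -q -t rsa -b 2048 -f ~/.ssh/cluster_key -N '' -C 'intra-cluster only'
# Restrict where the key may be used from and what it may do, so a
# rooted node that steals it gains as little as possible.
printf 'from="10.1.0.0/16",no-port-forwarding,no-agent-forwarding,no-X11-forwarding %s\n' \
    "$(cat ~/.ssh/cluster_key.pub)" >> ~/.ssh/authorized_keys
```

It doesn't save the game once a node holding the key is rooted, but it
does keep the attacker from taking that key off-cluster.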
> You seemed to be suggesting (maybe I misread or misunderstood you) that
> multiple perimeters are the way to go. I disagree with this. I am also
> of the opinion that "force" as it were, is best applied where it makes
> the most sense. Making the end users slog through using a system along
> with the nasties seems not to be a solution that most would like. There
> are other alternatives, some very good, that limit the maximum possible
> damage a user can do.
I'm not sure what you mean by "multiple perimeters", but I suspect I'm
not proposing them. 8^)
There is a mindset (which I'm carefully not accusing you of sharing)
which leads people to say things like "There's no point in fixing
$VULNERABILITY, because to exploit it the attacker must have root on
$MACHINE_X, and then we are already screwed". This irritates me.
Instead, assume $MACHINE_X *will* be rooted and try to limit the
damage the attacker can cause and make him work uphill (thus my
Churchill quote about beaches and so forth). But this is of course
exactly what you are saying.
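To make "work uphill" concrete, one such measure is egress filtering on
the compute nodes, so a rooted node is less useful as a staging point
for attacks on the outside world. A rough iptables sketch (needs root;
the 10.1.0.0/16 cluster network is a made-up example, adjust to taste):

```shell
# Default-deny outbound traffic from a compute node...
iptables -P OUTPUT DROP
# ...then allow loopback and the cluster's internal network only.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -d 10.1.0.0/16 -j ACCEPT
```

An attacker with root can of course flush these rules, but it is one
more noisy step he has to take, and it stops automated malware cold.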
--
Leif Nixon - Systems expert
------------------------------------------------------------
National Supercomputer Centre - Linkoping University
------------------------------------------------------------