[Beowulf] [External] Spark, Julia, OpenMPI etc. - all in one place
Jim Cownie
jcownie at gmail.com
Tue Oct 13 01:49:22 PDT 2020
>> It just seems to me that things have not really changed in the tooling in the HPC space since 20+ years ago.
It's also worth pointing out that the OpenMP of the year 2000 (OpenMP 2.0) is not the OpenMP of 2020 (OpenMP 5.1), just as C++20 is not C++98; similarly, MPI has advanced in the last twenty years (as has Fortran).
Just because the name is the same does not mean that the specification and its capabilities are the same.
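To pick one concrete MPI example: MPI-3 added non-blocking collectives, which simply did not exist twenty years ago. A minimal C sketch (the buffer contents and the ranks used here are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) data = 42;

    /* Start the broadcast, overlap it with local work, then complete it. */
    MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);
    /* ... independent computation could go here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d got %d\n", rank, data);
    MPI_Finalize();
    return 0;
}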
Taking OpenMP, off the top of my head, major changes include all of the support for:
* Offloading computation to target devices (normally GPUs at present)
* Tasking (including task dependencies)
* Vectorisation directives
(there are undoubtedly many other changes; heck, in 2000 the standard was 124pp for Fortran + 85pp for C/C++, whereas the TR for 5.1 is 715pp, so there's a lot more in there!)
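For the curious, here is a minimal C sketch showing those three features together; the directives are standard OpenMP 4.5+, but the function names and array shapes are made up for illustration:

#include <omp.h>

void offloaded_add(float *a, float *b, int n)
{
    /* Offload the loop to a target device (a GPU, if one is available). */
    #pragma omp target teams distribute parallel for \
        map(tofrom: a[0:n]) map(to: b[0:n])
    for (int i = 0; i < n; i++)
        a[i] += b[i];
}

void task_pipeline(float *x)
{
    #pragma omp parallel
    #pragma omp single
    {
        /* Tasks with a dependence: the second task waits for the first. */
        #pragma omp task depend(out: x[0])
        x[0] = 1.0f;
        #pragma omp task depend(in: x[0])
        x[0] += 2.0f;
    }   /* implicit barrier: both tasks complete here */
}

void vectorised_scale(float *a, int n)
{
    /* Ask the compiler to vectorise this loop. */
    #pragma omp simd
    for (int i = 0; i < n; i++)
        a[i] *= 2.0f;
}

None of this was expressible in OpenMP 2.0; you would have hand-rolled the tasking and hoped the compiler auto-vectorised.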
“Rip it up and start again” https://www.youtube.com/watch?v=UzPh89tD5pA is not always the best approach, and those of us who were around in the 80s and 90s did know a few things even back then!
-- Jim
James Cownie <jcownie at gmail.com>
Mob: +44 780 637 7146
> On 13 Oct 2020, at 09:21, Peter Kjellström <cap at nsc.liu.se> wrote:
>
> On Mon, 12 Oct 2020 22:04:30 -0400
> Oddo Da <oddodaoddo at gmail.com> wrote:
>
>> Johann-Tobias,
>>
>> Thank you for the reply.
>>
>> I don't know enough detail about Julia to even be confused (I am
>> learning it now) :-)
>>
>> It just seems to me that things have not really changed in the
>> tooling in the HPC space since 20+ years ago. This is why I thought,
>> well, hoped, that something new and more interesting would have come
>> along, like Julia. Being able to express parallelization or
>> distribution better and at a higher level (higher than MPI, anyway)
>> would be nice. Spark is nice that way in the data science space, but
>> sadly it cannot run on the same hardware as traditional HPC
>> approaches.
>
> Well, quite a few things have "come along", but there's so much
> inertia in C/C++/Fortran with OpenMP and/or MPI that new things are
> pretty much invisible if you look at what's run on an everyday basis...
>
> From this perspective (and my vantage point in national academic HPC) it
> seems the only significant changes over the last 10-20 years are
> Python use (as more or less complete applications, glue, interactive
> work, ...) and the scale of parallelism (it used to be 10-100 on a
> system; now it is 10000-100000).
>
> For things that have "come along" but not quite altered the big picture
> (much?) (yet?) (ever?) here are a few:
>
> * Chapel (https://chapel-lang.org/)
> * OpenMP and OpenACC for GPU use
> * HPX (https://stellar-group.org/)
> * Legion (https://legion.stanford.edu/)
> * Mooaaaar_taaaasks(MPI+OpenMP) -> TAMPI+OmpSs-2 (https://pm.bsc.es/) (see the sketch below)
>
> /Peter
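To make the last item in Peter's list concrete: below is a minimal sketch, in plain MPI + OpenMP tasks, of the hybrid pattern that TAMPI/OmpSs-2 set out to improve. Everything here (block count, the pairwise exchange, the function names) is illustrative and uses only standard MPI and OpenMP; it is not TAMPI's API.

#include <mpi.h>

#define NBLOCKS 8
#define BS 1024

static double data[NBLOCKS][BS];

static void compute_block(double *blk, int n)
{
    for (int i = 0; i < n; i++) blk[i] += 1.0;
}

int main(int argc, char **argv)
{
    int provided, rank;
    /* Tasks will call MPI concurrently, so ask for full thread support. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int partner = rank ^ 1;   /* assumes an even number of ranks */

    #pragma omp parallel
    #pragma omp single
    for (int b = 0; b < NBLOCKS; b++) {
        /* One task per block; blocks are independent of each other. */
        #pragma omp task depend(inout: data[b][0:BS]) firstprivate(b)
        {
            compute_block(data[b], BS);
            /* The tag b keeps concurrent messages from different tasks
               apart. With plain MPI this blocking call ties up the
               thread; letting the runtime suspend the task here instead
               is exactly what TAMPI adds. */
            MPI_Sendrecv_replace(data[b], BS, MPI_DOUBLE,
                                 partner, b, partner, b,
                                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    MPI_Finalize();
    return 0;
}

The pain point the sketch exposes is the blocking call inside the task: a task-aware MPI library can schedule other ready tasks on that thread while the communication completes, which is the "moar tasks" pitch in a nutshell.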