High Performance Computing: time to go faster?

HPC
Written by Salli Moustafa, on 06 February 2018
We have lately been noticing a significant surge of interest in the development of high-performance computing (HPC) applications, as attested by the number of young companies entering this field:

Marie Quantier, Strains, Quantmetry, Nexiogroup, Numtech, Cybeletech, Hydrocean, Datavortex...

Several factors help to explain this situation:

  1. The need for better control of our environment.

    With the precautionary principle and its associated cost, real experiments in physics, biology or medicine are hard to perform. As a result, complex simulations of living systems are carried out in order to improve our control of the environment.

  2. The improvement of numerical simulations.

    Improvements in numerical simulations and progress in computing architectures make it possible to explore new horizons, ones we could never have imagined only a few years ago. L’Oréal’s hair simulation is one example.

  3. The digitalization of our environment.

    This digitalization leads to a significant and constant increase in the volume of data to process. Handling it requires both substantial computing power and the design of efficient data analysis engines.

Although it was originally oriented towards research and development applications, HPC has been democratized in industry in recent years, notably thanks to easy access to inexpensive computing resources. Nevertheless, a closer look at these solutions shows that merely having access to such resources does not guarantee better performance for high-performance computing applications.

The introduction of the multicore processor in the 2000s was a major turning point. Since then, we have mainly been witnessing deep micro-architectural transformations of processors, along with the appearance of new types of computing chips (GPU, MIC, FPGA). But these evolutions also come with new programming paradigms that developers have to take into account. The time when you only had to wait for the next generation of your favorite processor to get performance gains for free is over, as the slogan “The free lunch is over” reminds us. We have to adopt new code optimization methods: methods that address both the software and the hardware sides.
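To make this concrete, here is a minimal sketch (my own illustration, not taken from the work discussed here) of what such a paradigm shift looks like in practice: a simple SAXPY kernel written in C++ with OpenMP directives, so that it can exploit both the SIMD units and the multiple cores of a modern processor instead of relying on a single, faster core.

```cpp
// Minimal sketch: thread-level (multicore) and SIMD parallelism made explicit
// with OpenMP. Compile with e.g.:  g++ -O3 -fopenmp -march=native saxpy.cpp
#include <cstdio>
#include <vector>

// y = a*x + y, spread across cores and vector lanes.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    const std::size_t n = x.size();
    // Neither form of parallelism is exploited automatically just by buying
    // a newer CPU: the code has to expose it.
    #pragma omp parallel for simd
    for (std::size_t i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(3.0f, x, y);
    std::printf("y[0] = %f\n", y[0]);  // expected: 5.000000
    return 0;
}
```

Real HPC codes go much further (data layout, cache blocking, NUMA placement, accelerators), but the principle is the same: performance now has to be designed into the software with the hardware in mind.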

When used properly, these code optimization methods bring significant benefits.

For instance, let’s compare the time needed to simulate a complete nuclear reactor core (you can find the details in one of my articles). On one side, we have PENTRAN, a general-purpose neutron transport solver developed at the Pennsylvania State University: it solves this problem in 4752 minutes, using 3468 cores of a supercomputer comprising 289 nodes. On the other side, we have DOMINO, another neutron transport solver, developed by EDF R&D and specifically tailored to PWR cores: it solves the same problem in only 67 minutes, using 32 cores of a single cluster node with comparable hardware.

These results clearly demonstrate how important it is to involve the specific expertise of each field in the development process: this expertise is what yields high-performing software. In the end, the calculation executed by DOMINO requires roughly 100 times fewer computing resources than the one done with PENTRAN. Note that PENTRAN was not specifically designed to handle the simulation of pressurized water reactor cores. DOMINO, on the other hand, was developed by a “feature team” of HPC experts and mathematicians, who designed a solver specialized in PWR simulations and suited to modern architectures.
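To put this factor in perspective (my own back-of-the-envelope reading of the figures above): the core counts alone differ by 3468 / 32 ≈ 108, i.e. roughly two orders of magnitude; counted in total core-minutes, the gap is even wider, with about 4752 × 3468 ≈ 16.5 million core-minutes for PENTRAN against 67 × 32 ≈ 2144 core-minutes for DOMINO.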