

Brain Research: A Pathfinder for Future HPC

Dirk Pleiter from the Jülich Supercomputing Centre gave this talk at the NVIDIA GPU Technology Conference. “One of the biggest and most exciting scientific challenges requiring HPC is to decode the human brain. Many of the research topics in this field require scalable compute resources or the use of advanced data analytics methods (including deep learning) for processing extreme-scale data volumes. GPUs are a key enabling technology, and we will thus focus on the opportunities for using these for computing, data analytics and visualization. GPU-accelerated servers based on POWER processors are of particular interest here due to the tight integration of CPU and GPU using NVLink and the enhanced data transport capabilities.”

Jülich Simulates World Record 46 Qubit Quantum Computer

Scientists from the Jülich Supercomputing Centre in Germany have set a new world record, simulating a quantum computer with 46 quantum bits – or qubits – for the first time. For their calculations, the scientists used the Jülich supercomputer JUQUEEN as well as the world’s fastest supercomputer, Sunway TaihuLight, at China’s National Supercomputing Center in Wuxi.

Video: Atos and ParTec to deploy 12 Petaflop Supercomputer at Jülich

In this video, Hugo Falter from ParTec describes the new 12 Petaflop supercomputer coming to the Jülich Supercomputing Centre in Germany. “Modular supercomputing, an idea conceived by Dr. Lippert almost 20 years ago, was realised by JSC and ParTec in the EU-funded research projects DEEP and DEEP-ER together with many partners from research and industry. Since 2010, our experts have been developing the software, which will in future unite several modules into a single system.”

Jülich to Build 5 Petaflop Supercomputing Booster with Dell

Today Intel and the Jülich Supercomputing Centre, together with ParTec and Dell, announced plans to develop and deploy a next-generation modular supercomputing system. Leveraging the experience and results gained in the EU-funded DEEP and DEEP-ER projects, in which three of the partners have been strongly engaged, the group will develop the mechanisms required to augment JSC’s JURECA cluster with a highly scalable component named “Booster,” based on Intel’s Scalable Systems Framework (Intel SSF).

Taming Heterogeneity in HPC – The DEEP-ER Take

Norbert Eicker from the Jülich Supercomputing Centre presented this talk at the SAI Computing Conference in London. “The ultimate goal is to reduce the burden on the application developers. To this end DEEP/-ER provides a well-accustomed programming environment that saves application developers from some of the tedious and often costly code modernization work. Confining this work to code-annotation as proposed by DEEP/-ER is a major advancement.”

Video: DEEP-ER Project Moves Europe Closer to Exascale

In this video from ISC 2016, Estela Suarez from the Jülich Supercomputing Centre provides an update on the DEEP-ER project, which is paving the way towards Exascale computing. “In the predecessor DEEP project, an innovative architecture for heterogeneous HPC systems was developed based on the combination of a standard HPC Cluster and a tightly connected HPC Booster built of many-core processors. DEEP-ER now evolves this architecture to address two significant Exascale computing challenges: highly scalable and efficient parallel I/O and system resiliency. Co-design is key to tackling these challenges – through thoroughly integrated development of new hardware and software components, fine-tuned with actual HPC applications in mind.”

EXTOLL Deploys Immersion Cooled Compute Booster at Jülich

Today Extoll, the German HPC innovation company, announced that it has successfully implemented its new GreenICE immersion cooling system at the Jülich Supercomputing Centre. As part of the DEEP (Dynamical Exascale Entry Platform) project, GreenICE was developed to meet the need for increased compute power, density, and energy efficiency.