
ARM goes Big: HPE Builds Petaflop Supercomputer for Sandia

Today HPE announced plans to deliver the world’s largest Arm supercomputer. As part of the Vanguard program, Astra, the new Arm-based system, will be used by the NNSA to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science. “By introducing Arm processors with the HPE Apollo 70, a purpose-built HPC architecture, we are bringing powerful elements, like optimal memory performance and greater density, to supercomputers that existing technologies in the market cannot match,” said Mike Vildibill, vice president, Advanced Technologies Group, HPE.

Advanced Computing: HPC and RDS at University of Bristol

Simon Burbidge from the University of Bristol gave this talk at the HPC User Forum. “Our research focuses on the application of heterogeneous and many-core computing to solve large-scale scientific problems. Related research problems we are addressing include: performance portability across many-core devices; automatic optimization of many-core codes; communication-avoiding algorithms for massive scale systems; and fault tolerance software techniques for resiliency at scale.”

Cavium ThunderX2 Processor goes GA for HPC and Beyond

Today Cavium announced the General Availability of ThunderX2, Cavium’s second generation of Armv8-A SoC processors. “Integrating ThunderX2 into the HPE Apollo 70 Servers is another example of HPE’s leadership in driving innovation and superior technical solutions into the HPC server market. The ThunderX2 processor provides excellent compute and memory performance that is critical for our HPE Apollo 70 customers and the applications they depend on.”

JUWELS Supercomputer in Germany to be based on Modular Supercomputing

“The supercomputer JUQUEEN, the one-time reigning power in Europe’s high-performance computing industry, is ceding its place to its successor, the Jülich Wizard for European Leadership Science. Called JUWELS for short, the supercomputer is the culmination of the joint efforts of more than 16 European partners in the EU-funded DEEP projects since 2011. Once completed, JUWELS will consist of three fully integrated modules able to carry out demanding simulations and scientific tasks.”

HPE Teams with University of Bristol for ARM-based HPC

Today the University of Bristol announced an initiative to accelerate the adoption of ARM-based supercomputers in the UK. “HPE is excited to work with Arm, SUSE, and other key partners to offer the HPC community a fresh alternative for high performance computing which we believe will stimulate the industry to develop increasingly performant and efficient supercomputing solutions. By investing in this deployment through the Catalyst UK programme, HPE and our partners will drive both digital transformation and sustainable economic growth through new innovation and scientific discovery.”

Asperitas and Boston Collaborate on Immersed Cooling for Datacenters

Today Dutch cleantech company Asperitas announced it is partnering with Boston Ltd on Immersed Computing technologies. “Our partnership with Asperitas ushers in an exciting new chapter for our business and for our customers delivering radically improved datacenter cooling solutions coupled with intelligent energy recovery and reduced operational costs.”

Podcast: Power and Performance Optimization at Exascale

In this Let’s Talk Exascale podcast, Tapasya Patki of Lawrence Livermore National Laboratory discusses ECP’s Power Steering Project. “Efficiently utilizing procured power and optimizing the performance of scientific applications at exascale under power and energy constraints are challenging for several reasons. These include the dynamic behavior of applications, processor manufacturing variability, and increasing heterogeneity of node-level components.”

Video: How EuroEXA is Paving the Way to Exascale

In this video, Georgios Goumas of the University of Athens describes how the EuroEXA project is working to develop exascale computers in Europe. “To accomplish this, the project takes a holistic approach innovating both across the technology and the application/system software pillars. EuroEXA proposes a balanced architecture for compute and data-intensive applications, that builds on top of cost-efficient, modular-integration enabled by novel inter-die links, utilises a novel processing unit and embraces FPGA acceleration for computational, networking and storage operations.”

European LEGaTO Project aims to Develop Software for Energy Efficient Computing

A new European project aims to overcome the energy efficiency challenges of heterogeneous computing architectures by developing a new software stack. “Moore’s Law is slowing down, and as a consequence hardware is becoming more heterogeneous. In the LEGaTO project, we will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. Our aim is one order of magnitude energy savings from the edge to the converged cloud/high-performance computing.”

David Bader from Georgia Tech Joins PASC18 Speaker Lineup

Today PASC18 announced that this year’s Public Lecture will be given by David Bader from Georgia Tech. Dr. Bader will speak on Massive-Scale Analytics Applied to Real-World Problems. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams. This talk will discuss the opportunities and challenges in massive data-intensive computing for applications in social sciences, physical sciences, and engineering.”