ExaLearn Project to bring Machine Learning to Exascale

As supercomputers become ever more capable in their march toward exascale levels of performance, scientists can run increasingly detailed and accurate simulations to study problems ranging from cleaner combustion to the nature of the universe. But those simulations, along with large experimental facilities, produce enormous volumes of data that are difficult to analyze. ExaLearn, a new machine learning project supported by DOE’s Exascale Computing Project (ECP), aims to develop new tools to help scientists overcome this challenge by applying machine learning to very large experimental datasets and simulations.

IonQ posts benchmarks for quantum computer

Today quantum computing startup IonQ released the results of two rigorous real-world tests that show that its quantum computer can solve significantly more complex problems with greater accuracy than results published for any other quantum computer. “The real test of any computer is what can it do in a real-world setting. We challenged our machine with tough versions of two well-known algorithms that demonstrate the advantages of quantum computing over conventional devices. The IonQ quantum computer proved it could handle them. Practical benchmarks like these are what we need to see throughout the industry.”

Job of the Week: HPC Specialist in Software Development at DKRZ

The German Climate Computing Centre (DKRZ) is seeking an HPC Specialist in Software Development in our Job of the Week. “DKRZ is involved in numerous national and international projects in the field of high-performance computing for climate and weather research. In addition to the direct user support, depending on your interest and ability, you will also be able to participate in this development work.”

Video: Cray Announces First Exascale System

In this video, Cray CEO Pete Ungaro announces Aurora – Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we’re going to need it. “It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics and modeling and simulation – all at the same time on the same system – at incredible scale.”

NERSC taps NVIDIA compiler team for Perlmutter Supercomputer

NERSC has signed a contract with NVIDIA to enhance GPU compiler capabilities for Berkeley Lab’s next-generation Perlmutter supercomputer. “We are excited to work with NVIDIA to enable OpenMP GPU computing using their PGI compilers,” said Nick Wright, the Perlmutter chief architect. “Many NERSC users are already successfully using the OpenMP API to target the manycore architecture of the NERSC Cori supercomputer. This project provides a continuation of our support of OpenMP and offers an attractive method to use the GPUs in the Perlmutter supercomputer. We are confident that our investment in OpenMP will help NERSC users meet their application performance portability goals.”
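For readers unfamiliar with the programming model being discussed, the following is a generic sketch (not NERSC or NVIDIA/PGI code; the DAXPY kernel, array size, and build flags are illustrative assumptions) of the kind of OpenMP target construct that a GPU-enabled compiler like the one described would offload:

#include <stdio.h>

#define N 1000000

int main(void) {
    static double x[N], y[N];
    double a = 2.0;

    for (int i = 0; i < N; i++) { x[i] = (double)i; y[i] = 1.0; }

    /* Offload the DAXPY loop to an accelerator when one is available;
       the same directives fall back to running the loop on the host. */
    #pragma omp target teams distribute parallel for map(to: x[0:N]) map(tofrom: y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[42] = %f\n", y[42]);
    return 0;
}

The appeal for codes already using OpenMP on Cori’s manycore CPUs is that the same directive-based style can target GPUs; exact compiler flags for enabling offload vary by compiler and release, so consult the relevant documentation.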

PASC19 Preview: Brueckner and Dr. Lois Curfman McInnes to Moderate Exascale Panel Discussion

Today the PASC19 Conference announced that Dr. Lois Curfman McInnes from Argonne National Laboratory and Rich Brueckner from insideHPC will moderate a panel discussion with thought leaders focused on software challenges for Exascale and beyond, mixing “big picture” and technical discussions. “McInnes will bring her unique perspective on emerging Exascale software ecosystems to the table, while Brueckner will illustrate the benefits of Exascale to world-wide audiences.”

How Mellanox SHARP technology speeds AI workloads

Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology improves the performance of MPI operations by offloading collective operations from the CPU to the switch network, and by eliminating the need to send data multiple times between endpoints. This approach decreases the amount of data traversing the network as aggregation nodes are reached, and dramatically reduces the time spent in MPI operations. Implementing collective communication algorithms in the network also has additional benefits, such as freeing up valuable CPU resources for computation rather than using them to process communication.
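For context, the collectives SHARP accelerates are ordinary MPI calls; the minimal allreduce below is a generic sketch (not Mellanox code) of the kind of operation that can be aggregated in the switch fabric rather than on the hosts when SHARP is enabled in the MPI library:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes one value; the sum is reduced across all
       ranks.  With in-network aggregation such as SHARP, this reduction
       can be performed by the switches instead of the endpoint CPUs. */
    double local = (double)rank;
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over all ranks: %f\n", global);

    MPI_Finalize();
    return 0;
}

Note that the application code does not change: in-network aggregation is typically enabled through the MPI library’s collective component and environment settings rather than through modifications to the program itself.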

UberCloud Publishes Compendium Of CFD Case Studies

If you are considering moving some of your HPC workload to the Cloud, nothing leads the way like a good set of case studies in your scientific domain. To this end, our good friends at the UberCloud have published their Compendium Of Case Studies In Computational Fluid Dynamics. The document includes 36 CFD case studies summarizing HPC Cloud projects that the UberCloud has performed together with the engineering community over the last six years.

ISC 2019 Keynote to focus on Algorithms of Life

Today the ISC 2019 conference announced that its keynote will be delivered by Professor Ivo Sbalzarini, who will speak to an audience of 3500 attendees about the pivotal role high performance computing plays in the field of systems biology. Under the title, The Algorithms of Life – Scientific Computing for Systems Biology, Sbalzarini will discuss how HPC is being used as a tool for scientific investigation and for hypothesis testing, as well as a more fundamental way to think about problems in systems biology.

OSS Introduces World’s First PCIe Gen 4 Backplane at GTC

Today One Stop Systems introduced the world’s first PCIe Gen 4 backplane. “Delivering the high performance required by edge applications necessitates PCIe interconnectivity traveling on the fast data highway between high-speed processors, NVMe storage and compute accelerators using GPUs or application specific FPGAs,” said Steve Cooper, CEO of One Stop Systems. “‘AI on the Fly’ applications naturally demand this capability, like the government mobile shelter application we announced earlier this year.”