InfiniBand Powers World’s Fastest Supercomputer

Today the InfiniBand Trade Association (IBTA) announced that the latest TOP500 List results show the world’s new fastest supercomputer, Oak Ridge National Laboratory’s Summit system, is accelerated by InfiniBand EDR. InfiniBand now powers the top three systems and four of the top five. The latest rankings underscore InfiniBand’s continued position as the interconnect of choice for the industry’s most demanding high performance computing (HPC) platforms. “As the makeup of the world’s fastest supercomputers evolves to include more non-HPC systems such as cloud and hyperscale, the IBTA remains confident in the InfiniBand Architecture’s flexibility to support the increasing variety of demanding deployments,” said Bill Lee, IBTA Marketing Working Group Co-Chair. “As evident in the latest TOP500 List, the reinforced position of InfiniBand among the most powerful HPC systems and the growing prominence of RoCE-capable non-HPC platforms demonstrate the technology’s unparalleled performance capabilities across a diverse set of applications.”

OpenACC Helps Scientists Port Their Code at the Center for Accelerated Application Readiness (CAAR)

In this video, Jack Wells from the Oak Ridge Leadership Computing Facility and Duncan Poole from NVIDIA describe how OpenACC enabled them to port their codes to the new Summit supercomputer. “In preparation for next-generation supercomputer Summit, the Oak Ridge Leadership Computing Facility (OLCF) selected 13 partnership projects into its Center for Accelerated Application Readiness (CAAR) program. A collaborative effort of application development teams and staff from the OLCF Scientific Computing group, CAAR is focused on redesigning, porting, and optimizing application codes for Summit’s hybrid CPU–GPU architecture.”

Univa Deploys Million-core Grid Engine Cluster on AWS

To demonstrate its ability to run very large enterprise HPC clusters and workloads, Univa leveraged AWS to deploy 1,015,022 cores in a single Univa Grid Engine cluster, showcasing the advantages of running large-scale electronic design automation (EDA) workloads in the cloud. The cluster was built in approximately 2.5 hours using Navops Launch automation and comprised more than 55,000 AWS instances spread across three availability zones and 16 different instance types. It leveraged AWS Spot Fleet technology to maximize the rate at which Amazon EC2 hosts were launched while enabling capacity and costs to be managed according to policy.
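A quick back-of-envelope sketch puts those figures in perspective. Only the headline numbers (cores, instance count, build time) come from the announcement; the derived averages below are illustrative, since the actual instance mix is not stated:

```python
# Rough sanity check of the reported cluster figures.
# Source numbers: 1,015,022 cores, >55,000 instances, ~2.5-hour build.
# Derived values are illustrative averages, not Univa's actual fleet mix.

total_cores = 1_015_022
instances = 55_000            # "more than 55,000" -- treated as a lower bound
build_hours = 2.5

avg_cores_per_instance = total_cores / instances
launch_rate_per_min = instances / (build_hours * 60)

print(f"~{avg_cores_per_instance:.1f} cores per instance on average")
print(f"~{launch_rate_per_min:.0f} instances launched per minute")
```

Even treating 55,000 as a floor, that works out to roughly 18 cores per instance and a sustained launch rate of several hundred EC2 hosts per minute over the 2.5-hour build.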

Let’s Talk Exascale: Optimizing I/O at the ADIOS Project

In this episode of Let’s Talk Exascale, researchers from the ADIOS project describe how they are optimizing I/O on exascale architectures and making the code easily maintainable, sustainable, and extensible, while ensuring its performance and scalability. “The Adaptable I/O System (ADIOS) project in the ECP supports exascale applications by addressing their data management and in situ analysis needs.”

Video: Researchers Step Up with the New Summit Supercomputer

“The biggest problems in science require supercomputers of unprecedented capability. That’s why ORNL launched Summit, a system eight times more powerful than its previous top-ranked system, Titan. Summit is providing scientists with incredible computing power to solve challenges in energy, artificial intelligence, human health, and other research areas that were simply out of reach until now. These discoveries will help shape our understanding of the universe, bolster US economic competitiveness, and contribute to a better future.”

Video: Announcing Summit – World’s Fastest Supercomputer with 200 Petaflops of Performance

Today Energy Secretary Rick Perry unveiled Summit, the world’s most powerful supercomputer. Powered by IBM POWER9 processors, 27,648 NVIDIA GPUs, and Mellanox InfiniBand, the Summit supercomputer is also the first Exaop AI system on the planet. “This massive machine, powered by 27,648 of our Volta GPUs, can perform more than three exaops, or three billion billion calculations per second,” writes Ian Buck on the NVIDIA blog. “That’s more than 100 times faster than Titan, previously the fastest U.S. supercomputer, completed just five years ago. And 95 percent of that computing power comes from GPUs.”
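Those figures are internally consistent, as a quick derivation shows. The per-GPU number below is my own arithmetic on the quoted totals, not a figure from the announcement:

```python
# Derive the implied per-GPU AI throughput from the quoted figures:
# more than 3 exaops in total, with 95% of it coming from 27,648 Volta GPUs.

total_ops = 3e18          # 3 exaops of mixed-precision AI operations/second
gpu_fraction = 0.95       # "95 percent of that computing power comes from GPUs"
num_gpus = 27_648

per_gpu_teraops = total_ops * gpu_fraction / num_gpus / 1e12
print(f"~{per_gpu_teraops:.0f} teraops per GPU")
```

That comes to roughly 103 teraops per GPU, which lines up with the Tesla V100’s rated peak of about 125 TFLOPS of tensor performance.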

Case Study: Supercomputing Natural Gas Turbine Generators for Huge Boosts in Efficiency

Hyperion Research has published a new case study on how General Electric engineers were able to nearly double the efficiency of gas turbines with the help of supercomputing simulation. “With these advanced modeling and simulation capabilities, GE was able to replicate previously observed combustion instabilities. Following that validation, GE Power engineers then used the tools to design improvements in the latest generation of heavy-duty gas turbine generators to be delivered to utilities in 2017. These turbine generators, when combined with a steam cycle, provided the ability to convert an amazing 64% of the energy value of the fuel into electricity, far superior to the traditional 33% to 44%.”
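The “nearly double” characterization checks out against the quoted efficiency numbers; the comparison below is my arithmetic on those figures:

```python
# Compare the combined-cycle efficiency quoted in the case study (64%)
# against the traditional 33-44% range it cites.

combined_cycle = 0.64
traditional_low, traditional_high = 0.33, 0.44

print(f"vs. low end:  {combined_cycle / traditional_low:.2f}x")
print(f"vs. high end: {combined_cycle / traditional_high:.2f}x")
```

Against the low end of the traditional range the improvement is about 1.94x, and about 1.45x against the high end, so “nearly double” holds for the least efficient legacy plants.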

Radio Free HPC Looks at the New CORAL-2 RFP for Exascale Computers

In this podcast, the Radio Free HPC team looks at the Department of Energy’s new RFP for exascale computers. “As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine memory-centric architecture. They have a wager, so listen in to find out where the smart money is.”

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”