

ORNL Taps D-Wave for Exascale Computing Project

Today Oak Ridge National Laboratory (ORNL) announced that it is bringing on D-Wave to use quantum computing as an accelerator for the Exascale Computing Project. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”

Podcast: A Retrospective on Great Science and the Stampede Supercomputer

TACC will soon deploy Phase 2 of the Stampede II supercomputer. In this podcast, they celebrate by looking back on some of the great science computed on the original Stampede machine. “In 2017, the Stampede supercomputer, funded by the NSF, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off of soon-to-be decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing.”

Supercomputers Turn the Clock Back on Storms with “Hindcasting”

Researchers are using supercomputers at LBNL to determine how global climate change has affected the severity of storms and resultant flooding. “The group used the publicly available model, which can be used to forecast future weather, to “hindcast” the conditions that led to the Sept. 9-16, 2013 flooding around Boulder, Colorado.”

Intel’s Xeon Scalable Processors Provide Cooling Challenges for HPC

Unless you reduce node and rack density, the wattages of today’s high-powered CPUs and GPUs are simply no longer addressable with air cooling alone. Asetek explores how new processors, such as Intel’s Xeon Scalable processors, often call for more than just air cooling. “The largest Xeon Phi direct-to-chip cooled system today is the Oakforest-PACS system in Japan. The system is made up of 8,208 computational nodes using Asetek Direct-to-Chip liquid cooled Intel Xeon Phi high performance processors with Knights Landing architecture. It is the highest performing system in Japan and #7 on the Top500.”

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship

Los Alamos National Laboratory has boosted the computational capacity of their Trinity supercomputer with a merger of two system partitions. “With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Bill Archer, Los Alamos ASC program director. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

Survey: Training and Support #1 Concern for the HPC Community

Initial results of the Scientific Computing World (SCW) HPC readership survey have shown that training and support for HPC resources are the number one concern both for those who operate and manage HPC facilities and for the researchers who use them. “Several themes have emerged as a priority to both HPC managers and users/researchers. Respondents cite that training and support are essential parameters compared to performance, hardware or the availability of HPC resources.”

Red Hat Ceph Storage Powers Research at the University of Alabama at Birmingham

On June 6, Red Hat announced that the University of Alabama at Birmingham (UAB) is using Red Hat Ceph Storage to support the growing needs of its research community. UAB selected Red Hat Ceph Storage because it offers researchers a flexible platform that can accommodate the vast amounts of data necessary to support future innovation and discovery. “UAB is a leader in computational research, with more than $500 million in annual research expenditures in areas including engineering, statistical genetics, genomics and next-generation gene sequencing,” said Curtis A. Carver Jr., VP and CIO at UAB. “Researchers and students aggregate, analyze, and store massive amounts of data, which is used to support groundbreaking medical discoveries from new cancer biomarkers to state-of-the-art magnetic resonance imaging techniques.”

Supercomputing the Signature of Chaos in Ultracold Reactions

Researchers have performed the first ever quantum-mechanical simulation of the benchmark ultracold chemical reaction between potassium-rubidium (KRb) and a potassium atom, opening the door to new controlled chemistry experiments and quantum control of chemical reactions that could spark advances in quantum computing and sensing technologies. The research by a multi-institutional team simulated the ultracold chemical reaction, with results that had not been revealed in experiments. “We found that the overall reactivity is largely insensitive to the underlying chaotic dynamics of the system,” said Brian Kendrick of Los Alamos National Laboratory’s Theoretical Division. “This observation has important implications for the development of controlled chemistry and for the technological applications of ultracold molecules from precision measurement to quantum computing.”

Brazil-Based AMT to Resell Bright Computing Software

Today Bright Computing announced a reseller agreement with AMT. “We are very impressed with Bright’s technology and we believe it will make a huge difference to our customers’ HPC environments,” said Ricardo Lugão, HPC Director at AMT. “With Bright, the management of an HPC cluster becomes very straightforward, empowering end users to administer their workloads, rather than relying on HPC experts.”

ANSYS Scales to 200K Cores on Shaheen II Supercomputer

Today ANSYS, Saudi Aramco, and KAUST announced a new supercomputing milestone by scaling ANSYS Fluent to nearly 200,000 processor cores – enabling organizations to make critical and cost-effective decisions faster and increase the overall efficiency of oil and gas production facilities. This supercomputing record represents a more than 5x increase over the record set just three years ago, when Fluent first reached the 36,000-core scaling milestone. “Today’s regulatory requirements and market expectations mean that manufacturers must develop products that are cleaner, safer, more efficient and more reliable,” said Wim Slagter, director of HPC and cloud alliances at ANSYS. “To reach such targets, designers and engineers must understand product performance with higher accuracy than ever before – especially for separation technologies, where an improved separation performance can immediately increase the efficiency and profitability of an oil field. The supercomputing collaboration between ANSYS, Saudi Aramco and KSL enabled enhanced insight in complex gas, water and crude-oil flows inside a separation vessel, which include liquid free-surface, phase mixing and droplets settling phenomena.”