MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DOE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”
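
As a rough illustration of the kind of measurement such a benchmark involves, the minimal sketch below times a training loop and reports throughput in samples per second. Everything here is an assumption for illustration: train_step() is a hypothetical placeholder for a real model's forward/backward/update step, and actual MLPerf results are typically reported as time-to-train to a target quality rather than raw throughput.

```python
# Minimal sketch of an MLPerf-style throughput measurement.
# train_step() is a hypothetical placeholder, not a real benchmark workload.
import time

def train_step(batch):
    # Placeholder compute; a real benchmark would run a model's forward,
    # backward, and optimizer-update steps here.
    return sum(x * x for x in batch)

def benchmark(num_steps=100, batch_size=1024):
    batch = list(range(batch_size))
    start = time.perf_counter()
    for _ in range(num_steps):
        train_step(batch)
    elapsed = time.perf_counter() - start
    # Samples/second is a simple proxy metric reported here for illustration;
    # MLPerf scores are usually time-to-train to a target accuracy.
    print(f"{num_steps * batch_size / elapsed:.1f} samples/sec")

if __name__ == "__main__":
    benchmark()
```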

GE Research Leverages World’s Top Supercomputer to Boost Jet Engine Efficiency

GE Research has been awarded access to the world’s #1-ranked supercomputer to discover new ways to optimize the efficiency of jet engines and power generation equipment. Michal Osusky, the project’s leader from GE Research’s Thermosciences group, says access to the supercomputer and support team at OLCF will greatly accelerate insights into turbomachinery design improvements that lead to more efficient jet engines and power generation assets: “We’re able to conduct experiments at unprecedented levels of speed, depth and specificity that allow us to perceive previously unobservable phenomena in how complex industrial systems operate. Through these studies, we hope to innovate new designs that enable us to propel the state of the art in turbomachinery efficiency and performance.”

Podcast: Solving Multiphysics Problems at the Exascale Computing Project

In this Let’s Talk Exascale Podcast, Stuart Slattery and Damien Lebrun-Grandie from ORNL describe how they are readying algorithms for next-generation supercomputers at the Department of Energy. “The mathematical library development portfolio of the Software Technology (ST) research focus area of the ECP provides general tools to implement complex algorithms. These algorithms are designed to scale up for supercomputers so that ECP teams can then use them to accelerate the development and improve the performance of science applications on DOE high-performance computing architectures.”

Podcast: Rewriting NWChem for Exascale

In this Let’s Talk Exascale podcast, researchers from the NWChemEx project team describe how they are readying the popular code for Exascale. The NWChemEx team’s most significant success so far has been to scale coupled-cluster calculations to a much larger number of processors. “In NWChem we had the global arrays as a toolkit to be able to build parallel applications.”
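For readers unfamiliar with the Global Arrays model mentioned in the quote, the sketch below imitates its one-sided get/put view of a distributed array using mpi4py RMA windows. This is an assumed stand-in chosen for illustration, not the NWChemEx code or the Global Arrays API itself, and it assumes mpi4py is installed and the script is launched under mpirun.

```python
# Rough sketch of the one-sided get/put programming model popularized by the
# Global Arrays toolkit, written with mpi4py RMA windows as a stand-in.
# Run with, e.g.: mpirun -n 4 python ga_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 4  # each rank owns one block of the "global" array
win = MPI.Win.Allocate(n_local * 8, disp_unit=8, comm=comm)
local = np.frombuffer(win.tomemory(), dtype=np.float64)
local[:] = rank  # fill our block

comm.Barrier()

# One-sided read of a neighbor's block, with no matching send/recv on the
# target -- the shared-memory-style access that Global Arrays exposes.
target = (rank + 1) % size
block = np.empty(n_local, dtype=np.float64)
win.Lock(target)
win.Get(block, target)
win.Unlock(target)

comm.Barrier()
print(f"rank {rank} read block owned by rank {target}: {block}")
win.Free()
```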

Podcast: A Codebase for Deep Learning Supercomputers to Fight Cancer

In this Let’s Talk Exascale podcast, Gina Tourassi from ORNL describes how the CANDLE project is setting the stage to fight cancer with the power of Exascale computing. “Basically, as we are leveraging supercomputing and artificial intelligence to accelerate cancer research, we are also seeing how we can drive the next generation of supercomputing.”

ORNL Tests Arm-based Wombat Platform with NVIDIA GPUs

Researchers at ORNL are trying out their HPC codes on Wombat, a test bed cluster based on production Marvell ThunderX2 CPUs and NVIDIA V100 GPUs. The small cluster provides a platform for testing NVIDIA’s new CUDA software stack purpose-built for Arm CPU systems. “Eight teams successfully ported their codes to the new system in the days leading up to SC19. In less than 2 weeks, eight codes in a variety of scientific domains were running smoothly on Wombat.”
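One reason ports like this can go quickly is that CUDA device code is largely independent of the host CPU's instruction set. The hedged sketch below uses Numba's CUDA support for a simple axpy kernel; assuming a CUDA-capable GPU and the numba package, the same source runs unchanged whether the host is x86 or Arm, which is the kind of portability the Wombat exercise tested.

```python
# Hedged sketch: a simple CUDA kernel via Numba. The device code targets the
# GPU, so the same source compiles and runs on x86 or Arm hosts.
# Assumes a CUDA-capable GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def axpy(a, x, y, out):
    i = cuda.grid(1)  # global thread index
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.ones(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
axpy[blocks, threads_per_block](np.float32(2.0), x, y, out)
print(out[:4])  # expect [3. 3. 3. 3.]
```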

ORNL Researchers Develop Quantum Chemistry Simulation Benchmark

Researchers at the Department of Energy’s Oak Ridge National Laboratory have developed a quantum chemistry simulation benchmark to evaluate the performance of quantum devices and guide the development of applications for future quantum computers. “This work is a critical step toward a universal benchmark to measure the performance of quantum computers, much like the LINPACK metric is used to judge the fastest classical computers in the world.”

Simulating SKA Telescope’s Massive Dataflow using the Summit Supercomputer

Researchers are using the Summit Supercomputer at ORNL to simulate the massive dataflow of the future SKA telescope. “The SKA simulation on Summit marks the first time radio astronomy data have been processed at such a large scale and proves that scientists have the expertise, software tools, and computing resources that will be necessary to process and understand real data from the SKA.”

The Coming Age of Extreme Heterogeneity in HPC

Jeffrey Vetter from ORNL gave this talk at ATPESC 2019. “In this talk, I’m going to cover some of the high-level trends guiding our industry. Moore’s Law as we know it is definitely ending for either economic or technical reasons by 2025. Our community must aggressively explore emerging technologies now!”

Podcast: UnifyFS Software Project steps up to Exascale

In this Let’s Talk Exascale podcast, Kathryn Mohror of LLNL and Sarp Oral of ORNL provide an update on ECP’s ExaIO project and UnifyFS. “UnifyFS can provide ECP applications performance-portable I/O across changing storage system architectures, including the upcoming Aurora, Frontier, and El Capitan exascale machines. It is critically important that we provide this portability so that application developers don’t need to spend their time changing their I/O code for every system.”
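
Part of what makes this portability possible is that applications can keep using ordinary file I/O against a UnifyFS mountpoint rather than a custom API. The sketch below shows a hypothetical per-rank checkpoint write in that style; the mountpoint path and helper function are assumptions for illustration, and real use requires the UnifyFS servers and client library on the system.

```python
# Hypothetical sketch: an application writes checkpoints with plain file I/O
# under a mount prefix that UnifyFS would service (path below is assumed).
# This illustrates why I/O code can stay unchanged across systems; it does
# not call the UnifyFS API itself.
import os

# Assumed mountpoint; on a real system this would be the UnifyFS prefix.
MOUNTPOINT = os.environ.get("UNIFYFS_MOUNTPOINT", "/tmp/unifyfs-demo")

def write_checkpoint(rank: int, step: int, data: bytes) -> str:
    os.makedirs(MOUNTPOINT, exist_ok=True)
    path = os.path.join(MOUNTPOINT, f"ckpt.step{step:06d}.rank{rank:04d}")
    with open(path, "wb") as f:
        f.write(data)
    return path

if __name__ == "__main__":
    print(write_checkpoint(rank=0, step=10, data=b"\x00" * 1024))
```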