Supercomputing Earthquakes in the Age of Exascale

Tomorrow’s exascale supercomputers will enable researchers to simulate the ground motions of regional earthquakes quickly, accurately, and in unprecedented detail. “Simulations of high frequency earthquakes are more computationally demanding and will require exascale computers,” said David McCallen, who leads the ECP-supported effort. “Ultimately, we’d like to get to a much larger domain, higher frequency resolution and speed up our simulation time.”
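
As a rough illustration of why higher frequency resolution pushes these simulations toward exascale (a generic back-of-the-envelope sketch, not taken from the ECP earthquake codes): in an explicit wave-propagation solver, resolving higher frequencies requires a finer grid in all three spatial dimensions and, via the stability condition, a proportionally smaller time step, so the work grows roughly with the fourth power of the maximum resolved frequency. The short Python snippet below, with purely hypothetical reference values, shows how quickly that adds up.

```python
# Back-of-the-envelope scaling for explicit seismic wave-propagation solvers.
# Assumption: grid spacing shrinks inversely with max frequency in x, y, z,
# and the stable time step shrinks proportionally, giving roughly f^4 cost growth.

def relative_cost(f_max_hz, f_ref_hz=2.0):
    """Cost of resolving f_max_hz relative to a reference run (hypothetical values)."""
    return (f_max_hz / f_ref_hz) ** 4

for f in (2.0, 5.0, 10.0):
    print(f"{f:>5.1f} Hz -> roughly {relative_cost(f):,.0f}x the work of a 2 Hz run")
```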

Exascale: The Movie

In this video from HPE, researchers describe how Exascale will advance science and improve the quality of life for all. “Why is the U.S. government throwing down this gauntlet? Many countries are engaged in what has been referred to as a race to exascale. But getting there isn’t just for national bragging rights. Getting to exascale means reaching a new frontier for humanity, and the opportunity to potentially solve humanity’s most pressing problems.”

IDEAS Program Fostering Better Software Development for Exascale

Scalability of scientific applications is a major focus of the Department of Energy’s Exascale Computing Project (ECP). In that vein, a project known as IDEAS-ECP, or Interoperable Design of Extreme-scale Application Software, is also being scaled up to deliver insight on software development to the research community.

LANL Steps Up to HPC for Materials Program

“Understanding and predicting material performance under extreme environments is a foundational capability at Los Alamos,” said David Teter, Materials Science and Technology division leader at Los Alamos. “We are well suited to apply our extensive materials capabilities and our high-performance computing resources to industrial challenges in extreme environment materials, as this program will better help U.S. industry compete in a global market.”

New HPC for Materials Program to Help American Industry

Earlier this week, U.S. Secretary of Energy Rick Perry announced a new high-performance computing initiative that will help U.S. industry accelerate the development of new or improved materials for use in severe environments. “The High Performance Computing for Materials Program will provide opportunities for our industry partners to access the high-performance computing capabilities and expertise of DOE’s national labs as they work to create and improve technologies that combat extreme conditions,” said Secretary Perry.

Exascale Computing Project Names Doug Kothe as Director

The Department of Energy’s Exascale Computing Project (ECP) has named Doug Kothe as its new director, effective October 1. “Doug’s credentials in this area and familiarity with every aspect of the ECP make him the ideal person to build on the project’s strong momentum,” said Bill Goldstein, director of Lawrence Livermore National Laboratory and chairman of the ECP Board of Directors, which hired Kothe.

20 Future HPC Leaders Receive DOE Computational Science Graduate Fellowship

Today the Krell Institute announced the new class of 20 future HPC leaders who enrolled at U.S. universities this fall with support from the Department of Energy Computational Science Graduate Fellowship (DOE CSGF). “Established in 1991, the Department of Energy Computational Science Graduate Fellowship provides outstanding benefits and opportunities to students pursuing doctoral degrees in fields that use high-performance computing to solve complex science and engineering problems.”

Oak Ridge Turns to Deep Learning for Big Data Problems

The Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond (ASCEND) project aims to use deep learning to assist researchers in making sense of massive datasets produced at the world’s most sophisticated scientific facilities. Deep learning is an area of machine learning that uses artificial neural networks to enable self-learning devices and platforms. The team, led by ORNL’s Thomas Potok, includes Robert Patton, Chris Symons, Steven Young and Catherine Schuman.
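
For readers unfamiliar with the term, the sketch below is a minimal, generic illustration of an artificial neural network trained by gradient descent; it is not drawn from the ASCEND project’s code, and every layer size and parameter in it is an arbitrary choice for demonstration.

```python
# Minimal artificial neural network (generic illustration, not ASCEND's models):
# a tiny two-layer network trained with gradient descent to fit the XOR function.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer (sizes are arbitrary)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # forward pass through the hidden layer
    p = sigmoid(h @ W2 + b2)              # predicted probabilities
    grad_p = p - y                        # gradient of cross-entropy loss at the output
    grad_W2 = h.T @ grad_p                # backpropagate to the output weights
    grad_h = grad_p @ W2.T * (1 - h**2)   # backpropagate through tanh
    grad_W1 = X.T @ grad_h
    W2 -= 0.1 * grad_W2; b2 -= 0.1 * grad_p.sum(0)
    W1 -= 0.1 * grad_W1; b1 -= 0.1 * grad_h.sum(0)

print(np.round(p, 2))  # predictions should approach [0, 1, 1, 0]
```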

DOE Awards 1 Billion Hours of Supercomputer Time for Research

The DOE has awarded 1 billion CPU hours of compute time on Oak Ridge supercomputers to a set of important research projects vital to our nation’s future. The ALCC allocations for 2017 continue the tradition of innovation and discovery, with project awards ranging from 2 million to 300 million processor hours.

DOE Labs Adopt Asetek Liquid Cooling

Today Asetek announced two incremental orders from Penguin Computing, an established data center OEM. The orders are for Asetek’s RackCDU D2C™ (Direct-to-Chip) liquid cooling solution and will enable increased computing power for two currently undisclosed HPC sites at U.S. Department of Energy (DOE) National Laboratories.