Video: Ramping up for Exascale at the National Labs

In this video from the Exascale Computing Project, Dave Montoya from LANL describes the continuous software integration effort at DOE facilities where exascale computers will be located sometime in the next 3-4 years. “A key aspect of the Exascale Computing Project’s continuous integration activities is ensuring that the software in development for exascale can efficiently be deployed at the facilities and that it properly blends with the facilities’ many software components. As is commonly understood in the realm of high-performance computing, integration is very challenging: both the hardware and software are complex, with a huge amount of dependencies, and creating the associated essential healthy software ecosystem requires abundant testing.”

Video: AMD Steps up with renewed focus on High Performance Computing

In this video from CES 2019, AMD President and CEO Dr. Lisa Su describes how the new AMD EPYC processors are changing the game for High Performance Computing. “This is an incredible time to be in technology as the industry pushes the envelope on high-performance computing to solve the biggest challenges we face together,” said Su. “At AMD, we made big bets several years ago to accelerate the pace of innovation for high-performance computing, and 2019 will be an inflection point for the industry as we bring these new products to market.”

Video: An Update on the European Processor Initiative

In this talk from the 2018 HiPEAC event, Philippe Notton from Atos describes how the European Processor Initiative will help Europe achieve sovereignty in chips for advanced computing. “We expect to achieve unprecedented levels of performance at very low power, and EPI’s HPC and automotive industrial partners are already considering the EPI platform for their product roadmaps.”

R Systems brings HPC Cloud to SUPERNAP in Las Vegas

Today R Systems announced the expansion of its data center infrastructure and HPC managed services into Switch’s Core Campus in Las Vegas. “Including Switch data centers as part of the offering for R Systems enables us to continue growing with our customers. Since 2005 R Systems has focused exclusively on HPC, which has enabled us to build expertise about the applications, computing, networking and storage technologies that are unique to HPC,” said R Systems Principal Brian Kucic.

Michael Feldman Joins The Next Platform as Senior Editor

Veteran HPC journalist Michael Feldman has departed TOP500.org to join The Next Platform as Senior Editor. Led by co-founders Nicole Hemsoth and Timothy Prickett Morgan, The Next Platform offers in-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds. “My new role also reunites me with some old friends, who just happen to be some of the best writers in the business,” says Feldman. “Not surprisingly, my focus will be on HPC and all things related. These days, that covers a lot of territory since in many ways HPC has provided a useful model for other subject matter that is under the purview of The Next Platform, namely hyperscale, cloud, and high-end enterprise computing.”

Video: Flying through the Universe with Supercomputing Power

In this video from SC18, Mike Bernhardt from the Exascale Computing Project talked with Salman Habib of Argonne National Laboratory about cosmological computer modeling and simulation. Habib explained that the ExaSky project is focused on developing a caliber of simulation that will use the coming exascale systems at maximal power. “Clearly, there will be different types of exascale machines,” he said, “and so they [DOE] want a simulation code that can use not just one type of computer, but multiple types, and with equal efficiency.”

Job of the Week: Performance Engineer at Oak Ridge National Lab

Oak Ridge National Lab in Tennessee is seeking a Performance Engineer in our Job of the Week. “We are seeking to build a team of Performance Engineers who will serve as liaisons between the National Center for Computational Sciences (NCCS) and the users of the NCCS leadership computing resources, particularly Exascale Computing Project (ECP) application development teams, and who will collaborate with the ECP application teams as they ready their software applications for Frontier, the OLCF exascale computer. These positions reside in the Scientific Computing Group in the National Center for Computational Sciences at Oak Ridge National Laboratory.”

Machine Learning Award Powers Engine Design at Argonne

Over at Argonne, Jared Sagoff writes that automotive manufacturers are leveraging the power of DOE supercomputers to simulate the combustion engines of the future. “As part of a partnership between Argonne, Convergent Science, and Parallel Works, engine modelers are beginning to use machine learning algorithms and artificial intelligence to optimize their simulations. This alliance recently received a Technology Commercialization Fund award from the DOE to complete this important project.”

Video: Fusion Research on the Summit Supercomputer

In this video, C.S. Chang from the Princeton Plasma Physics Laboratory describes how his team is using the GPU-powered Summit supercomputer to simulate and predict plasma behavior for the next fusion reactor. “By using Summit, Chang’s team expects that its highly scalable XGC code, a first-principles code that models the reactor and its magnetically confined plasma, will run simulations 10 times faster than current supercomputers allow. Such a speedup would give researchers an opportunity to model more complicated plasma edge phenomena, such as plasma turbulence and particle interactions with the reactor wall, at finer scales, leading to insights that could help ITER plan operations more effectively.”

NERSC: Sierra Snowpack Could Drop Significantly By End of Century

A future warmer world will almost certainly feature a decline in fresh water from the Sierra Nevada mountain snowpack. A new Berkeley Lab study of the headwater regions of California’s 10 major reservoirs, which represent nearly half of the state’s surface storage, found that they could see an average 79 percent drop in peak snowpack water volume by 2100. “What’s more, the study found that peak timing, which has historically been April 1, could move up by as much as four weeks, meaning snow will melt earlier, thus increasing the time lag between when water is available and when it is most in demand.”