ClusterVision White Paper Looks at HPC Performance Impact of Spectre and Meltdown

While various kernel patches are already out for Spectre and Meltdown, the impact of these patches on HPC performance has been a big question. Now ClusterVision has published a timely white paper on this important topic. “These vulnerabilities have only been discovered recently, so information is still developing. Therefore, this document should not be interpreted as a complete overview of the situation but as an informative view of the potential impact on HPC.”
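As a starting point before benchmarking, administrators can check which mitigations a given node's kernel actually reports. A minimal sketch, not taken from the white paper, using the sysfs vulnerabilities interface introduced in Linux 4.15 (the loop simply skips the check on older kernels where the directory is absent):

```shell
# List the kernel's reported status for each known CPU vulnerability
# (e.g. meltdown, spectre_v1, spectre_v2) on Linux 4.15 and later.
for v in /sys/devices/system/cpu/vulnerabilities/*; do
  [ -e "$v" ] || continue   # directory absent on pre-4.15 kernels
  printf '%s: %s\n' "$(basename "$v")" "$(cat "$v")"
done
```

Comparing this output across nodes before and after patching helps attribute any performance difference to a specific mitigation rather than to the kernel upgrade as a whole.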

Podcast: The Exascale Data and Visualization project at LANL

In this episode of Let’s Talk Exascale, Scott Gibson discusses the ECP Data and Visualization project with Jim Ahrens from Los Alamos National Lab. Ahrens is principal investigator for the Data and Visualization project, which is responsible for the storage and visualization aspects of the ECP and for helping its researchers understand, store, and curate scientific data.

Video: Weather and Climate Modeling at Convection-Resolving Resolution

David Leutwyler from ETH Zurich gave this talk at the 2017 Chaos Communication Congress. “The representation of thunderstorms (deep convection) and rain showers in climate models represents a major challenge, as this process is usually approximated with semi-empirical parameterizations due to the lack of appropriate computational resolution. Climate simulations using kilometer-scale horizontal resolution allow explicitly resolving deep convection and thus allow for an improved representation of the water cycle. We present a set of such simulations covering Europe and global computational domains.”

Video: Thomas Zacharia from ORNL Testifies at House Hearing on the Need for Supercomputing

In this video, Thomas Zacharia from ORNL testifies before the House Energy and Commerce hearing on DOE Modernization. “At the OLCF, we are deploying a system that may well be the world’s most powerful supercomputer when it begins operating later this year. Summit will be at least five times as powerful as Titan. It will also be an exceptional resource for deep learning, with the potential to address challenging data analytics problems in a number of scientific domains. Summit is among the products of CORAL, the Collaboration of Oak Ridge, Argonne, and Livermore.”

Job of the Week: System Administrator at D.E. Shaw Research

D.E. Shaw Research in NYC is seeking a System Administrator for Servers, Clusters and Supercomputers for Computational Biochemistry in our Job of the Week. “Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group. Positions are available at our New York City offices, and at our data centers in Durham, NC and Endicott, NY.”
Registration Opens for LUG 2018 in April

Registration is now open for LUG 2018. Sponsored by Globus, the 2018 Lustre User Group (LUG) conference will be held April 23-26, 2018 at Argonne National Laboratory. “As always, LUG provides technical sessions on the latest Lustre developments and best practices, while providing opportunities to share information, network, and collaborate with your peers.”

Fighting Cancer with Deep Learning at Scale with the CANDLE Project

In this episode of Let’s Talk Exascale, Mike Bernhardt discusses the CANDLE project for cancer research with Rick Stevens from Argonne National Lab. The CANcer Distributed Learning Environment (CANDLE) is an ECP application development project targeting new computational methods for cancer treatment with precision medicine.

HPC4Mfg Program Selects New Industry Projects

Today the DOE announced $1.87 million for seven projects to advance innovation in U.S. manufacturing through HPC. “The HPC4Mfg program leverages world-class technical expertise with high performance computing to tackle manufacturing challenges uniquely solved by computer modeling. By applying modeling, simulation, and data analytics to key manufacturing problems, the program can aid in decision-making, optimize processes and design, improve quality, predict performance and failure, reduce or eliminate testing, and shorten the time to market.”

Call for Papers: International Workshop on Accelerators and Hybrid Exascale Systems

The eighth annual International Workshop on Accelerators and Hybrid Exascale Systems (AsHES) has issued its Call for Papers. Held in conjunction with the 32nd IEEE International Parallel and Distributed Processing Symposium, the AsHES Workshop takes place May 23 in Vancouver, Canada. “This workshop focuses on understanding the implications of accelerators and heterogeneous designs on the hardware systems, porting applications, performing compiler optimizations, and developing programming environments for current and emerging systems. It seeks to ground accelerator research through studies of application kernels or whole applications on such systems, as well as tools and libraries that improve the performance and productivity of applications on these systems.”

Using the Titan Supercomputer to Accelerate Deep Learning Networks

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.