Supercomputing Earthquakes in the Age of Exascale

Tomorrow’s exascale supercomputers will enable researchers to accurately simulate the ground motions of regional earthquakes quickly and in unprecedented detail. “Simulations of high frequency earthquakes are more computationally demanding and will require exascale computers,” said David McCallen, who leads the ECP-supported effort. “Ultimately, we’d like to get to a much larger domain, higher frequency resolution and speed up our simulation time.”
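Why higher frequencies are so much more demanding is worth spelling out. For 3D wave-propagation simulations of this kind, doubling the resolved frequency roughly halves the required grid spacing in each dimension and, through the stability limit on the time step, halves the time step as well, so the work grows approximately as the fourth power of frequency. The sketch below is a back-of-the-envelope illustration of that rule of thumb, not a figure from the ECP project itself:

    # Rough cost scaling for 3D seismic wave simulations (rule-of-thumb sketch):
    # grid points grow as f^3 (fixed points per wavelength in x, y, z)
    # and time steps grow as f (stability limit), so total work ~ f^4.
    def relative_cost(f_new_hz, f_old_hz):
        """Approximate factor by which compute cost grows with resolved frequency."""
        return (f_new_hz / f_old_hz) ** 4

    # Example: pushing the resolved frequency from 2 Hz to 10 Hz
    print(relative_cost(10.0, 2.0))  # roughly 625x more work

That steep scaling is why regional simulations at the higher frequencies engineers care about are viewed as an exascale-class workload.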

HPC4Mfg Program Seeks New Projects

The High Performance Computing for Manufacturing (HPC4Mfg) program in the Energy Department’s Advanced Manufacturing Office (AMO) announced today its intent to issue its fifth solicitation in January 2018 to fund projects that allow manufacturers to use high-performance computing resources at the Department of Energy’s national laboratories to tackle major manufacturing challenges.

Video: How MVAPICH & MPI Power Scientific Research

Adam Moody from LLNL presented this talk at the MVAPICH User Group. “High-performance computing is being applied to solve the world’s most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand.”
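For readers less familiar with MPI, the snippet below shows the kind of collective operation (here an allreduce) that MPI libraries such as MVAPICH implement and tune for high-speed interconnects. It is a generic illustration using the mpi4py bindings, not code from the talk; any MPI implementation, MVAPICH included, could run it.

    # Minimal MPI example (generic illustration, not from the talk).
    # mpi4py runs on top of whatever MPI library is loaded, e.g. MVAPICH.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID
    size = comm.Get_size()   # total number of processes

    # Each rank contributes a local value; allreduce sums them on every rank.
    local_value = rank + 1
    total = comm.allreduce(local_value, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks, sum of contributions = {total}")

Launched under an MPI job launcher (for example, mpirun -np 4 python example.py), each process computes its share and the library handles the communication.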

SC17 Exhibition Setting Records

Even though SC17 is still more than two months away, this year’s Exhibition is already setting records.

“We are excited to report that we have already smashed the record for both net square feet of exhibit space sold as well as the total number of exhibitors,” says Bronis R. de Supinski, SC17 Exhibits Chair from Lawrence Livermore National Laboratory. “Whether industry or research – if you have anything to do with high performance computing, you need to have a presence at SC17. The SC exhibition floor is an exciting place to discover the latest innovations and make positive career-impacting connections.”

Job of the Week: Research Scientist at LLNL

Lawrence Livermore National Laboratory is seeking a Research Scientist in our Job of the Week. “For more than 60 years, the Lawrence Livermore National Laboratory (LLNL) has applied science and technology to make the world a safer place. We have an opening for a research scientist with expertise in numerical or computational physics, astrophysics, or astronomy. The research will span topics in astronomical survey analysis, cosmology, optics modeling, and national security applications. This position is in the Physics Division/Applied Physics Section in the Optical Sciences group.”

HPC4Mfg Industry Day Comes to San Diego March 2-3

HPC4Mfg will host their first annual High Performance Computing for Manufacturing Industry Engagement Day on March 2-3 in San Diego. With a theme of “Spurring Innovation in U.S. Manufacturing Through Advanced Computing,” the conference will bring together representatives from U.S. manufacturing, national laboratories, universities, and consortiums to discuss the recent advancements in manufacturing realized through the application of HPC and how leveraging HPC expertise through public-private partnerships has lowered the risk of adoption.

Video: Livermore HPC Takes Aim at Cancer

In this video, Jonathan Allen from LLNL describes how Lawrence Livermore’s supercomputers are playing a crucial role in advancing cancer research and treatment. “A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. Announced in late 2015, the effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets.”

Reflecting on the Goal and Baseline for Exascale Computing

Thomas Schulthess from CSCS gave this Invited Talk at SC16. “Experience with today’s platforms shows that there can be an order of magnitude difference in performance within a given class of numerical methods – depending only on choice of architecture and implementation. This raises the question of what our baseline is, over which the performance improvements of Exascale systems will be measured. Furthermore, how close will these Exascale systems bring us to delivering on application goals, such as kilometer-scale global climate simulations or high-throughput quantum simulations for materials design? We will discuss specific examples from meteorology and materials science.”

Podcast: LLNL’s Lori Diachin Reviews the SC16 Technical Program

“I think the most important thing I’d like people to know about SC16 is that it is a great venue for bringing the entire community together, having these conversations about what we’re doing now, what the environment looks like now and what it’ll look like in five, ten, fifteen years. The fact that so many people come to this conference allows you to really see a lot of diversity in the technologies being pursued, in the kinds of applications that are being pursued – from both the U.S. environment and the international environment. I think that’s the most exciting thing that I think about when I think about supercomputing.”

RAID Inc. Steps up with ZFS on Lustre at SC16

In this video from SC16, Brad Merchant from RAID Inc. describes the company’s new Lustre ZFS Building Block. “RAID Inc. offers a suite of building block product families that can be purchased individually or in conjunction with other RAID products to solve customers’ needs in the most demanding data-storage environments. Each product is customized to address customers’ individual requirements of performance, reliability, scalability and price. Each product is put through extensive testing and a burn-in/staging process, which ensures customers will receive a solution designed to function as specified in their unique environment.”