Cerebras 1.2 Trillion Transistor Chip Integrated with LLNL’s Lassen System for AI Research

Lawrence Livermore National Laboratory (LLNL) and AI company Cerebras Systems today announced the integration of Cerebras’ 1.2-trillion-transistor Wafer Scale Engine (WSE) chip into the National Nuclear Security Administration’s (NNSA) 23-petaflop Lassen supercomputer. The pairing of Lassen’s simulation capability with Cerebras’ machine learning compute system, along with the CS-1 accelerator system that houses the chip, […]

Never Enough Bandwidth: Optical I/O Consortium Formed to Set Interconnect Standards

More than 20 companies have joined an industry consortium to establish specifications for multi-wavelength integrated optics – the emerging interconnect technology that advocates say is critical to next-generation HPC and AI. Announced today, the CW-WDM MSA (Continuous-Wave Wavelength Division Multiplexing Multi-Source Agreement) Group wants to build an ecosystem to work on common standards and interoperability for dense laser light sources, which in turn will enable broad adoption of optical I/O.

The Incorporation of Machine Learning into Scientific Simulations at LLNL

Katie Lewis from Lawrence Livermore National Laboratory gave this talk at the Stanford HPC Conference. “Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness.”

Accelerating vaccine research for COVID-19 with HPC and AI

In this special guest feature, Peter Ungaro from HPE writes that HPC is playing a leading role in our fight against COVID-19 to support the urgent need to find a vaccine that will save lives and reduce suffering worldwide. “At HPE, we are committed to advancing the way we live and work. As a world leader in HPC and AI, we recognize the impact we can make by applying modeling, simulation, machine learning and analytics capabilities to data to accelerate insights and discoveries that were never before possible.”

Podcast: Spack Helps Automate Deployment of Supercomputer Software

In this Let’s Talk Exascale podcast, Todd Gamblin from LLNL describes how the Spack flexible package manager helps automate the deployment of software on supercomputer systems. “After many hours building software on Lawrence Livermore’s supercomputers, in 2013 Todd Gamblin created the first prototype of a package manager he named Spack (Supercomputer PACKage manager). The tool caught on, and development became a grassroots effort as colleagues began to use the tool.”

AMD and Penguin Computing Upgrade Corona Supercomputer to fight COVID-19

Today Lawrence Livermore National Laboratory, Penguin Computing, and AMD announced an agreement to upgrade the Lab’s unclassified Corona HPC cluster with AMD Instinct accelerators, expected to nearly double the peak performance of the machine. The system will be used by the COVID-19 HPC Consortium, a nationwide public-private partnership that is providing free computing time and resources to scientists around the country engaged in the fight against the coronavirus.

Podcast: ZFP Project looks to Reduce Memory Footprint and Data Movement on Exascale Systems

In this Let’s Talk Exascale podcast, Peter Lindstrom from Lawrence Livermore National Laboratory describes how the ZFP project will help reduce the memory footprint and data movement on exascale systems. “To perform those computations, we oftentimes need random access to individual array elements,” Lindstrom said. “Doing that, coupled with data compression, is extremely challenging.”

Video: What Does it Take to Reach 2 Exaflops?

In this video, Addison Snell from Intersect360 Research moderates a panel discussion on the El Capitan supercomputer. With a peak performance of over 2 Exaflops, El Capitan will be roughly 10x faster than today’s fastest supercomputer and more powerful than the current Top 200 systems — combined! “Watch this webcast to learn from our panel of experts about the National Nuclear Security Administration’s requirements and how the Exascale Computing Project helped drive the hardware, software, and collaboration needed to achieve this milestone.”

Podcast: A Look inside the El Capitan Supercomputer coming to LLNL

In this podcast, the Radio Free HPC team looks at some of the more interesting configuration aspects of the pending El Capitan exascale supercomputer coming to LLNL in 2023. “Dan talks about the briefing he received on the new Lawrence Livermore El Capitan system to be built by HPE/Cray. This new $600 million system will be fueled by the AMD Genoa processor coupled with AMD’s Instinct GPUs. Performance should come in at TWO 64-bit exaflops peak, which is very, very sporty.”

World’s Largest Spectra TFinity Tape Library installed at LLNL

Lawrence Livermore National Laboratory (LLNL) is now home to the world’s largest Spectra TFinity system, following a complete replacement of the tape library hardware that supports Livermore’s data archives. Housed behind Sierra—the world’s 2nd fastest supercomputer—the new tape library helps the Laboratory meet some of the most complex data archiving demands in the world and offers the speed, agility, and capacity required to take LLNL into the exascale era.