GPU Hackathon Gears Up for Future Perlmutter Supercomputer

NERSC recently hosted its first user hackathon to begin preparing key codes for the next-generation architecture of the Perlmutter system. Over four days, experts from NERSC, Cray, and NVIDIA worked with application code teams to help them gain new understanding of the performance characteristics of their applications and optimize their codes for the GPU processors in Perlmutter. “By starting this process early, the code teams will be well prepared for running on GPUs when NERSC deploys the Perlmutter system in 2020.”

HPE to Acquire Cray for $1.3 Billion

Today HPE announced that the company has entered into a definitive agreement to acquire Cray for approximately $1.3 billion. “This pending deal will bring together HPE, the global HPC market leader, and Cray, whose Shasta architecture is under contract to power America’s two fastest supercomputers in 2021,” said Steve Conway from Hyperion Research. “The Cray addition will boost HPE’s ability to pursue high-end procurements and will speed the combined company’s development of next-generation technologies that will benefit HPC and AI-machine learning customers at all price points.”

Epic 2018 HPC Road Trip Begins at Idaho National Lab

In this special guest feature, Dan Olds from OrionX begins his Epic HPC Road Trip series with a stop at Idaho National Laboratory. “The fast approach of SC18 gave me an idea: why not drive from my home base in Beaverton, Oregon, to Dallas, Texas and stop at national labs along the way? I love a good road trip, and what could be better than a 5,879-mile drive with visits to supercomputer users mixed in?”

Cray Powers Weather Forecasting at ZAMG in Austria

Today Cray announced that Austria’s Central Institute for Meteorology and Geodynamics (ZAMG) is using a Cray supercomputer to support a multi-year weather nowcasting project with the University of Vienna to benefit society and industry. “Using deep learning methods, ZAMG is leveraging its Cray CS-Storm supercomputer to optimize the orientation of wind-powered generators for maximum efficiency and to train neural networks with current and historical weather data.”

AMD to Power Exascale Cray System at ORNL

Today AMD announced a new exascale-class supercomputer to be delivered to ORNL in 2021. Built by Cray, the “Frontier” system is expected to deliver more than 1.5 exaFLOPS of processing performance on AMD CPU and GPU processors to accelerate advanced research programs addressing the most complex compute problems. “The combination of a flexible compute infrastructure, scalable HPC and AI software, and the intelligent Slingshot system interconnect will enable Cray customers to undertake a new age of science, discovery and innovation at any scale.”

Agenda Posted for LUG 2019 in Houston

The Lustre User Group has posted its speaker agenda for LUG 2019. The event takes place May 14-17 in Houston. “LUG 2019 is the industry’s primary venue for discussion and seminars on the Lustre parallel file system and other open source file system technologies. Don’t miss your chance to actively participate in industry dialogue on best practices and emerging technologies, explore upcoming developments of the Lustre file system, and immerse yourself in the strong Lustre community.”

Video: Cray Announces First Exascale System

In this video, Cray CEO Pete Ungaro announces Aurora – Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we’re going to need it. “It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics and modeling and simulation – all at the same time on the same system – at incredible scale.”

NERSC Taps NVIDIA Compiler Team for Perlmutter Supercomputer

NERSC has signed a contract with NVIDIA to enhance GPU compiler capabilities for Berkeley Lab’s next-generation Perlmutter supercomputer. “We are excited to work with NVIDIA to enable OpenMP GPU computing using their PGI compilers,” said Nick Wright, the Perlmutter chief architect. “Many NERSC users are already successfully using the OpenMP API to target the manycore architecture of the NERSC Cori supercomputer. This project provides a continuation of our support of OpenMP and offers an attractive method to use the GPUs in the Perlmutter supercomputer. We are confident that our investment in OpenMP will help NERSC users meet their application performance portability goals.”

Video: Intel and Cray to Build First USA Exascale Supercomputer for DOE in 2021

Today Intel announced plans to deliver the first exaflop supercomputer in the United States. The Aurora supercomputer will be used to dramatically advance scientific research and discovery. The contract, valued at more than $500 million, calls for Intel and sub-contractor Cray to deliver the system to Argonne National Laboratory in 2021. “Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO.

Video: Solving I/O Slowdown and the “Noisy Neighbor” Problem

John Fragalla from Cray gave this talk at the Rice Oil & Gas Conference. “In Oil and Gas, when using shared storage, mixed workloads can have a big impact on I/O performance, causing considerable slowdown when running small I/O alongside large I/O on the same storage system. In this presentation, Cray will share real benchmark results on the impact a ‘Noisy Neighbor’ application has on sequential I/O, and show how, with the right storage tuning and flash capacity, the storage can be optimized to meet the demanding workloads of Oil and Gas and accelerate performance in a mixed workload environment.”