Intel Unveils New GPU Architecture and oneAPI Software Stack for HPC and AI

Today at SC19, Intel unveiled its new GPU architecture optimized for HPC and AI, as well as an ambitious new software initiative called oneAPI that represents a paradigm shift from today’s single-architecture, single-vendor programming models. “HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep learning NNPs, which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers unified and scalable abstraction for heterogeneous architectures.”

New Cray ClusterStor E1000 to Power Exascale Workloads

Today Cray unveiled its Cray ClusterStor E1000 system, an entirely new parallel storage platform for the Exascale Era. “As the external high performance storage system for the first three U.S. exascale systems, Cray ClusterStor E1000 will total over 1.3 exabytes of storage for all three systems combined. ClusterStor E1000 systems can deliver up to 1.6 terabytes per second and up to 50 million I/O operations per second per rack – more than double compared to other parallel storage systems in the market today.”
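The per-rack figures quoted above lend themselves to a quick back-of-envelope calculation. The sketch below scales the quoted 1.6 terabytes per second per rack linearly; the 10-rack configuration is a hypothetical example, not a quoted deployment size, and real systems only approximate linear scaling:

```python
# Back-of-envelope sketch based on the quoted ClusterStor E1000 per-rack
# figures. The rack count is an assumed example; linear scaling is an
# idealization that real deployments only approximate.

TB_PER_SEC_PER_RACK = 1.6      # quoted peak throughput per rack (TB/s)
IOPS_PER_RACK = 50_000_000     # quoted peak I/O operations per second per rack

def aggregate_throughput(racks: int) -> float:
    """Peak aggregate throughput in TB/s, assuming linear scaling."""
    return racks * TB_PER_SEC_PER_RACK

def seconds_to_stream(petabytes: float, racks: int) -> float:
    """Time to stream `petabytes` of data at peak rate (1 PB = 1000 TB)."""
    return petabytes * 1000 / aggregate_throughput(racks)

# Hypothetical 10-rack system:
print(aggregate_throughput(10))    # 16.0 TB/s peak
print(seconds_to_stream(1, 10))    # 62.5 s to stream 1 PB at peak
```

At peak, even a modest configuration moves a petabyte in about a minute, which is the scale of throughput exascale checkpointing workloads demand.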

Video: Update on the Exascale Computing Project

In this video, ECP Director Doug Kothe provides an update on the Exascale Computing Project. ECP’s mission is to ensure that a capable exascale computing ecosystem will come to fruition with the arrival of the nation’s first exascale systems. “Enduring legacy translates to having dozens of application technologies that will be used to tackle some of the toughest problems in DOE and the nation, and so the applications are now going to be positioned to address their challenge problems and in many cases help solve them or be a part of the solution.”

ISC 2019 Recap from Glenn Lockwood

In this special guest feature, Glenn Lockwood from NERSC shares his impressions of ISC 2019 from an I/O perspective. “I was fortunate enough to attend the ISC HPC conference this year, and it was a delightful experience from which I learned quite a lot. For the benefit of anyone interested in what they have missed, I took the opportunity on the eleven-hour flight from Frankfurt to compile my notes and thoughts over the week.”

Call for Proposals: Get on Big Iron with the ALCF Data Science Program

The ALCF Data Science Program at Argonne has issued its Call for Proposals. The program aims to accelerate discovery across a broad range of scientific domains that require data-intensive and machine learning algorithms to address challenging research problems. “Ongoing and past ADSP projects span a diverse range of science domains, e.g. Materials, Imaging, Neuroscience, Engineering, Combustion/CFD, Cosmology; and involve large science collaborations.”

The Pending Age of Exascale

In this special guest feature from Scientific Computing World, Robert Roe looks at advances in exascale computing and the impact of AI on HPC development. “There is a lot of co-development, AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.”

Podcast: Intel to Deliver Exascale for the Advancement of Science

In this Chip Chat podcast, Trish Damkroger from Intel outlines a few of the key technologies coming to the Aurora supercomputer in 2021. To enable Exascale levels of performance, Aurora will be built with a future generation Intel Xeon Scalable processor, the recently announced Intel Xe compute architecture, and Intel Optane DC persistent memory. Built by subcontractor Cray, Aurora will enable ground-breaking science such as precision medicine, climate modeling, weather forecasting, and materials science.

Video: Cray Announces First Exascale System

In this video, Cray CEO Pete Ungaro announces Aurora – Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we’re going to need it. “It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics and modeling and simulation – all at the same time on the same system – at incredible scale.”

Video: Intel and Cray to Build First USA Exascale Supercomputer for DOE in 2021

Today Intel announced plans to deliver the first exaflop supercomputer in the United States. The Aurora supercomputer will be used to dramatically advance scientific research and discovery. The contract is valued at more than $500 million and will be delivered to Argonne National Laboratory by Intel and subcontractor Cray in 2021. “Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO.

ALCF – The March toward Exascale

David E. Martin gave this talk at the HPC User Forum. “In 2021, the Argonne Leadership Computing Facility (ALCF) will deploy Aurora, a new Intel-Cray system. Aurora will be capable of over 1 exaflops. It is expected to have over 50,000 nodes and over 5 petabytes of total memory, including high bandwidth memory. The Aurora architecture will enable scientific discoveries using simulation, data and learning.”