Former Intel HPC Leader Trish Damkroger Joins HPE as Chief Product Officer for HPC and AI

Damkroger joins HPE at a pivotal time for the company’s HPC organization as its customers at three US Department of Energy national laboratories transition to the exascale era — supercomputers capable of 10¹⁸ calculations per second. As HPE’s Justin Hotard stated in his blog, “HPE is at the forefront of making exascale computing, a technological magnitude that will deliver 10X faster performance than the majority of today’s most powerful supercomputers, a soon-to-be reality. With the upcoming U.S. Department of Energy’s exascale system, Frontier, an HPE Cray EX supercomputer that will be hosted at Oak Ridge National Laboratory, we are unlocking a new world of supercomputing.”

Preparing for Exascale: Aurora to Drive Brain Map Construction

The U.S. Department of Energy’s Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of the system, 15 research teams are taking part in the Aurora Early Science Program through the Argonne Leadership Computing Facility (ALCF), a […]

Let’s Talk Exascale Code Development: WDMAPP—XGC, GENE, GEM

This is episode 82 of the Let’s Talk Exascale podcast, provided by the Department of Energy’s Exascale Computing Project, exploring the expected impacts of exascale-class supercomputing. This is the third in a series on sharing best practices in preparing applications for the upcoming Aurora exascale supercomputer at the Argonne Leadership Computing Facility. The series highlights […]

Podcast: A Shift to Modern C++ Programming Models

In this Code Together podcast, Alice Chan from Intel and Hal Finkel from Argonne National Lab discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning. “We discuss the important shift to modern C++ programming models, and how the cross-industry oneAPI initiative, and DPC++, bring much-needed portable performance to today’s developers.”

Video: Preparing to Program Aurora at Exascale – Early Experiences and Future Directions

Hal Finkel from Argonne gave this talk at IWOCL / SYCLcon 2020. “Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date.”

Scientists Look to Exascale and Deep Learning for Developing Sustainable Fusion Energy

Scientists from Princeton Plasma Physics Laboratory are leading an Aurora ESP project that will leverage AI, deep learning, and exascale computing power to advance fusion energy research. “With a suite of the world’s most powerful path-to-exascale supercomputing resources at their disposal, William Tang and colleagues are developing models of disruption mitigation systems (DMS) to increase warning times and work toward eliminating major interruption of fusion reactions in the production of sustainable clean energy.”

Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale

Today HPE announced that ALCF will deploy the new Cray ClusterStor E1000 as its parallel storage solution. The new collaboration supports ALCF’s scientific research in areas such as seismic activity from earthquakes, aerospace turbulence and shock waves, physical genomics, and more. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs to support emerging, data-intensive workloads.”

Altair PBS Works Steps Up to Exascale and the Cloud

In this video from SC19, Sam Mahalingam from Altair describes how the company is enhancing PBS Works software to ease the migration of HPC workloads to the Cloud. “Argonne National Laboratory has teamed with Altair to implement a new scheduling system that will be employed on the Aurora supercomputer, slated for delivery in 2021. PBS Works runs big — 50,000 nodes in one cluster, 10,000,000 jobs in a queue, and 1,000 concurrent active users.”
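For context, PBS jobs are described by small batch scripts whose `#PBS` directives request resources; the scheduler then places them across the cluster. A minimal sketch of such a script, with placeholder queue, account, and resource values (not Aurora’s actual configuration):

```shell
#!/bin/bash
#PBS -N hello_mpi            # job name
#PBS -l select=4:ncpus=64    # 4 nodes, 64 cores each (illustrative sizes)
#PBS -l walltime=00:30:00    # 30-minute wall-clock limit
#PBS -q workq                # placeholder queue name
#PBS -A MyProject            # placeholder allocation/account

cd "$PBS_O_WORKDIR"          # run from the submission directory
mpiexec -n 256 ./my_app      # launch across all requested cores
```

A script like this is submitted with `qsub` and monitored with `qstat`; at the scale quoted above, the scheduler’s job is to keep millions of such requests flowing onto tens of thousands of nodes.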

Intel HPC Devcon Keynote: Exascale for Everyone

The convergence of HPC and AI is driving a paradigm shift in computing. Learn about Intel’s software-first strategy to further accelerate this convergence and extend the boundaries of HPC as we know it today. oneAPI will ease application development and accelerate innovation in the xPU era. Intel delivers a diverse mix of scalar, vector, spatial, and matrix architectures deployed across a range of silicon platforms (CPUs, GPUs, FPGAs, and specialized accelerators), all unified by an open, industry-standard programming model. The talk concludes with innovations in a new graphics architecture and the capabilities it will bring to the Argonne exascale system in 2021.

Intel Unveils New GPU Architecture and oneAPI Software Stack for HPC and AI

Today at SC19, Intel unveiled its new GPU architecture optimized for HPC and AI as well as an ambitious new software initiative called oneAPI that represents a paradigm shift from today’s single-architecture, single-vendor programming models. “HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep learning NNPs which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers unified and scalable abstraction for heterogeneous architectures.”