ALCF Starts Aurora Learning Path Series May 17

The Argonne Leadership Computing Facility will begin a new Aurora Learning Paths series on Wednesday, May 17 from 1:30-3:30 pm CT. Two more sessions will be held on June 14 and July 12. Registration is here. The three-part series will go into detail about how to apply key Intel architectural innovations and libraries via smart […]

insideHPC-Hyperion Research Interview: Argonne’s Rick Stevens on the Future of Everything – U.S. Post-Exascale Strategy, AI for Science, HPC in 2040 and an Aurora Install Update

In this interview conducted on behalf of HPC analyst firm Hyperion Research, we spoke with Argonne National Laboratory’s Rick Stevens about the present and future of HPC. The starting point for this conversation is a presentation Stevens gave at a Hyperion event in Washington related to implementation of the CHIPS and Science Act and includes his insights on the post-exascale build-out of an integrated network of U.S. supercomputing capacity (the Integrated Research Infrastructure, or IRI). We then look at AI for science and the use of data-driven modeling and simulation, which shows the potential to deliver major performance gains for researchers….

Intel Alters HPC-AI Roadmap: ‘Rialto Bridge’ GPU Discontinued

After business hours on Friday, Intel released information on a “streamlined and simplified” data center GPU roadmap with direct impact on HPC and AI. The new plan calls for the discontinuation of the “Rialto Bridge” GPU, which was to have succeeded the Ponte Vecchio chip that itself was delayed several years before shipments began last […]

Aurora on Schedule? Intel Says it’s Shipping Ponte Vecchio-Sapphire Rapids Blades to Argonne

The rumors had begun to circulate: October is near, the fourth quarter is starting, and 2023 isn't far behind, all of which means Intel is coming up against a hard deadline to deliver its delayed Aurora exascale-class supercomputer to Argonne National Laboratory by the end of the year. Is another delay in the offing?
Then, yesterday, Intel tweeted this out: “Server blades with Intel 4th Gen Xeon and Ponte Vecchio, which uses Intel’s most advanced IP and packaging technology, are now shipping to Argonne National Labs to power the Aurora supercomputer!” The tweet was backed by comments to the same effect from CEO Pat Gelsinger.

Former Intel HPC Leader Trish Damkroger Joins HPE as Chief Product Officer for HPC and AI

Damkroger joins HPE at a pivotal time for the company’s HPC organization as its customers at three US Department of Energy national laboratories transition to the exascale era (supercomputers capable of 10^18 calculations per second). As Hotard stated in his blog, “HPE is at the forefront of making exascale computing, a technological magnitude that will deliver 10X faster performance than the majority of today’s most powerful supercomputers, a soon-to-be reality. With the upcoming U.S. Department of Energy’s exascale system, Frontier, an HPE Cray EX supercomputer that will be hosted at Oak Ridge National Laboratory, we are unlocking a new world of supercomputing.”

Preparing for Exascale: Aurora to Drive Brain Map Construction

The U.S. Department of Energy’s Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of the system, 15 research teams are taking part in the Aurora Early Science Program through the Argonne Leadership Computing Facility (ALCF), a […]

Let’s Talk Exascale Code Development: WDMAPP—XGC, GENE, GEM

This is episode 82 of the Let’s Talk Exascale podcast, provided by the Department of Energy’s Exascale Computing Project, exploring the expected impacts of exascale-class supercomputing. This is the third in a series on sharing best practices in preparing applications for the upcoming Aurora exascale supercomputer at the Argonne Leadership Computing Facility. The series highlights […]

Podcast: A Shift to Modern C++ Programming Models

In this Code Together podcast, Alice Chan from Intel and Hal Finkel from Argonne National Lab discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning. “We discuss the important shift to modern C++ programming models, and how the cross-industry oneAPI initiative, and DPC++, bring much-needed portable performance to today’s developers.”

Video: Preparing to program Aurora at Exascale – Early experiences and future directions

Hal Finkel from Argonne gave this talk at IWOCL / SYCLcon 2020. “Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date.”

Scientists Look to Exascale and Deep Learning for Developing Sustainable Fusion Energy

Scientists from Princeton Plasma Physics Laboratory are leading an Aurora ESP project that will leverage AI, deep learning, and exascale computing power to advance fusion energy research. “With a suite of the world’s most powerful path-to-exascale supercomputing resources at their disposal, William Tang and colleagues are developing models of disruption mitigation systems (DMS) to increase warning times and work toward eliminating major interruption of fusion reactions in the production of sustainable clean energy.”