
Hyperion Research Announces Special June HPC User Forum: Exascale at Oak Ridge

ST. PAUL, Minn., May 5, 2022 — HPC industry analyst firm Hyperion Research today announced a special HPC User Forum to be held June 21-22, 2022 hosted onsite by the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL). According to Hyperion Research CEO Earl Joseph, “The special June 2022 HPC User Forum will […]

Post-Exascale Fabric: NNSA Awards Cornelis Networks $18M for High Performance Network R&D

Those who gave up Intel’s Omni-Path fabric for dead, a technology Intel began developing in 2012 and stopped supporting seven years later, may want to think again. Cornelis Networks, the company breathing new life into Omni-Path since 2020, has won an $18 million R&D contract from the U.S. National Nuclear Security Administration (NNSA). The award is part of DOE’s Next-Generation High Performance Computing Network (NG-HPCN) project….

@HPCpodcast: Exascale in China and a Philosophical Turn on Riding Advanced Tech to Super-wealth and Media Power

The mystery wrapped in a riddle that is the state of exascale supercomputing in China is the main topic of this week’s @HPCpodcast episode. Renewing attention on this recurring topic is the release of a research paper from PRC university scientists on the use of the Sunway TaihuLight system, the no. 4-ranked supercomputer on the Top500 list, for “quantum many-body problems,” which are problems of extreme complexity and scale.

It’s Time to Resolve the Root Cause of Congestion

[SPONSORED POST] In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion. Congestion can delay workload completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources waiting for delayed data to arrive. Despite various brute-force attempts to resolve the congestion issue, the problem has persisted. Until now.

Changes Afoot at Oak Ridge Leadership Computing Facility, Exascale Computing Project

Change is afoot at the Exascale Computing Project (ECP) and at the Oak Ridge Leadership Computing Facility (OLCF). Those who listened to Jeff Nichols’ appearance this month on the @HPCpodcast know about the upcoming retirement of Nichols, who is associate director of Oak Ridge National Laboratory with oversight over the National Center for Computational Sciences […]

Exascale in China? 40 Million Cores Used for Many-Body Quantum Simulation

For several years, some in the HPC community have suspected China of sandbagging the world on its true supercomputing capabilities. Those suspicions may have been confirmed with the publication of a research paper last week in which Chinese university researchers reported that 40 million heterogeneous cores within China’s Sunway supercomputer have been directed at a […]

Collaboration Reports Milestone for Neutral Atom Quantum Computing

BOULDER, CO, April 20, 2022 — ColdQuanta, Riverlane and the University of Wisconsin–Madison, today announced they have successfully run a quantum algorithm on a cold atom qubit array system, codenamed “AQuA,” which the three companies say is an industry first “that brings quantum computing one step closer to real world applications.” The milestone was conducted at the University of Wisconsin–Madison […]

ExaIO: Access and Manage Storage of Data Efficiently and at Scale on Exascale Systems

As the word exascale implies, the forthcoming generation of exascale supercomputer systems will deliver 10^18 flop/s of scalable computing capability. All that computing capability will be for naught if the storage hardware and I/O software stack cannot meet the storage needs of applications running at scale—leaving applications either to drown in data when attempting to write to storage or starve while waiting to read data from storage. Suren Byna, PI of the ExaIO project in the Exascale Computing Project (ECP) and a computer staff scientist at Lawrence Berkeley National Laboratory, highlights the need to prepare for the I/O demands of exascale supercomputers, noting that storage is typically the last subsystem available for testing on these systems.

Quantinuum Announces Quantum Volume 4096 Achievement

Quantum computing development company Quantinuum announced that the System Model H1-2 doubled its performance “to become the first commercial quantum computer to pass Quantum Volume 4096, a benchmark introduced by IBM in 2019 to measure the overall capability and performance of quantum computers.” It marks the sixth time in two years that Quantinuum’s H-Series hardware, […]

@HPCpodcast: What’s New in HPC-class Storage and a New Feature: Top of the News

Join us for this episode – episode 20, be it noted – of the @HPCpodcast. It includes a new segment, Top of the News, offering a look at the top HPC developments of the week. Our discussion features federal funding for PsiQuantum and GlobalFoundries’ quantum computing research in upstate New York, along with AMD’s proposed acquisition of Pensando and Fujitsu’s new HPC cloud offerings, which include supercomputing technology used in Fugaku, the world’s most powerful HPC system.