ORNL Taps D-Wave for Exascale Computing

Oak Ridge National Laboratory (ORNL) has announced that it is bringing on D-Wave to use quantum computing as an accelerator for future exascale applications. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”

Agenda Posted for September HPC User Forum in Milwaukee

Hyperion Research has posted the preliminary agenda for the HPC User Forum Sept. 5-7 in Milwaukee, Wisconsin. “The HPC User Forum community includes thousands of people from the steering committee, member organizations, sponsors and everyone who has attended an HPC User Forum meeting. Our mission is to promote the health of the global HPC industry and address issues of common concern to users.”

Developing a Software Stack for Exascale

In this special guest feature, Rajeev Thakur from Argonne describes why exascale would be a daunting software challenge even if we had the hardware today. “The scale makes it complicated. And we don’t have a system that large to test things on right now.” Indeed, no such system exists yet, the hardware is changing, and the vendor or vendors that will build the first exascale systems have not yet been selected.

How HPE is Approaching Exascale with Memory-Driven Computing

In this video from ISC 2017, Mike Vildibill of Hewlett Packard Enterprise explains why we need exascale and how the company is pushing forward with Memory-Driven Computing. “At the heart of HPE’s exascale reference design is Memory-Driven Computing, an architecture that puts memory, not processing, at the center of the computing platform to realize a new level of performance and efficiency gains. HPE’s Memory-Driven Computing architecture is a scalable portfolio of technologies that Hewlett Packard Labs developed via The Machine research project. On May 16, 2017, HPE unveiled the latest prototype from this project, the world’s largest single memory computer.”

DEEP-EST Project Looks to Building Blocks for Exascale

The DEEP exascale research computing project has entered its next phase with the launch of the DEEP-EST project at the Jülich Supercomputing Centre in Germany. “The optimization of homogeneous systems has more or less reached its limit. We are gradually developing the prerequisites for a highly efficient modular supercomputing architecture which can be flexibly adapted to the various requirements of scientific applications,” explains Prof. Thomas Lippert, head of the Jülich Supercomputing Centre (JSC).

How Zettar Transferred 1 Petabyte of Data in Just 34 Hours Using AIC Servers

In the world of HPC, moving data is a sin. That may be changing. “Just a few weeks ago, AIC announced the successful completion of a landmark, 1-petabyte transfer of data in 34 hours, during a recent test by Zettar that relied on the company’s SB122A-PH, 1U 10-bay NVMe storage server. The milestone was reached using a unique 5,000-mile 100Gbps loop, an SDN layer over a shared, production 100G network operated by the US DOE’s ESnet.”
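As a rough sanity check on those figures, the sustained throughput implied by 1 PB in 34 hours can be worked out in a few lines. This is back-of-the-envelope arithmetic assuming a decimal petabyte (10^15 bytes), not a number taken from the announcement itself:

```python
# Back-of-the-envelope check on the reported Zettar transfer: 1 PB in 34 hours.
# Assumes a decimal petabyte (10**15 bytes); the announcement does not specify.
bytes_moved = 10**15
seconds = 34 * 3600

# Effective sustained throughput in gigabits per second
gbps = bytes_moved * 8 / seconds / 1e9
print(f"Effective rate: {gbps:.1f} Gbps")  # prints "Effective rate: 65.4 Gbps"
```

That works out to roughly 65 Gbps sustained, a plausible figure for a shared, production 100Gbps loop.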

Dr. Eng Lim Goh on HPE’s Recent PathForward Award for Exascale Computing

In this video from ISC 2017, Dr. Eng Lim Goh from HPE discusses the company’s recent PathForward award as well as the challenges of designing energy efficient Exascale systems. After that, he gives his unique perspective on HPE’s “The Machine” architecture for memory-driven computing. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand.”

Is Aurora Morphing into an Exascale AI Supercomputer?

The recently published Department of Energy FY 2018 Congressional Budget Request has raised a lot of questions about the Aurora supercomputer that was scheduled to be deployed at Argonne ALCF next year. “As we covered in our Radio Free HPC podcast, Aurora appears to be morphing into a very different kind of machine.”

Video: DoE Taps HPE Memory-Driven Computing for Exascale

Today Hewlett Packard Enterprise announced it has been awarded a research grant from the DoE to develop a reference design for an exascale supercomputer. “Our novel Memory-Driven Computing architecture combined with our deep expertise in HPC and robust partner ecosystem uniquely positions HPE to develop the first U.S. exascale supercomputer and deliver against the PathForward program’s goals.”

Podcast: DoE Awards $258 Million for Exascale to U.S. HPC Vendors

Today U.S. Secretary of Energy Rick Perry announced that six leading U.S. technology companies will receive funding from the Department of Energy’s Exascale Computing Project (ECP) as part of its new PathForward program, accelerating the research necessary to deploy the nation’s first exascale supercomputers. “Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” said Secretary Perry. “These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”