Kathy Yelick to Keynote ACM Europe Conference

Kathy Yelick from LBNL will give the HPC keynote on Exascale computing at the upcoming ACM Europe Conference, which takes place Sept. 7-8 in Barcelona with main themes of Cybersecurity and High Performance Computing.

DOE Awards 1 Billion Hours of Supercomputer Time for Research

The DOE has awarded 1 billion CPU hours of compute time on Oak Ridge supercomputers to a set of important research projects vital to our nation’s future. ALCC allocations for 2017 continue the tradition of innovation and discovery, with project awards ranging from 2 million to 300 million processor hours.

HPC Analyst Crossfire at ISC 2017

In this video from ISC 2017, Addison Snell from Intersect360 Research fires back at industry leaders with hard-hitting questions about the state of the HPC industry. “Listen in as visionary leaders from the supercomputing community comment on forward-looking trends that will shape the industry this year and beyond.”

Beating Floating Point at Its Own Game: Posit Arithmetic

“Dr. Gustafson has recently finished writing a book, The End of Error: Unum Computing, that presents a new approach to computer arithmetic: the unum. The universal number, or unum format, encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. This approach obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power.”
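The headline comes from Gustafson and Yonemoto’s paper of the same name, which introduces the posit, a hardware-friendly unum variant. A posit packs a sign bit, a variable-length “regime” run that scales by useed = 2^(2^es), up to es exponent bits, and a fraction with a hidden leading 1 into a single word, so precision tapers gracefully toward the extremes of the dynamic range. As a minimal illustrative sketch of that decoding rule (not code from the book; the helper name decode_posit is ours), here is a decoder for the standard 8-bit, es = 1 posit layout:

    def decode_posit(word: int, nbits: int = 8, es: int = 1) -> float:
        """Decode an nbits-wide posit with es exponent bits into a float."""
        mask = (1 << nbits) - 1
        word &= mask
        if word == 0:
            return 0.0
        if word == 1 << (nbits - 1):       # 100...0 encodes NaR ("not a real")
            return float("nan")
        negative = bool(word >> (nbits - 1))
        if negative:
            word = (-word) & mask          # two's complement before decoding
        # Regime: run of identical bits after the sign; its length sets the
        # power of useed = 2**(2**es).
        i = nbits - 2
        regime_bit = (word >> i) & 1
        run = 0
        while i >= 0 and (word >> i) & 1 == regime_bit:
            run += 1
            i -= 1
        k = run - 1 if regime_bit else -run
        i -= 1                             # skip the terminating opposite bit
        # Exponent: up to es bits; bits that run off the end are implicitly 0.
        exp = 0
        for _ in range(es):
            exp <<= 1
            if i >= 0:
                exp |= (word >> i) & 1
                i -= 1
        # Fraction: whatever bits remain, with a hidden leading 1.
        frac_bits = i + 1
        frac = word & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
        mantissa = 1.0 + (frac / (1 << frac_bits) if frac_bits > 0 else 0.0)
        value = 2.0 ** (k * 2 ** es + exp) * mantissa
        return -value if negative else value

    # Spot checks for posit<8,1>, where useed = 2**(2**1) = 4:
    assert decode_posit(0b01000000) == 1.0
    assert decode_posit(0b01100000) == 4.0     # useed**1
    assert decode_posit(0b00100000) == 0.25    # useed**-1
    assert decode_posit(0b11000000) == -1.0

Values near 1.0 keep the most fraction bits, while very large and very small values spend bits on the regime instead, which is why posits can deliver better accuracy than IEEE floats of the same width.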

ORNL Taps D-Wave for Exascale Computing

Oak Ridge National Laboratory has announced that it is bringing on D-Wave to use quantum computing as an accelerator for future exascale applications. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”

Agenda Posted for September HPC User Forum in Milwaukee

Hyperion Research has posted the preliminary agenda for the HPC User Forum Sept. 5-7 in Milwaukee, Wisconsin. “The HPC User Forum community includes thousands of people from the steering committee, member organizations, sponsors and everyone who has attended an HPC User Forum meeting. Our mission is to promote the health of the global HPC industry and address issues of common concern to users.”

Developing a Software Stack for Exascale

In this special guest feature, Rajeev Thakur from Argonne describes why Exascale would be a daunting software challenge even if we had the hardware today. “The scale makes it complicated. And we don’t have a system that large to test things on right now.” Indeed, no such system exists yet, the hardware is still changing, and the vendor, or possibly multiple vendors, that will build the first exascale systems have not yet been selected.

How HPE is Approaching Exascale with Memory-Driven Computing

In this video from ISC 2017, Mike Vildibill of Hewlett Packard Enterprise explains why we need Exascale and how the company is pushing forward with Memory-Driven Computing. “At the heart of HPE’s exascale reference design is Memory-Driven Computing, an architecture that puts memory, not processing, at the center of the computing platform to realize a new level of performance and efficiency gains. HPE’s Memory-Driven Computing architecture is a scalable portfolio of technologies that Hewlett Packard Labs developed via The Machine research project. On May 16, 2017, HPE unveiled the latest prototype from this project, the world’s largest single-memory computer.”

DEEP-EST Project Looks to Building-blocks for Exascale

The DEEP exascale research project has entered its next phase with the launch of the DEEP-EST project at the Jülich Supercomputing Centre in Germany. “The optimization of homogeneous systems has more or less reached its limit. We are gradually developing the prerequisites for a highly efficient modular supercomputing architecture which can be flexibly adapted to the various requirements of scientific applications,” explains Prof. Thomas Lippert, head of the Jülich Supercomputing Centre (JSC).

How Zettar Transferred 1 Petabyte of Data in Just 34 Hours Using AIC Servers

In the world of HPC, moving data is a sin. That may be changing. “Just a few weeks ago, AIC announced the successful completion of a landmark, 1-petabyte transfer of data in 34 hours, during a recent test by Zettar that relied on the company’s SB122A-PH, 1U 10-bay NVMe storage server. The milestone was reached using a unique 5,000-mile 100Gbps loop which is an SDN layer over a shared, production 100G network operated by the US DOE’s ESnet.”
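For scale, a quick back-of-the-envelope check of that rate, assuming a decimal petabyte (10^15 bytes) and the 34-hour figure from the announcement:

    # Sustained throughput of a 1 PB transfer completed in 34 hours,
    # assuming a decimal petabyte (1e15 bytes).
    bits_moved = 1e15 * 8                 # 1 PB expressed in bits
    seconds = 34 * 3600                   # 34 hours in seconds
    gbps = bits_moved / seconds / 1e9
    print(f"{gbps:.1f} Gbps sustained")   # ~65.4 Gbps

That works out to roughly 65 Gbps sustained, about two-thirds of the 100Gbps loop’s nominal capacity, held continuously for nearly a day and a half.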