HPE at ISC: ‘Perform Like a Supercomputer, Run Like a Cloud’

Since we last saw HPE at ISC a year ago, the company has been on a strong run of success in HPC and supercomputing – successes the company will no doubt be happy to discuss at virtual ISC 2021. This being the year of exascale, HPE is likely to put toward the top of its list […]

6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed precision performance. Perlmutter is based on the HPE Cray Shasta platform, including the Slingshot interconnect; it is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system […]

Meet the Frontier Exascale Supercomputer: How Big Is a Quintillion?

Are all comparisons so odious, really? Some can illuminate, some can awe. HPE-Cray has put out an infographic about its Frontier exascale supercomputer, the U.S.’s first, scheduled to be shipped to Oak Ridge National Laboratory later this year. It’s got interesting comparisons that shed light on how big a quintillion is. Make that 1.5 quintillion, […]

NOAA Upgrades Global Weather Model Following 2020 HPC Additions

The U.S. National Oceanic and Atmospheric Administration (NOAA) said today it is upgrading its Global Forecast System (GFS) weather model to improve hurricane genesis forecasting, modeling for snowfall location, heavy rainfall forecasts and overall model performance. In February 2020, NOAA announced it would triple its weather and climate supercomputing capacity with the addition […]

HPE to Build Research Supercomputer for Sweden’s KTH Royal Institute of Technology

HPE’s string of HPC contract wins has continued with the company’s announcement today that it’s building a supercomputer for KTH Royal Institute of Technology (KTH) in Stockholm. Funded by Swedish National Infrastructure for Computing (SNIC), the HPE Cray EX system will target modeling and simulation in academic pursuits and industrial areas, including drug design, renewable energy […]

Hyperion HPC User-Buyer Study: Demand for Sim-Analytics Systems, a Throughput Boom, FPGAs and AMD GPUs on the Move and Other Findings

Industry analyst firm Hyperion Research has completed its latest study of high performance computing buyers and users, its first since 2017, and the report reveals a quickly evolving and innovating industry in which, among other findings, end users are figuring out how to leverage the variety of compute architectures while also calling for HPC systems […]

Reading the Intel Tea Leaves: Pat Gelsinger’s HPC Paradox

As he takes charge of Intel, CEO Pat Gelsinger faces a paradox: his new company is both troubled and a revenue geyser; if Intel is to continue its historical growth rates, he’ll need the skills of a corporate turnaround artist. These contradictions surely apply to Intel’s position in HPC/AI/data center server processors, where the company […]

CoolIT Renews Commitment with HPE Cray EX Supercomputers and HPE Apollo 20 Systems

Calgary, Alberta. November 3rd, 2020 – CoolIT Systems, maker of scalable direct liquid cooling (DLC) technology for desktop and data center systems, announces continued participation with exascale and high performance computing vendor Hewlett Packard Enterprise (HPE) on multiple liquid cooling programs. Building on a foundation and history of cooperation, HPE and CoolIT have renewed their commitment to […]

Los Alamos Stands up HPE Cray EX for COVID-19 Fight

Los Alamos National Laboratory reported it has completed the installation of “Chicoma,” based on AMD EPYC CPUs and the HPE Cray EX supercomputer architecture. The HPC platform is aimed at enhancing the lab’s R&D efforts in support of COVID-19 research. Chicoma is an early deployment of HPE Cray EX, which offers a large-scale system architecture […]

Getting to Exascale: Nothing Is Easy

In the weeks leading to today’s Exascale Day observance, we set ourselves the task of asking supercomputing experts about the unique challenges, the particularly vexing problems, of building a computer capable of 10,000,000,000,000,000,000 calculations per second. Readers of this publication might guess, given Intel’s trouble producing the 7nm “Ponte Vecchio” GPU for its delayed Aurora system for Argonne National Laboratory, that compute is the toughest exascale nut to crack. But according to the people we interviewed, the difficulties of engineering exascale-class supercomputing run the systems gamut. As we listened to exascale’s daunting litany of technology difficulties….