Prototype of Fugaku Supercomputer reaches Number One on Green500

Today Fujitsu announced that a prototype of the Fugaku supercomputer, being jointly developed by RIKEN and Fujitsu, took the No. 1 spot on the Green500, a global ranking based on the energy efficiency of supercomputers. "With Fugaku, we succeeded in developing a general-purpose Arm CPU with the world's highest energy efficiency, far exceeding our targets through co-design," said Satoshi Matsuoka, Director of the RIKEN Center for Computational Science (R-CCS).

Aspen Systems to boost performance of NASA NCCS Discover Supercomputer

Today Aspen Systems announced that the NASA Center for Climate Simulation (NCCS) at NASA's Goddard Space Flight Center in Greenbelt, MD has increased the computational power of its primary computing platform, the Discover supercomputer, by more than 30 percent. The Scalable Unit 15 (SCU15) project was awarded to Aspen Systems, Inc., which built and installed the new 25,600-core scalable compute unit to expand the system's capacity for climate data analysis and visualized data modeling.

Lenovo and Intel team up for Harvard Supercomputer and New Exascale Visionary Council

Today Lenovo announced the deployment of Cannon, Harvard University's first liquid-cooled supercomputer. Developed in cooperation with Intel, the new system's advanced supercomputing infrastructure will enable discoveries in areas such as earthquake forecasting, predicting the spread of disease, and star formation. In related news, Lenovo and Intel announced the creation of an exascale visionary council called Project Everyscale. The project's mission is to enable broad adoption of exascale-focused technologies for organizations of all sizes.

HPE Tackles AI Ops R&D for Energy Efficiency, Sustainability and Resiliency in Data Centers

Today HPE announced an AI Ops R&D collaboration with the National Renewable Energy Laboratory (NREL) to develop AI and machine learning technologies to automate and improve operational efficiency, including resiliency and energy usage, in data centers for the exascale era. The effort is part of NREL's ongoing mission as a world leader in advancing energy efficiency and renewable energy technologies to create and implement new approaches that reduce energy consumption and lower operating costs.

Dr. Eng Lim Goh on Swarm Learning and Steering HPC Simulation with AI

In this fireside chat from SC19, Dr. Eng Lim Goh from HPE describes how the convergence of HPC and AI is changing the way scientists and engineers do their simulations. He also cites a case study of “Swarm Learning” where hospitals were able to train AI diagnostic models without sharing private patient data. Transcript: insideHPC: […]
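
For readers new to the concept, the essence of swarm (or federated) learning is that training happens where the data lives: each hospital updates its own copy of the model on its local records, and only the model parameters are exchanged and combined. The toy C++ sketch below illustrates just that principle with a shared linear model averaged across three simulated sites. It is a conceptual illustration with made-up data, not HPE's Swarm Learning software, which Goh describes as coordinating peers through a blockchain rather than a central server.

```cpp
// Toy sketch of the swarm/federated learning idea: each site trains on its
// own private data and only model parameters are exchanged and averaged.
// Purely illustrative -- NOT HPE's Swarm Learning implementation.
#include <iostream>
#include <vector>

struct Site {
    std::vector<double> x, y;  // private local data: never leaves the site
    double w = 0.0, b = 0.0;   // local copy of the shared model y ~ w*x + b

    // One pass of gradient descent using only this site's data.
    void train_locally(double lr) {
        double gw = 0.0, gb = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            const double err = (w * x[i] + b) - y[i];
            gw += err * x[i];
            gb += err;
        }
        w -= lr * gw / x.size();
        b -= lr * gb / x.size();
    }
};

int main() {
    // Three simulated "hospitals" whose data follows the same underlying trend.
    std::vector<Site> sites(3);
    sites[0].x = {1, 2, 3};  sites[0].y = {2.1, 3.9, 6.2};
    sites[1].x = {4, 5};     sites[1].y = {8.1, 9.8};
    sites[2].x = {6, 7, 8};  sites[2].y = {12.2, 13.9, 16.1};

    for (int round = 0; round < 200; ++round) {
        for (auto& s : sites) s.train_locally(0.01);

        // Only the parameters (w, b) are shared and averaged -- never the data.
        double w_avg = 0.0, b_avg = 0.0;
        for (const auto& s : sites) { w_avg += s.w; b_avg += s.b; }
        w_avg /= sites.size();
        b_avg /= sites.size();
        for (auto& s : sites) { s.w = w_avg; s.b = b_avg; }
    }

    std::cout << "shared model after training: w = " << sites[0].w
              << ", b = " << sites[0].b << "\n";  // expect w near 2
    return 0;
}
```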

World’s Fastest Supercomputers Look Familiar on November TOP500 List

Today marked the release of the 54th edition of the TOP500 list of the world’s fastest supercomputers. In summary, the top of the list remains largely unchanged. In fact, the top 10 systems are unchanged from the previous list. “The latest TOP500 list saw China and the US maintaining their dominance of the list, albeit in different categories. Meanwhile, the aggregate performance of the 500 systems, based on the High Performance Linpack (HPL) benchmark, continues to rise and now sits at 1.66 exaflops. The entry level to the list has risen to 1.14 petaflops, up from 1.02 petaflops in the previous list in June 2019.”

HPE and Cray Unveil HPC and AI Solutions Optimized for the Exascale Era

Today HPE announced it will deliver the industry’s most comprehensive HPC and AI portfolio for the exascale era, which is characterized by explosive data growth and new converged workloads such as HPC, AI, and analytics. “The addition of Cray, Inc., which HPE recently acquired, bolsters HPE’s HPC and AI solutions to now encompass an end-to-end supercomputing architecture across compute, interconnect, software, storage and services, delivered on premises, hybrid or as-a-Service. Now every enterprise can leverage the same foundational HPC technologies that power the world’s fastest systems, and integrate them into their data centers to unlock insights and fuel new discovery.”

Intel Unveils New GPU Architecture and oneAPI Software Stack for HPC and AI

Today at SC19, Intel unveiled its new GPU architecture optimized for HPC and AI as well as an ambitious new software initiative called oneAPI that represents a paradigm shift from today’s single-architecture, single-vendor programming models. “HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep learning NNPs which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers unified and scalable abstraction for heterogeneous architectures.”
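
To make the programming-model shift concrete, below is a minimal, generic SYCL 2020 sketch of the data-parallel C++ style that oneAPI's DPC++ compiler builds on: one kernel source, dispatched at runtime to whichever CPU, GPU, or accelerator the queue selects. It is an illustrative example rather than Intel sample code, and assumes a SYCL-capable compiler such as DPC++ is available.

```cpp
// Minimal SYCL 2020 sketch: vector addition with unified shared memory.
// The point is that one data-parallel C++ source runs on whichever device
// the runtime selects. Illustrative only; requires a SYCL-capable compiler.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    constexpr std::size_t n = 1024;
    sycl::queue q;  // default selector picks an available device at runtime

    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    // Unified shared memory: visible to both host and device code.
    float* a = sycl::malloc_shared<float>(n, q);
    float* b = sycl::malloc_shared<float>(n, q);
    float* c = sycl::malloc_shared<float>(n, q);
    for (std::size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The same kernel body runs unmodified on any supported device.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
    return 0;
}
```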

Slidecast: Dell EMC Using Neural Networks to “Read Minds”

In this slidecast, Luke Wilson from Dell EMC describes a case study with McGill University using neural networks to read minds. “If you want to build a better neural network, there is no better model than the human brain. In this project, McGill University was running into bottlenecks using neural networks to reverse-map fMRI images. The team from the Dell EMC HPC & AI Innovation Lab was able to tune the code to run solely on Intel Xeon Scalable processors, rather than porting to the university’s scarce GPU accelerators.”
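
The tuning work itself is not published as code, but the underlying idea is simple: the hot path of a neural network is dense linear algebra, and a Xeon Scalable node has dozens of cores and wide vector units to throw at it. The hypothetical OpenMP sketch below shows that principle on a single fully connected layer; it is not the Innovation Lab's actual code, just an illustration of spreading per-neuron work across CPU threads instead of offloading it to a GPU.

```cpp
// Hypothetical, simplified illustration of CPU-side neural-network compute:
// one fully connected layer (matrix-vector product + ReLU) parallelized
// across the cores of a Xeon-class node with OpenMP.
#include <omp.h>
#include <algorithm>
#include <iostream>
#include <vector>

// Forward pass of one dense layer with ReLU activation.
std::vector<float> dense_relu(const std::vector<float>& weights,  // rows x cols, row-major
                              const std::vector<float>& input,    // length cols
                              int rows, int cols) {
    std::vector<float> out(rows);
    #pragma omp parallel for  // each thread computes a slice of the output neurons
    for (int r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (int c = 0; c < cols; ++c)
            acc += weights[static_cast<std::size_t>(r) * cols + c] * input[c];
        out[r] = std::max(acc, 0.0f);  // ReLU
    }
    return out;
}

int main() {
    const int rows = 4096, cols = 4096;
    std::vector<float> w(static_cast<std::size_t>(rows) * cols, 0.001f);
    std::vector<float> x(cols, 1.0f);

    std::cout << "OpenMP threads available: " << omp_get_max_threads() << "\n";
    std::vector<float> y = dense_relu(w, x, rows, cols);
    std::cout << "y[0] = " << y[0] << "\n";  // expect 0.001 * 4096 = 4.096
    return 0;
}
```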

Podcast: SC19 Student Cluster Competition Preview

In this podcast, the Radio Free HPC team catches up with Jessi Lanum, a veteran of the SC19 Student Cluster Competition, for an insider's peek at what it's like to compete for cluster competition glory. "For the few of you who are not already fans of these events, here's the lowdown: 16 student teams representing universities from around the world have been working their brains out designing, building, and tuning clusters provided by their sponsors. They can use as much hardware as they want; the only limitation is that their systems can't use more than 3,000 watts during the competition."