

Agenda Posted for OpenFabrics Virtual Workshop

The OpenFabrics Alliance (OFA) has opened registration for its OFA Virtual Workshop, taking place June 8-12, 2020. "The OpenFabrics Alliance is committed to accelerating the development of high performance fabrics. This virtual event will provide fabric developers and users an opportunity to discuss emerging fabric technologies, collaborate on future industry requirements, and address today's challenges."

Interview: Fighting the Coronavirus with TACC Supercomputers

In this video from the Stanford HPC Conference, Dan Stanzione from the Texas Advanced Computing Center describes how their powerful supercomputers are helping to fight the coronavirus pandemic. “In times of global need like this, it’s important not only that we bring all of our resources to bear, but that we do so in the most innovative ways possible,” said TACC Executive Director Dan Stanzione. “We’ve pivoted many of our resources towards crucial research in the fight against COVID-19, but supporting the new AI methodologies in this project gives us the chance to use those resources even more effectively.”

TYAN Launches AI-Optimized Servers Powered by NVIDIA V100S GPUs

Today TYAN launched its latest GPU server platforms, which support the NVIDIA V100S Tensor Core and NVIDIA T4 GPUs for a wide variety of compute-intensive workloads including AI training, inference, and supercomputing applications. "AI is increasingly infusing into data centers. More organizations plan to invest in AI infrastructure that supports rapid business innovation," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "TYAN's GPU server platforms with NVIDIA V100S GPUs as the compute building block enable enterprises to power their AI infrastructure deployment and help to solve the most computationally intensive problems."

Asperitas and Shell to showcase new immersion cooling solutions at OCP Virtual Summit

Immersion cooling specialist Asperitas and Shell will launch Shell Immersion Cooling Fluid S5 X at the Open Compute Project (OCP) Virtual Summit, May 12-15. Asperitas has also added a new immersion cooling solution to its portfolio. It uses the same natural-convection-driven circulation concept as the solution introduced to the market in 2017, but with increased IT capacity, and is designed to address the demand for high-density, high-performance compute across markets including hyperscale cloud, enterprise, and telecom.

A Data-Centric Approach to Extreme-Scale Ab initio Dissipative Quantum Transport Simulations

Alexandros Ziogas from ETH Zurich gave this talk at Supercomputing Frontiers Europe. "The computational efficiency of a state-of-the-art ab initio quantum transport (QT) solver, capable of revealing the coupled electro-thermal properties of atomically-resolved nano-transistors, has been improved by up to two orders of magnitude through a data-centric reorganization of the application. The approach yields coarse- and fine-grained data-movement characteristics that can be used for performance and communication modeling, communication avoidance, and dataflow transformations."

Video: Fighting COVID-19 with HPE’s Sentinel supercomputer through the cloud

At HPE, we believe in being a force for good, so when the COVID-19 pandemic struck, HPE quickly made available supercomputing resources, along with a dedicated technical staff, free of charge to help scientists tackle complex research. That’s when we met Dr. Baudry and set him and his team up on HPE’s Sentinel supercomputer, which can perform 147 trillion floating point operations per second and store 830 terabytes of data. Sentinel – which is as fast as the earth’s entire population performing 20,000 calculations per second – is significantly accelerating discovery and saving months of research time and hundreds of thousands of dollars.
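The "earth's entire population" analogy is easy to sanity-check: 147 trillion floating point operations per second divided by 20,000 calculations per person per second implies roughly 7.35 billion people, which matches the world population around 2020. A quick sketch of that arithmetic (the population figure is an inferred assumption, not from the article):

```python
# Sanity-check the analogy: Sentinel's 147 teraflops vs. every person
# on Earth doing 20,000 calculations per second.
sentinel_flops = 147e12            # 147 trillion FLOP/s, from the article
calcs_per_person = 20_000          # per second, from the article
implied_population = sentinel_flops / calcs_per_person

# Assumed world population circa 2020 (~7.35 billion) — not in the article.
print(f"Implied population: {implied_population:.3g}")  # ~7.35e9 people
```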

Slidecast: The Sad State of Affairs in HPC Storage (But there is light at the end of the tunnel)

In this video, Robert Murphy from Panasas describes the current state of the HPC storage market and how Panasas is stepping up with high performance products that deliver economical performance without risk. “According to a recent study published by Hyperion Research, total cost of ownership (TCO) now rivals performance as a top criterion for purchasing HPC storage systems. Newly retooled with COTS hardware and a unique architecture, Panasas delivers surprising performance at a lower TCO than competitive solutions.”

NERSC Finalizes Contract for Perlmutter Supercomputer

NERSC has moved another step closer to making Perlmutter — its next-generation GPU-accelerated supercomputer — available to the science community in 2020. In mid-April, NERSC finalized its contract with Cray — which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 — for the new system, a Cray Shasta supercomputer that will feature 24 […]

The Incorporation of Machine Learning into Scientific Simulations at LLNL

Katie Lewis from Lawrence Livermore National Laboratory gave this talk at the Stanford HPC Conference. “Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness.”

Xilinx Establishes FPGA Adaptive Compute Clusters at Leading Universities

“We will build novel, experimental FPGA-centric compute systems and develop domain-specific compilers and system tools targeting high-performance computing. We will focus on several important application domains, including AI with deep learning, large-scale graph processing, and computational genomics.”