The Right Terminations for Reliable Liquid Cooling in HPC

High performance computing manufacturers are increasingly deploying liquid cooling. To avoid damage to electronic equipment due to leaks, secure drip-free connections are essential. Quick disconnects for HPC applications simplify connector selection. And with expensive electronics at stake, understanding the components in liquid cooling systems is critical. This article details what to look for when seeking the optimal termination for connectors—a way to help ensure leak-free performance.

New Class of Intel Xeon Scalable Processors Breaks Through Performance Bottlenecks

Unlocking the bigger-picture meaning from raw data volumes is no easy task. Unfortunately, that means that many important insights remain hidden within the untapped data which quietly floods data centers around the globe each day. Today’s advanced applications require faster and increasingly powerful hardware and storage technologies to make sense of the data deluge. Intel seeks to address this critical trend with a new class of future Intel® Xeon® Scalable processors, code-named Cascade Lake.

Parabricks and SkyScale Raise the Performance Bar for Genomic Analysis

“In the modern world of genomics, where analysis of tens of thousands of genomes is required for research, the cost per genome and the number of genomes processed per unit time are critical parameters. Parabricks’ adaptation of the GATK4 Best Practices workflows, running seamlessly on SkyScale’s Accelerated Cloud, provides unparalleled price and throughput efficiency to help unlock the power of the human genome.”

In-Network Computing Technology to Enable Data-Centric HPC and AI Platforms

Mellanox Technologies’ Gilad Shainer explores one of the biggest tech transitions over the past 20 years: the transition from CPU-centric data centers to data-centric data centers, and the role of in-network computing in this shift. “The latest technology transition is the result of a co-design approach, a collaborative effort to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. As the CPU-centric approach has reached the limits of performance and scalability, the data center architecture focus has shifted to the data, and how to bring compute to the data instead of moving data to the compute.”

HPC and AI Convergence Takes Center Stage for Intel at SC18

Intel has big plans for SC18 later this month, many of which focus on HPC and AI convergence and the intersections between these two sectors. “HPC is expanding beyond its traditional role of modeling and simulation to encompass visualization, analytics, and machine learning. Intel scientists and engineers will be available to discuss how to implement AI capabilities into your current HPC environments and demo how new, more powerful HPC platforms can be applied to meet your computational needs now and in the future.”

How to Control the AI Tsunami

“Clients tell us there is a wide range of users beyond data scientists who want to get in on the AI action as well, so we recently updated LiCO with new ‘Lenovo Accelerated AI’ training and inference templates. These templates allow users to simply bring their dataset into LiCO and request cluster resources to train models and run inference without coding.”

Choosing the Right Type of Cloud for HPC Lets Scientists Focus on Science

Naoki Shibata from XTREME-D writes that choosing the right type of cloud computing is key to increasing efficiency. “One challenge that HPC, DA, and DL end users face is to keep focused on their science and engineering and not get bogged down with system administration and platform details when ensuring that they have the clusters they need for their work. It has often been said that if scalable cluster computing can become more turnkey and user-friendly (and less costly), then the market will expand to many new areas.”

Making Storage Bigger on the Inside

This sponsored post from HPE delves into how tools like the HPE Data Management Framework (DMF) work to make HPC storage “bigger on the inside” and streamline data workflows. “DMF seamlessly moves data between tiers, whether they’re ‘hot’ tiers based on flash storage, ‘warm’ tiers based on hard drives, or ‘cold’ tiers based on tape.”

The Rising AI Tide in HPC – Are You Ready?

This guest article from Dr. Bhushan Desam, Lenovo’s Director of Global Artificial Intelligence Business, covers how new HPC tools like Lenovo’s LiCO (Lenovo Intelligent Computing Orchestration) are working to address the growing popularity of AI in HPC and to simplify the convergence of HPC and AI.

Video: The March to Exascale

As the trend toward exascale HPC systems continues, the complexity of optimizing the parallel applications that run on them increases as well. Performance limitations often arise at the application level, which relies on MPI for communication. While small-scale HPC systems are relatively forgiving of small MPI latencies, large systems running at scale are far more sensitive: small inefficiencies can snowball into significant lag.
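
As a rough illustration (a generic sketch in C, not taken from the video), the classic ping-pong microbenchmark below measures the per-message MPI latency that an application pays on every exchange. At small scale a few microseconds per message is easy to ignore; at exascale, millions of such exchanges across hundreds of thousands of ranks turn those microseconds into the lag described above.

/* Minimal MPI ping-pong latency sketch.
 * Run with two ranks, e.g.: mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 * Ranks 0 and 1 bounce a one-byte message back and forth and report
 * the average one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;   /* number of round trips to average over */
    char byte = 0;

    MPI_Barrier(MPI_COMM_WORLD);          /* start timing together */
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("average one-way latency: %.3f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}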