

Accelerate Your Applications with ROCm

In this sponsored post, our friends over at AMD discuss how the ROCm platform is designed so that a wide range of developers can build accelerated applications. An entire ecosystem has been created around the platform, allowing developers to focus on developing their leading-edge applications.
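To make the idea concrete, here is a minimal sketch of the kind of accelerated code ROCm targets, written against the HIP runtime that ships with ROCm. The kernel, array size, and block size are arbitrary choices for this illustration, not code from the AMD post.

```cpp
// Minimal HIP vector-add sketch (illustrative only).
// Build with ROCm's compiler driver: hipcc vecadd.cpp -o vecadd
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));

    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    hipLaunchKernelGGL(vecAdd, dim3((n + 255) / 256), dim3(256), 0, 0,
                       da, db, dc, n);
    hipDeviceSynchronize();

    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

Because HIP mirrors the familiar CUDA programming model, code like this can be compiled for AMD GPUs with hipcc, which is part of what lets such a wide range of developers move onto ROCm.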

Get Your HPC Cluster Productive Faster

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we see that by simplifying the deployment process from weeks or longer to days, and by preparing pre-built software packages, organizations can become productive in a much shorter time. Resources can then go toward more valuable services that enable more research, rather than toward bringing up an HPC cluster. By using the services that QCT offers, HPC systems can achieve a better return on investment (ROI).

Fujitsu to Ship 649 TFLOPS System with Fugaku HPC Technology to Canon for ‘No-prototype’ Product Development

Technology from the world’s No. 1 supercomputer, Fugaku, located at the RIKEN Center for Computational Science in Japan, is making its way into the commercial sphere. Fujitsu Ltd. today announced that Canon, Inc., has ordered a Fujitsu PRIMEHPC FX1000 unit, expected to achieve theoretical computational performance of 648.8 teraflops (TFLOPS). Intended to support Canon’s “no-prototype” […]

insideHPC Special Report Optimize Your WRF Applications – Part 3

A popular application for simulating weather and climate is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT can work with leading research and commercial organizations to lower their total cost of ownership (TCO) by supplying highly tuned applications that are optimized for leading-edge infrastructure.

Accelerate Your Development of GPU Based Innovative Applications

In this sponsored post by our friends over at AMD, we take a deep dive into how GPUs have become an essential component for innovative organizations that require the highest-performing clusters, whether one server or thousands of servers. Many High-Performance Computing (HPC) and Machine Learning (ML) applications have demonstrated tremendous performance gains by using one or more GPUs in conjunction with powerful CPUs. Over 25% of the systems on the Top500 list of the most powerful supercomputers on the planet use accelerators, specifically GPUs, to achieve teraflop and petaflop speeds.
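As a small, hedged illustration of that CPU-plus-GPU pairing, the HIP sketch below simply enumerates the accelerators a single server exposes before the host CPU dispatches work to them; it is a generic example, not code from the AMD post.

```cpp
// List the GPUs visible to the HIP runtime on one server (illustrative only).
// Build with: hipcc gpus.cpp -o gpus
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        printf("No HIP-capable GPUs found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, dev);
        // Each of these devices can be paired with host CPU threads,
        // e.g. via hipSetDevice(dev), to spread work across the node.
        printf("GPU %d: %s, %.1f GiB\n", dev, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```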

insideHPC Special Report Optimize Your WRF Applications – Part 2

A popular application for simulating weather and climate is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT can work with leading research and commercial organizations to lower their total cost of ownership (TCO) by supplying highly tuned applications that are optimized for leading-edge infrastructure.

Supermicro Contributes to the MN-3 Supercomputer Earning #1 on Green500 List

Supermicro and Preferred Networks (PFN) collaborated to develop the most energy-efficient supercomputer in the world, earning the #1 position on the Green500 list. This supercomputer, the MN-3, combines Intel® Xeon® CPUs with MN-Core™ boards developed by Preferred Networks. In this white paper, read more about this collaboration and how a record-setting supercomputer was developed.

The Race for a Unified Analytics Warehouse

This white paper from our friends over at Vertica discusses how the race for a unified analytics warehouse is on. The data warehouse has been around for almost three decades. Shortly after big data platforms were introduced in the late 2000s, there was talk that the data warehouse was dead, but it never went away. When big data platform vendors realized that the data warehouse was here to stay, they started building databases on top of their file systems and conceptualizing a data lake that would replace the data warehouse. It never did.

What Do You Mean “What’s My Workload?” I Have Hundreds of Them!

In this sponsored post, Curtis Anderson, Senior Software Architect at Panasas, Inc., takes a look at what Panasas is calling Dynamic Data Acceleration (DDA) and how it dramatically improves HPC performance in mixed-workload environments. DDA is a new, proprietary software feature of the Panasas PanFS® parallel file system that uses a carefully architected combination of technologies to get the most out of all the storage devices in the subsystem.

Report: Nvidia on Verge of Arm Acquisition

The Wall Street Journal reported today that Nvidia is close to purchasing British chip designer Arm Holdings from SoftBank Group for more than $40 billion in a cash-and-stock deal, one that has been rumored for several weeks. Citing unnamed sources, the Journal story stated that “a deal could be sealed early next week, the people […]