
Accelerate Your Applications with ROCm

In this sponsored post from our friends over at AMD, we discuss how the ROCm platform is designed so that a wide range of developers can build accelerated applications. An entire ecosystem has been created, allowing developers to focus on developing their leading-edge applications.
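To give a concrete feel for what an accelerated application on ROCm can look like, below is a minimal, illustrative HIP vector-add sketch in C++. It is not taken from AMD's post; the kernel and variable names are our own, and error checking is omitted for brevity.

// vector_add.cpp -- minimal HIP example (illustrative sketch only)
// Build with the ROCm toolchain, e.g.: hipcc vector_add.cpp -o vector_add
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Each GPU thread adds one element of a[] and b[] into c[].
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    // Allocate device buffers and copy the inputs to the GPU.
    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch the kernel: one thread per element, 256 threads per block.
    const int block = 256;
    const int grid = (n + block - 1) / block;
    hipLaunchKernelGGL(vector_add, dim3(grid), dim3(block), 0, 0, da, db, dc, n);
    hipDeviceSynchronize();

    // Copy the result back and spot-check it.
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

Because HIP is a portability layer, the same source can also be compiled for other GPU targets, which is part of what lets developers focus on the application rather than the platform.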

Get Your HPC Cluster Productive Faster

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we see that by shortening the deployment process from weeks or longer to days and supplying pre-built software packages, organizations can become productive in a much shorter time. Resources can then be devoted to more valuable services that enable more research, rather than to bringing up an HPC cluster. By using the services that QCT offers, HPC systems can achieve a better Return on Investment (ROI).

insideHPC Special Report Optimize Your WRF Applications – Part 3

A popular application for simulating weather and regional climate is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT can work with leading research and commercial organizations to lower the Total Cost of Ownership by supplying highly tuned applications that are optimized to work on leading-edge infrastructure.

Accelerate Your Development of GPU Based Innovative Applications

In this sponsored post by our friends over at AMD, we take a deep dive into how GPUs have become an essential component for innovative organizations that require the highest-performing clusters, whether one server or thousands of servers. Many High-Performance Computing (HPC) and Machine Learning (ML) applications have demonstrated tremendous performance gains by using one or more GPUs in conjunction with powerful CPUs. Over 25% of the systems on the Top500 list of the most powerful supercomputers on the planet use accelerators, specifically GPUs, to achieve teraflop and petaflop speeds.

insideHPC Special Report Optimize Your WRF Applications – Part 2

A popular application for simulating weather and regional climate is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT can work with leading research and commercial organizations to lower the Total Cost of Ownership by supplying highly tuned applications that are optimized to work on leading-edge infrastructure.

Supermicro Contributes to the MN-3 Supercomputer Earning #1 on Green500 List

Supermicro and Preferred Networks (PFN) collaborated to develop the world’s most energy-efficient supercomputer, earning the #1 position on the Green500 list. This supercomputer, the MN-3, is composed of Intel® Xeon® CPUs and MN-Core™ boards developed by Preferred Networks. In this white paper, read more about this collaboration and how a record-setting supercomputer was developed.

The Race for a Unified Analytics Warehouse

This white paper from our friends over at Vertica, “The Race for a Unified Analytics Warehouse,” argues that the race for a unified analytics warehouse is on. The data warehouse has been around for almost three decades. Shortly after big data platforms were introduced in the late 2000s, there was talk that the data warehouse was dead, but it never went away. When big data platform vendors realized that the data warehouse was here to stay, they started building databases on top of their file systems and conceptualizing a data lake that would replace the data warehouse. It never did.

What Do You Mean “What’s My Workload?” I Have Hundreds of Them!

In this sponsored post, Curtis Anderson, Senior Software Architect at Panasas, Inc., takes a look at what Panasas is calling Dynamic Data Acceleration (DDA) and how it dramatically improves HPC performance in a mixed-workload environment. DDA is a new, proprietary software feature of the Panasas PanFS® parallel file system that utilizes a carefully architected combination of technologies to get the most out of all the storage devices in the subsystem.

The Hyperion-insideHPC Interviews: ORNL Distinguished Scientist Travis Humble on Coupling Classical and Quantum Computing

Oak Ridge National Lab’s Travis Humble has worked at the headwaters of quantum computing research for years. In this interview, he talks about his particular areas of interest, including integration of quantum computing with classical HPC systems. “We’ve already recognized that we can accelerate solving scientific applications using quantum computers,” he says. “These demonstrations are just early examples of how we expect quantum computers can take us to the most challenging problems for scientific discovery.”

Intelligent Fabrics for the Next Wave of AI Innovation

In this sponsored post, our friend John Spiers, Chief Strategy Officer at Liqid, discusses how resource utilization and the soaring costs surrounding it have long been a push-and-pull issue for IT departments. With the emergence of AI and machine learning, resource utilization is more front and center than it has ever been. Managing legacy hardware in a hyperconverged environment just as you always have is not going to cut it, because the people and hardware costs associated with these extremely heavy workloads are tremendous. Intelligent fabrics and composable infrastructure software solve this problem by allowing IT providers to pool and deploy their hardware resources to match the workload at hand, then redeploy them as required for a balanced system that can meet the demands of AI and machine learning.