

The Graphcore Second Generation IPU

Our friends over at Graphcore, the U.K.-based startup that launched the Intelligence Processing Unit (IPU) for AI acceleration in 2018, have released a new whitepaper introducing the IPU-Machine. This second-generation platform offers greater processing power, more memory, and built-in scalability for handling extremely large parallel processing workloads. The paper explores the new platform and assesses its strengths and weaknesses compared to a growing cadre of potential competitors.

Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-accelerated system, there are many items to consider, such as assessing the new environment’s required components, implementing a pilot program to learn about the system’s future performance, and planning for eventual scaling to production levels.

Modern HPC and Big Data Design Strategies for Data Centers

This insideHPC Special Research Report provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan offers a wide range of bare-bones server and storage hardware solutions for organizations and enterprise customers.

Unleash the Future of Innovation with HPC & AI

This whitepaper reviews how cutting-edge solutions from Supermicro and NVIDIA are enabling customers to transform and capitalize on HPC and AI innovation. Data is the driving force for success in the global marketplace. Data volumes are exploding in size and complexity as organizations work to collect, analyze, and derive intelligence from a growing number of sources and devices. These workloads are critical to powering applications that translate insight into business value.

Deep Learning GPU Cluster

In this whitepaper, our friends over at Lambda walk you through the Lambda Echelon multi-node cluster reference design: a node design, a rack design, and an entire cluster-level architecture. This document is for technical decision-makers and engineers. You’ll learn about the Echelon’s compute, storage, networking, power distribution, and thermal design. This is not a cluster administration handbook; it is a high-level technical overview of one possible system architecture.

Massive Scalable Cloud Storage for Cloud Native Applications

In this comprehensive technology white paper, written by Evaluator Group, Inc. on behalf of Lenovo, we delve into OpenShift, a key component of Red Hat’s portfolio of products designed for cloud native applications. It is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet the demands of enterprises. Red Hat uses the open source Ceph storage technology to provide a data plane for its OpenShift environment.

insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads

Not too long ago, building a converged HPC/AI environment – spanning two domains, High Performance Computing (HPC) and Artificial Intelligence (AI) – required spending heavily on proprietary systems and software in the hope that they would scale as business demands changed. As we’ll see in this insideHPC technology guide, by relying on open source software and the latest high-performance, low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while remaining robust and easily manageable.

Using Workstations To Reshape Your Artificial Intelligence Infrastructure

The study results summarized in this white paper show that firms are already using workstations to lower the cost, increase the security, and improve the speed of their AI infrastructure. Adding workstations to a firm’s AI workflow allows servers and cloud platforms to be tasked with business cases that require more robust computing, while workstations take on tasks with longer time frames and smaller budgets.

A Checklist For Artificial Intelligence On Workstations

Firms of all sizes are leveraging workstations as part of their artificial intelligence (AI) workflows. In the past, many firms relied on highly scaled servers in data centers or private/public cloud infrastructure to run their AI applications. However, the results of a recent survey, commissioned by Dell, executed by Forrester, and summarized in this white paper, indicate that a quarter of firms are actually using workstations today to run core AI business applications and are experiencing the benefits that workstations can offer.

insideHPC Guide to HPC/AI for Energy

In this technology guide, we take a deep dive into how the team of Dell Technologies and AMD is working to provide solutions for a wide array of needs for more strategic cultivation of oil and gas energy reserves. We’ll start with a series of compelling use-case examples, then introduce a number of important pain points solved with HPC and AI. We’ll continue with some specific solutions for the energy industry from Dell and AMD. Then we’ll take a look at a case study examining how geophysical services and equipment company CGG successfully deployed HPC technology for competitive advantage. Finally, we’ll leave you with a shortlist of valuable resources available from Dell to help guide you along the path to HPC and AI.