

Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends at WEKA suggest that when organizations decide to move existing or new applications to a GPU-accelerated system, there are many items to consider: assessing the new environment’s required components, running a pilot program to gauge the system’s future performance, and planning for eventual scaling to production levels.

The Graphcore Second Generation IPU

Our friends at Graphcore, the U.K.-based startup that launched the Intelligence Processing Unit (IPU) for AI acceleration in 2018, have released a new whitepaper introducing the IPU-Machine. This second-generation platform offers greater processing power, more memory, and built-in scalability for handling extremely large parallel processing workloads. The paper explores the new platform and assesses its strengths and weaknesses against a growing cadre of potential competitors.

Modern HPC and Big Data Design Strategies for Data Centers – Part 3

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan offers a wide range of bare-bones server and storage hardware solutions for organizations and enterprise customers.

Workload Portability Enabled by a Modern Storage Platform

In this sponsored post, Shailesh Manjrekar, Head of AI and Strategic Alliances at WekaIO, explores what is meant by “data portability” and why it matters. In a customer pipeline, the context could be a software-defined car, an IoT edge point, a drone, a smart home, a 5G tower, and so on. In essence, this describes an AI pipeline that runs across an edge, a core, and a cloud, giving the pipeline three high-level components.

Modern HPC and Big Data Design Strategies for Data Centers – Part 2

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan offers a wide range of bare-bones server and storage hardware solutions for organizations and enterprise customers.

Modern HPC and Big Data Design Strategies for Data Centers

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan offers a wide range of bare-bones server and storage hardware solutions for organizations and enterprise customers.

QPM Addresses Medical and Life Sciences Challenges

In this sponsored post from our friends at Quanta Cloud Technology (QCT), we review QCT Platform on Demand (QCT POD), a converged framework with a flexible infrastructure for customers running different workloads. Building on this concept, QCT developed QCT POD for Medical (QPM), an on-premises rack-level system built from common building blocks to provide greater flexibility and scalability, designed to meet varied medical workload demands using HPC and deep learning technologies, including Next Generation Sequencing (NGS), Molecular Dynamics (MD), and medical image recognition.

Massive Scalable Cloud Storage for Cloud Native Applications

In this comprehensive technology white paper, “Massive Scalable Cloud Storage for Cloud Native Applications,” written by Evaluator Group, Inc. on behalf of Red Hat, we delve into OpenShift, a key component of Red Hat’s portfolio of products designed for cloud native applications. OpenShift is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet enterprise demands. Red Hat uses Ceph open source storage technology to provide the data plane for the OpenShift environment.

Unleash the Future of Innovation with HPC & AI

This whitepaper, “Unleash the Future of Innovation with HPC & AI,” reviews how cutting-edge solutions from Supermicro and NVIDIA enable customers to transform and capitalize on HPC and AI innovation. Data is the driving force for success in the global marketplace, and data volumes are exploding in size and complexity as organizations collect, analyze, and derive intelligence from a growing number of sources and devices. These workloads are critical to powering applications that translate insight into business value.

insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads – Part 4

In this insideHPC technology guide, “insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads,” we’ll see that by relying on open source software and the latest high-performance, low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while remaining robust and easily manageable.