Huawei OceanStor Pacific Scale-Out Storage Tops IO500 Rankings

The release of the latest IO500 list revealed that the Cheeloo-1 system led the 10-node rankings, thanks to the system’s use of Huawei OceanStor Pacific scale-out storage. The IO500 evaluates and ranks storage systems on bandwidth (GiB/s) and metadata performance (kIOP/s), making it one of the most influential benchmarks in high-performance computing (HPC) storage. The 10-node ranking fixes the client side at ten compute nodes and measures storage performance by simulating everyday application workloads, which makes the results especially relevant to HPC customers.
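For readers unfamiliar with how the list is scored, the short sketch below illustrates the general idea of rolling bandwidth and metadata results into a single figure via geometric means. The phase values are placeholders, not actual Cheeloo-1 or IO500 results, and the exact set of benchmark phases is defined by the IO500 rules.

```python
from math import prod

def geometric_mean(values):
    """Geometric mean of a list of positive numbers."""
    return prod(values) ** (1.0 / len(values))

# Placeholder phase results -- illustrative only, not real submission data.
bandwidth_gib_s = [35.2, 12.8, 48.1, 9.7]      # e.g. ior easy/hard read/write phases
metadata_kiops = [410.0, 95.5, 620.3, 150.2]   # e.g. mdtest phases

bw_score = geometric_mean(bandwidth_gib_s)     # GiB/s
md_score = geometric_mean(metadata_kiops)      # kIOP/s

# The overall score combines the two sub-scores geometrically.
total_score = (bw_score * md_score) ** 0.5
print(f"BW={bw_score:.2f} GiB/s  MD={md_score:.2f} kIOP/s  SCORE={total_score:.2f}")
```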
Supermicro Delivers World Record Performance

Supermicro’s latest range of H12 Generation A+ Systems and Building Block Solutions®, optimized for AMD EPYC™ processors, offers new levels of application-optimized performance per watt and per dollar, delivering outstanding core density, superior memory bandwidth, and unparalleled I/O capacity.
Gelsinger Speaks: Intel’s New CEO Debuts Today – What Will He Say?

Speculation abounds about Pat Gelsinger’s first public appearance as CEO of Intel at a webinar (5 pm Eastern Time) today that will capture close attention from a host of the company’s core audiences: customers, business partners, employees, industry and financial analysts – and the HPC community. The webinar, confidently called “Intel Unleashed: Engineering the Future,” […]
Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-accelerated system, there are many items to consider: assessing the new environment’s required components, implementing a pilot program to learn how the system will perform, and planning for eventual scaling to production levels.
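As a concrete starting point for the assessment and pilot phases, a minimal inventory check such as the sketch below (my own illustration, not taken from the WEKA guide, and assuming a Python environment with PyTorch installed) can confirm what GPUs a pilot node actually exposes before any application work begins.

```python
import torch

def gpu_inventory():
    """Print a quick inventory of the GPUs visible to this node."""
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected; pilot workloads would fall back to CPU.")
        return
    count = torch.cuda.device_count()
    print(f"{count} GPU(s) visible:")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gib = props.total_memory / (1024 ** 3)
        print(f"  [{i}] {props.name}, {mem_gib:.1f} GiB memory")

if __name__ == "__main__":
    gpu_inventory()
```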
The Graphcore Second Generation IPU

Our friends over at Graphcore, the U.K.-based startup that launched the Intelligence Processing Unit (IPU) for AI acceleration in 2018, have released a new whitepaper introducing the IPU-Machine. This second-generation platform offers greater processing power, more memory, and built-in scalability for handling extremely large parallel processing workloads. The paper explores the new platform and assesses its strengths and weaknesses against a growing cadre of potential competitors.
Modern HPC and Big Data Design Strategies for Data Centers – Part 3

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting infrastructure capable of meeting these new workload processing needs. Tyan offers a wide range of barebones server and storage hardware solutions for organizations and enterprise customers.
Workload Portability Enabled by a Modern Storage Platform

In this sponsored post, Shailesh Manjrekar, Head of AI and Strategic Alliances, WekaIO, explores what is meant by “data portability” and why it matters. In a customer pipeline, the customer context could be a software-defined car, an IoT edge point, a drone, a smart home, a 5G tower, and so on. In essence, this is an AI pipeline that runs across the edge, the core, and the cloud, so the pipeline has three high-level components.
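To make the three-component structure concrete, here is a minimal, hypothetical sketch (the stage names and functions are illustrative, not WekaIO APIs) of an AI pipeline whose stages are tagged with the tier where they run:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Tier(Enum):
    EDGE = "edge"    # e.g. a car, drone, 5G tower, or smart-home device
    CORE = "core"    # on-premises data center
    CLOUD = "cloud"  # public cloud

@dataclass
class Stage:
    name: str
    tier: Tier
    run: Callable[[object], object]

# Illustrative stages: ingest at the edge, train at the core, serve from the cloud.
pipeline: List[Stage] = [
    Stage("ingest-sensor-data", Tier.EDGE, lambda data: data),
    Stage("train-model", Tier.CORE, lambda data: f"model({data})"),
    Stage("serve-inference", Tier.CLOUD, lambda model: f"predictions from {model}"),
]

payload = "raw-telemetry"
for stage in pipeline:
    payload = stage.run(payload)
    print(f"{stage.tier.value:>5}: {stage.name} -> {payload}")
```

The point of the sketch is simply that data and models must move cleanly between the three tiers, which is where a portable storage platform comes in.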
Massive Scalable Cloud Storage for Cloud Native Applications

In this comprehensive technology white paper, “Massive Scalable Cloud Storage for Cloud Native Applications,” written by Evaluator Group, Inc. on behalf of Red Hat, we delve into OpenShift, a key component of Red Hat’s portfolio of products designed for cloud native applications. OpenShift is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet the demands of enterprises. Red Hat uses Ceph open source storage technology to provide the data plane for the OpenShift environment.
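As an illustration of how an application on OpenShift consumes cluster storage, the sketch below uses the standard Kubernetes Python client to request a persistent volume claim. This is a generic example, not taken from the white paper, and the storage class name is an assumption; the Ceph-backed class actually available depends on how the cluster’s storage is configured.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (oc/kubectl login context).
config.load_kube_config()

# NOTE: "ceph-rbd" is a placeholder; substitute the Ceph-backed storage class
# defined on your OpenShift cluster.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-ceph-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ceph-rbd",  # assumed name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core_v1 = client.CoreV1Api()
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PersistentVolumeClaim 'demo-ceph-claim' submitted.")
```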
Overcoming the Complexities of New Applications & Technologies in the New Era of HPC

In this contributed article, Bill Wagner, CEO of Bright Computing, discusses how, as more organizations take the leap into HPC, Bright Computing aims to be the company that helps solve the challenge of complexity within the industry and replace it with flexibility, ease of use, and accelerated time to value.
insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads

In this insideHPC technology guide, “insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads,” we’ll see that by relying on open source software and the latest high-performance, low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while remaining robust and easily manageable.