Modern HPC and Big Data Design Strategies for Data Centers – Part 3

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

Modern HPC and Big Data Design Strategies for Data Centers – Part 2

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

Modern HPC and Big Data Design Strategies for Data Centers

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

QPM Addresses Medical Life Sciences Challenges

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), learn how QCT introduces a concept called QCT Platform on Demand (QCT POD), a converged framework with a flexible infrastructure for customers running different workloads. Under this concept, QCT developed QCT POD for Medical (QPM), an on-premises rack-level system built from common building blocks and designed for greater flexibility and scalability. It is aimed at meeting different medical workload demands using HPC and DL technologies, including Next Generation Sequencing (NGS), Molecular Dynamics (MD), and Medical Image Recognition.

University of Stuttgart’s Hawk HPC System to Go CPU-GPU for Deep Learning Workloads

Add the High Performance Computing Center at the University of Stuttgart (HLRS) to the list of supercomputing organizations moving from CPU-only to CPU-GPU architectures. HLRS announced this morning that it will add Nvidia graphics processing units to its Hawk supercomputer, a Hewlett Packard Enterprise Apollo system installed last February. One of Europe’s most powerful HPC systems, […]

Deep Learning GPU Cluster

In this whitepaper, “Deep Learning GPU Cluster,” our friends over at Lambda walk you through the Lambda Echelon multi-node cluster reference design: a node design, a rack design, and an entire cluster-level architecture. This document is for technical decision-makers and engineers. You’ll learn about the Echelon’s compute, storage, networking, power distribution, and thermal design. This is not a cluster administration handbook; rather, it is a high-level technical overview of one possible system architecture.

Why Developers are Turning to Ultra-powerful Workstations for More Creative Freedom at Less Cost

This white paper from Dell Technologies, “Why Developers are Turning to Ultra-powerful Workstations for More Creative Freedom at Less Cost,” examines this trend in detail. Research shows that large and small companies alike are using powerful workstations with even more powerful graphics processing units (GPUs) as integral parts of their artificial intelligence infrastructure.

insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads

Not too long ago, building a converged HPC/AI environment – spanning two domains, High Performance Computing (HPC) and Artificial Intelligence (AI) – would require spending a lot of money on proprietary systems and software, with the hope that it would scale as business demands changed. As we’ll see in this insideHPC technology guide, by relying on open source software and the latest high-performance/low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while remaining robust and easily manageable.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 3

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, NVMe, and others have made new application areas possible. These application areas include accelerator-assisted HPC, GPU-based Deep Learning, and Big Data Analytics systems. Unfortunately, implementing a general-purpose balanced system solution is not possible for these applications. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.