This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting today's new workload processing needs. Tyan offers a wide range of barebones server and storage hardware solutions for organizations and enterprise customers.
Data processing today involves analyzing massive amounts of data, whether onsite or in the cloud. The data processing market is converging with the high performance computing (HPC), artificial intelligence (AI), and deep learning (DL) markets. Systems capable of handling HPC workloads are now used by many businesses and organizations. In addition to workloads run in a data center, organizations may need to process and store data from accelerated workloads typically run on HPC systems. Organizations therefore need an optimized infrastructure architecture that can meet a wide variety of processing needs.
System Design Strategies for Data Center Traditional Servers
Traditionally, data centers relied on cluster systems built from off-the-shelf servers using x86 processors and high-speed networks. These systems were central processing unit (CPU)-based, with two or more processors, multiple memory channels, and high-speed links. The systems were designed to balance the number of processors (cores), the amount of memory, and the quality of the interconnect. A CPU processes a given compute task from start to finish. CPU-based systems are used for general-purpose workloads and excel at less parallel applications that need higher clock speeds.
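To illustrate the kind of work that favors a CPU, consider a calculation in which each step depends on the previous result. The sketch below is a hypothetical workload (not from the report): because of the serial dependency, the loop cannot be spread across many parallel threads, so single-thread performance and clock speed determine how quickly it finishes.

```python
# Minimal sketch of a serial, latency-sensitive workload (hypothetical example).
# Each iteration needs the result of the one before it, so the work cannot be
# split across thousands of parallel threads; a faster CPU core finishes sooner.

def iterate_logistic_map(x0: float, steps: int) -> float:
    """Run a step-by-step recurrence where every value depends on the previous one."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)  # logistic map update; inherently sequential
    return x

if __name__ == "__main__":
    print(iterate_logistic_map(0.5, 1_000_000))
```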
While CPU-based systems work well for general data center processing tasks, they often cannot keep up with the processing needs of HPC, big data, and DL. These applications are typically compute-bound: the amount of computation, rather than I/O or memory access, is the limiting factor in application progress.
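One rough way to see what “compute-bound” means in practice is to time a dense matrix multiplication and convert the elapsed time into floating point operations per second. The sketch below assumes NumPy is installed; the matrix size is an illustrative choice, not a figure from the report.

```python
# Rough illustration of a compute-bound kernel (assumes NumPy is installed).
# A dense n x n matrix multiply performs about 2 * n**3 floating point
# operations, so its runtime is dominated by arithmetic rather than I/O.
import time
import numpy as np

n = 2048                                  # illustrative problem size
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                                 # the compute-heavy step
elapsed = time.perf_counter() - start

flops = 2.0 * n ** 3                      # approximate FLOP count for matmul
print(f"{elapsed:.3f} s, ~{flops / elapsed / 1e9:.1f} GFLOP/s")
```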
“Legacy CPU-based clustered servers may not have expansion capabilities for large numbers of GPU accelerators or high performance NVMe storage,” states Maher.
Accelerated HPC and Deep Learning Computing
HPC workloads consist of simulations that require processing large numbers of floating point calculations to simulate or model complicated processes. HPC simulations were traditionally performed by government, research, and educational institutions. However, HPC is increasingly used by businesses and other organizations. For example, HPC simulations are performed for materials and molecular systems, weather forecasting and astronomy, fluid dynamics, financial markets, oil and gas, physics, bioscience, and many other fields.
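As a tiny, illustrative stand-in for the simulations mentioned above (not an example from the report), the sketch below steps a one-dimensional heat-diffusion problem forward in time with an explicit finite-difference scheme. The grid size, step count, and coefficients are made-up values; the point is that nearly all of the work is floating point arithmetic over arrays.

```python
# Toy simulation kernel: explicit finite-difference steps for 1-D heat diffusion.
# Illustrates how HPC simulations reduce to large volumes of floating point math.
# (Grid size, step count, and coefficients are illustrative assumptions.)
import numpy as np

nx, steps = 1_000, 5_000
alpha, dx, dt = 0.01, 1.0 / nx, 1e-5       # diffusivity, grid spacing, time step

u = np.zeros(nx)
u[nx // 2] = 1.0                           # initial heat spike in the middle

for _ in range(steps):
    # Update interior points from their neighbors (vectorized floating point work).
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(f"peak temperature after {steps} steps: {u.max():.4f}")
```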
GPU-based accelerators are popular in HPC and are increasingly used to meet simulation and processing performance needs. GPUs specialize in running massively parallel applications and have the advantage of applying a single instruction across large amounts of data at the same time. Maher states, “Applications which perform well on GPUs typically also scale well across multiple GPUs. The more you install, the higher performance your application can reach.”
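The hedged sketch below shows the data-parallel pattern described above, assuming PyTorch and a CUDA-capable GPU are available (the array size is an illustrative choice): a single arithmetic expression is applied to millions of elements at once on the accelerator.

```python
# Minimal sketch of GPU data parallelism (assumes PyTorch and a CUDA GPU).
# A single arithmetic expression is applied across millions of elements at once,
# the pattern that maps well onto a GPU's many parallel cores.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.rand(10_000_000, device=device)  # ten million elements

# One elementwise expression executed across all elements in parallel.
y = 2.0 * x + 1.0

print(f"ran on {device}, mean = {y.mean().item():.4f}")
```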
Deep Learning (DL) and deep neural network (DNN) processing enables enterprises and organizations to process and gain insights from their large volumes of data. DL algorithms perform a task repeatedly, gradually improving the outcome through deep layers that enable progressive learning. DL workloads can require thousands of hours of compute time, so multiple GPUs are often used to speed up training.
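As a hedged illustration of spreading DL work across several GPUs (assuming PyTorch; the model and batch sizes are invented for the example), the sketch below wraps a small network in torch.nn.DataParallel so that each forward pass is split across whatever GPUs are present.

```python
# Minimal sketch of multi-GPU deep learning (assumes PyTorch; model and batch
# sizes are illustrative). nn.DataParallel splits each input batch across the
# available GPUs and gathers the results, a simple way to use more than one GPU.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

if torch.cuda.is_available():
    model = model.to("cuda")
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)     # replicate model, split batches across GPUs

batch = torch.randn(256, 1024, device="cuda" if torch.cuda.is_available() else "cpu")
output = model(batch)                      # forward pass runs on all GPUs in parallel
print(output.shape)                        # torch.Size([256, 10])
```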
Over the next few weeks we’ll explore Tyan’s new Special Research Report:
- Executive Summary, Modern HPC Workloads
- System Design Strategies for Data Center Traditional Servers, Accelerated HPC and Deep Learning Computing
- IO-Heavy Computing Systems, Big Data Systems
- Introducing Tyan, Conclusion
Download the complete report, “Modern HPC and Big Data Design Strategies for Data Centers,” courtesy of Tyan.