Modern HPC and Big Data Design Strategies for Data Centers

This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting infrastructure capable of meeting new workload processing needs. Tyan offers a wide range of barebones server and storage hardware solutions for organizations and enterprise customers.

Executive Summary

Data processing today involves analyzing massive amounts of data that may be processed onsite or in the cloud, and the data processing market is converging with the high performance computing (HPC), artificial intelligence (AI), and deep learning (DL) markets. Systems capable of handling HPC workloads are now used by many businesses and organizations. In addition to workloads run in a data center, organizations may need to process and store data from accelerated workloads typically run on HPC systems. Meeting this wide variety of processing needs requires an optimized infrastructure architecture.

The Hyperion Research “SC20 HPC Market Results and New Forecasts” report, published in November 2020, forecasts global HPC server revenue of $11.9 billion for 2020. This is a decline from earlier predictions due to the COVID-19 pandemic. Hyperion CEO Earl C. Joseph notes, “Countering some of these losses, however, is some new HPC demand to create systems to combat the virus. We’re also seeing public cloud computing grow quite a bit.” Recent Hyperion data suggests that the average or traditional HPC user anticipates running about 20% of their HPC-enabled AI workloads in the cloud in the next year. The increase is due mainly to access to a variety of hardware and software geared toward AI applications, as well as access to data that is either stored or collected in the cloud.

HPC, DL, big data, and input/output (IO)-heavy computing workloads all require comprehensive hardware solutions to handle increased processing and storage needs. Advances in single- and multi-core processors, the graphics processing units (GPUs) used in DL, Non-Volatile Memory Express (NVMe), Double Data Rate (DDR) memory, and storage options have made HPC-class processing available to the enterprise market.

The following Tyan servers with state-of-the-art AMD EPYC™ processors are capable of meeting the most demanding processing needs of modern workloads.

  • Accelerated HPC Computation: Tyan’s Transport HX TN83-B8251 handles both HPC and deep learning applications and works well with other GPU-accelerated workloads.
  • IO-Heavy HPC Computing: Tyan’s Transport CX GC79A-B8252 server provides excellent support for organizations doing IO-heavy computing with a variety of memory-based computing applications.
  • Big Data (Database): The Tyan Transport CX TN73-B8037-X4S is an ideal solution for big data workloads; it is suited to high-density data center deployments targeting scale-out applications with large numbers of nodes.

Modern HPC Workloads

In the past, general-purpose servers were used in data centers to run routine, multitasked workloads. HPC, DL, IO-heavy HPC, and big data computing workloads all require specific hardware designs, and greater performance can be achieved by tailoring the server design to the features that matter most for each application. Modern HPC-related workloads fall into the categories below; a short illustrative sketch after the list contrasts compute-bound and IO-bound work.

  • Accelerated HPC Computation: Involves simulations and large-scale number-crunching calculations. This type of computing includes both traditional HPC and DL systems.
  • IO-Heavy HPC Computing: Requires systems that can read, write, and store large amounts of data on disk. This type of computing includes systems that provide fast NVMe implementations for local IO or as part of a parallel file system.
  • Big Data (Database) Computing: Requires systems that can analyze and extract information from large or complex data sets. This type of computing includes systems designed for high-density bulk storage of large amounts of data.
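
To make the first two categories concrete, the minimal Python sketch below times a compute-bound kernel against a sequential-IO pass on the same machine. This is our own illustrative example, not a benchmark from the report, and the matrix and file sizes are arbitrary assumptions; it simply shows why accelerated computation stresses arithmetic throughput while IO-heavy computing stresses storage bandwidth.

    # Illustrative only: contrasts a compute-bound kernel with an IO-bound
    # pass to show why the two workload classes stress different parts of
    # a server. Sizes are arbitrary assumptions, not figures from the report.
    import os
    import tempfile
    import time

    import numpy as np

    def compute_bound(n=2048, reps=5):
        """Dense matrix multiplies: limited by CPU/GPU arithmetic throughput."""
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)
        start = time.perf_counter()
        for _ in range(reps):
            a @ b
        return time.perf_counter() - start

    def io_bound(size_mb=256):
        """Sequential write plus read: limited largely by storage bandwidth."""
        data = os.urandom(size_mb * 1024 * 1024)
        start = time.perf_counter()
        with tempfile.NamedTemporaryFile(delete=False) as f:
            path = f.name
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write to the device, not just the page cache
        with open(path, "rb") as f:
            f.read()  # the read-back may be served from cache on a warm run
        elapsed = time.perf_counter() - start
        os.unlink(path)
        return elapsed

    if __name__ == "__main__":
        print(f"compute-bound pass: {compute_bound():.2f} s")
        print(f"io-bound pass:      {io_bound():.2f} s")

On a node with fast NVMe drives, the IO pass completes far more quickly than on spinning disks, which is precisely the gap the IO-heavy system designs discussed later in the report are meant to address.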

Over the next few weeks we’ll explore this new insideHPC Special Research Report:

  • Executive Summary, Modern HPC Workloads
  • System Design Strategies for Data Center Traditional Servers, Accelerated HPC and Deep Learning Computing
  • IO-Heavy Computing Systems, Big Data Systems
  • Introducing Tyan, Conclusion

Download the complete report, “Modern HPC and Big Data Design Strategies for Data Centers,” courtesy of Tyan.