Modern HPC and Big Data Design Strategies for Data Centers – Part 3


This insideHPC Special Research Report, “Modern HPC and Big Data Design Strategies for Data Centers,” provides an overview of what to consider when selecting an infrastructure capable of meeting new workload processing needs. Tyan has a wide range of bare-bones server and storage hardware solutions available for organizations and enterprise customers.

Data processing today involves analyzing massive amounts of data, whether onsite or in the cloud. The data processing, high performance computing (HPC), artificial intelligence (AI), and deep learning (DL) markets are converging, and systems capable of handling HPC workloads are now used by many businesses and organizations. In addition to workloads run in a data center, organizations may need to process and store data from accelerated workloads typically run on HPC systems. They therefore need an optimized infrastructure architecture that can meet a wide variety of processing needs.

IO-Heavy Computing Systems

IO-heavy computing requires systems that can efficiently read, write, and store large amounts of data on disk. The storage devices are usually either spinning disks or SSDs. A system becomes IO bound when the rate at which a process progresses is limited by the speed of the IO subsystem. Database applications are commonly IO bound: the application must wait for data to be fetched from disk before calculations can be performed.
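To make the IO-bound idea concrete, here is a minimal Python sketch (our own illustration, not from the report, and not a real benchmark; sizes and the compute step are arbitrary) that separates the time a process spends waiting on the IO subsystem from the time it spends computing:

```python
import os
import tempfile
import time

# Write a modest test file (size is illustrative only).
size_mb = 64
block = os.urandom(1024 * 1024)

with tempfile.NamedTemporaryFile(delete=False) as f:
    for _ in range(size_mb):
        f.write(block)
    path = f.name

# IO phase: read the file back in 1 MiB chunks and time it.
t0 = time.perf_counter()
total = 0
with open(path, "rb") as f:
    while chunk := f.read(1024 * 1024):
        total += len(chunk)
io_time = time.perf_counter() - t0

# Compute phase: deliberately light work over a sample of the data,
# mimicking an application that does little math per byte fetched.
t0 = time.perf_counter()
checksum = sum(block[::4096]) * size_mb
cpu_time = time.perf_counter() - t0

print(f"read {total // (1024 * 1024)} MiB: "
      f"IO {io_time:.4f}s vs compute {cpu_time:.6f}s")
os.remove(path)
```

When the IO time dominates the compute time, as it typically will here, the process is IO bound: a faster storage subsystem, not a faster CPU, is what shortens the run.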

One solution to IO-bound bottlenecks is the use of NVMe, which offers much higher input/output operations per second (IOPS) than earlier SATA storage devices, including SATA SSDs. NVMe is an open logical-device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus.

For example, PCIe 3.0 x4 refers to a Gen 3 expansion card or slot with a four-lane configuration. PCIe 4.0 x16 refers to a Gen 4 expansion card or slot with a 16-lane configuration. Each new PCI Express generation doubles the amount of bandwidth each slot configuration can support.
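The generation-and-lane arithmetic can be sketched as a small calculation (our own illustration; the per-lane figures are the commonly cited nominal rates after 128b/130b encoding overhead, and the function name is ours):

```python
# Approximate usable one-direction bandwidth per lane, in GB/s:
#   Gen 3: 8 GT/s  with 128b/130b encoding ~= 0.985 GB/s per lane
#   Gen 4: 16 GT/s with 128b/130b encoding ~= 1.969 GB/s per lane
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth in GB/s for a PCIe slot."""
    return PER_LANE_GBPS[gen] * lanes

print(f"PCIe 3.0 x4:  {pcie_bandwidth(3, 4):.2f} GB/s")   # ~= 3.94
print(f"PCIe 4.0 x16: {pcie_bandwidth(4, 16):.2f} GB/s")  # ~= 31.5
```

Note the doubling: a Gen 4 slot at a given lane count carries roughly twice the bandwidth of the same-width Gen 3 slot.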

Applications with heavy read/write storage requirements can benefit from NVMe storage, where storage is directly accessible to the processor and data does not have to traverse a network.

Big Data Systems

Processing and storing big data involves analyzing, systematically extracting information from, and storing data sets that are too large or complex for traditional data-processing application software. Typically, organizations use predictive analytics, user behavior analytics, or other advanced data analytics methods to extract value from the data before storing it.

Organizations require systems with high processing speed and bulk storage, such as spinning disks, to process and store this information. It is important to select a system designed for high-density bulk storage of large amounts of data.

Over the next few weeks, we’ll explore Tyan’s new Special Research Report.

Download the complete Modern HPC and Big Data Design Strategies for Data Centers courtesy of Tyan.