Practical Hardware Design Strategies for Modern HPC Workloads – Part 2

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies such as multi-core processors, GPUs, and NVMe storage have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, no single general-purpose, balanced system design serves all of these applications well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads

Many new technologies used in High Performance Computing (HPC) have made new application areas possible. Advances such as multi-core processors, GPUs, and NVMe storage have created application verticals that include accelerator-assisted HPC, GPU-based deep learning, fast storage and parallel file systems, and big data analytics systems. In this special insideHPC technology guide sponsored by our friends over at Tyan, we look at practical hardware design strategies for modern HPC workloads.

From Forty Days to Sixty-five Minutes without Blowing Your Budget, Thanks to GigaIO FabreX

In this sponsored post, Alan Benjamin, President and CEO of GigaIO, discusses how the ability to attach a group of resources to one server, run the job(s), and then reallocate those same resources to other servers is the obvious solution to a growing problem: the rapid evolution of AI and HPC applications is accelerating, driving the need for ever-faster GPUs and FPGAs that can take advantage of new software updates and newly developed applications.

Video: PCI Express 6.0 Specification to Reach 64 GigaTransfers/sec

In this video, PCI-SIG President and Board Member Al Yanes shares an overview of the PCI Express 5.0 and 6.0 specifications. “With the PCIe 6.0 specification, PCI-SIG aims to answer the demands of such hot markets as Artificial Intelligence, Machine Learning, networking, communication systems, storage, High-Performance Computing, and more.”
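The headline 64 GT/s figure is a per-lane signaling rate; the rough per-direction throughput of a x16 link follows from the rate, the line-encoding efficiency, and the lane count. The sketch below is illustrative only: the `pcie_bandwidth_gbps` helper is not from the PCI-SIG specification, and the Gen 6 efficiency is approximated as 1.0, ignoring the spec-defined FLIT/FEC overhead.

```python
def pcie_bandwidth_gbps(gt_per_s, encoding_efficiency, lanes=16):
    """Approximate per-direction PCIe bandwidth in GB/s, ignoring protocol overhead."""
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency  # effective bits/s per lane
    return bits_per_s / 8 * lanes / 1e9                # bits -> bytes, scaled to GB/s

# (generation, GT/s per lane, line-encoding efficiency)
generations = [
    ("Gen 3", 8,  128 / 130),  # 128b/130b encoding
    ("Gen 4", 16, 128 / 130),
    ("Gen 5", 32, 128 / 130),
    ("Gen 6", 64, 1.0),        # PAM4 signaling; FLIT/FEC overhead ignored here
]

for name, rate, eff in generations:
    print(f"{name}: ~{pcie_bandwidth_gbps(rate, eff):.1f} GB/s per direction (x16)")
```

Running this reproduces the commonly cited figures of roughly 32 GB/s per direction for a Gen 4 x16 link and roughly 128 GB/s for Gen 6, which is why each generation's doubling of the transfer rate matters so much for GPU- and storage-heavy workloads.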

‘AI on the Fly’: Moving AI Compute and Storage to the Data Source

The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President Strategic Development at One Stop Systems (OSS), highlights a new approach — ‘AI on the Fly’ — where specialized high-performance accelerated computing resources for deep learning training move to the field near the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.

World’s First 7nm GPU and Fastest Double Precision PCIe Card

AMD recently announced two new Radeon Instinct compute products including the AMD Radeon Instinct MI60 and Radeon Instinct MI50 accelerators, which are the first GPUs in the world based on the advanced 7nm FinFET process technology. The company has made numerous improvements on these new products, including optimized deep learning operations. This guest post from AMD outlines the key features of its new Radeon Instinct compute product line.

One Stop Systems Steps Up GPU Servers for AI and World’s First PCIe Gen 4 Cable Adapter

In this video from SC18, Jaan Mannik from One Stop Systems describes how the company’s high-performance GPU systems power HPC and AI applications. At the show, the company also introduced the HIB616-x16, the world’s first PCIe Gen 4 cable adapter. “The OSS booth will also feature a partner pavilion where several OSS partners will be represented, including NVIDIA, SkyScale, Western Digital, Liqid, One Convergence, Intel and Lenovo. OSS and its partners will showcase new products, services and solutions for high-performance computing, including GPU and flash storage expansion, composable infrastructure solutions, the latest EOS server, cloud computing, and the company’s recently introduced Thunderbolt eGPU product.”

Implementing PCIe Gen 4 Expansion

After a long run for PCI Express (PCIe) Gen 3, Gen 4 is fast becoming the de facto standard for general-purpose I/O in modern computer systems. “The ability to run PCIe over cable at full performance with complete software transparency has opened up a range of new application possibilities over the past decade for CPU to I/O system re-partitioning, with expansion systems uniquely situated to take advantage of the new PCIe Gen 4 bandwidth soon available on servers.”

Liqid Steps Up with Composable Infrastructure for HPC at SC17

In this video, Jay Breakstone and Sumit Puri from Liqid describe the company’s innovative composable infrastructure technology for HPC. “Liqid Grid enables once-static infrastructure to scale on demand to effectively manage the explosion of data associated with cloud, enterprise, HPC and AI, as well as other emerging, high-value, data-intensive applications.”