Zero to an ExaFLOP in Under a Second

In this sponsored post, Matthew Ziegler of Lenovo discusses today’s metric for raw compute speed. Much like race cars, servers run “time trials” to gauge their performance on a given workload. There are more SPEC and web benchmarks out there for servers than there are racetracks and drag strips. Perhaps the most important measure is the raw calculating throughput a system delivers: FLOPS, or floating-point operations per second.
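As a back-of-the-envelope illustration (not from the post), a system’s theoretical peak FLOPS can be estimated from its core counts, clock rate, and per-core floating-point throughput. All figures in the sketch below are assumptions for illustration, not measurements of any particular machine:

```python
# Rough estimate of theoretical peak FLOPS:
#   peak = nodes x sockets x cores x clock (Hz) x FLOPs per core per cycle
# All inputs below are illustrative assumptions, not measured numbers.

def peak_flops(nodes, sockets, cores_per_socket, clock_ghz, flops_per_cycle):
    return nodes * sockets * cores_per_socket * clock_ghz * 1e9 * flops_per_cycle

# Example: 1,000 dual-socket nodes, 32 cores per socket, 2.5 GHz,
# 32 double-precision FLOPs per core per cycle (e.g. two 512-bit FMA units).
total = peak_flops(nodes=1000, sockets=2, cores_per_socket=32,
                   clock_ghz=2.5, flops_per_cycle=32)
print(f"Theoretical peak: {total / 1e15:.2f} PFLOPS")  # ~5.12 PFLOPS
```

Sustained performance on real workloads is, of course, well below this theoretical peak, which is exactly why the “time trial” benchmarks mentioned above matter.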

Choosing the Best Data Flow Design for GPU Accelerated Applications

In this sponsored article from our friends over at Supermicro, we discuss how choosing the right GPU-accelerated computing hardware depends on many factors. One particularly important aspect is the data-flow pattern across the PCIe bus and between GPUs and Intel® Xeon® Scalable processors. These factors, along with some application insights, are explored below.
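To make that data-flow distinction concrete, here is a minimal sketch (assuming a CUDA-capable system with PyTorch installed; the payload size and iteration count are arbitrary choices) that times host-to-GPU copies over PCIe against direct GPU-to-GPU copies:

```python
# Minimal sketch contrasting two data-flow patterns:
#   1) host -> GPU copies over the PCIe bus
#   2) direct GPU -> GPU copies (peer-to-peer where the platform supports it)
import time
import torch

def bandwidth_gb_s(copy_fn, nbytes, iters=20):
    copy_fn()                              # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        copy_fn()
    torch.cuda.synchronize()
    return nbytes * iters / (time.perf_counter() - start) / 1e9

N = 256 * 1024 * 1024                      # 256 MiB payload
host = torch.empty(N, dtype=torch.uint8, pin_memory=True)
gpu0 = torch.empty(N, dtype=torch.uint8, device="cuda:0")

print("Host -> GPU0 :", bandwidth_gb_s(
    lambda: gpu0.copy_(host, non_blocking=True), N), "GB/s")

if torch.cuda.device_count() > 1:
    gpu1 = torch.empty(N, dtype=torch.uint8, device="cuda:1")
    print("GPU0 -> GPU1 :", bandwidth_gb_s(
        lambda: gpu1.copy_(gpu0, non_blocking=True), N), "GB/s")
```

Measurements like these are one simple way to see which path, host memory or GPU-to-GPU, dominates a given application’s data flow before committing to a system topology.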

Practical Hardware Design Strategies for Modern HPC Workloads – Part 3

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core processors, GPUs, NVMe storage, and others have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, no single general-purpose, balanced system design serves all of these applications well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 2

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core processors, GPUs, NVMe storage, and others have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, no single general-purpose, balanced system design serves all of these applications well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core processors, GPUs, NVMe storage, and others have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, no single general-purpose, balanced system design serves all of these applications well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads

Many new technologies used in High Performance Computing (HPC) have made new application areas possible. Advances like multi-core processors, GPUs, NVMe storage, and others have created application verticals that include accelerator-assisted HPC, GPU-based deep learning, fast storage and parallel file systems, and big data analytics systems. In this special insideHPC technology guide sponsored by our friends over at Tyan, we look at practical hardware design strategies for modern HPC workloads.

From Forty Days to Sixty-Five Minutes without Blowing Your Budget, Thanks to GigaIO FabreX

In this sponsored post, Alan Benjamin, President and CEO of GigaIO, discusses how the ability to attach a group of resources to one server, run the job(s), and then reallocate the same resources to other servers is the obvious solution to a growing problem: AI and HPC applications are changing at an incredible and accelerating rate, driving the need for ever-faster GPUs and FPGAs to take advantage of new software updates and new applications as they are developed.
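To illustrate the compose-and-release idea in the abstract, here is a purely hypothetical sketch; the fabric_client module and every function on it are invented placeholders for illustration, not GigaIO’s or FabreX’s actual API:

```python
# Purely hypothetical sketch of a composable-infrastructure workflow:
# attach accelerators to a server, run the job, then release them for reuse.
# 'fabric_client' and all of its functions are invented placeholders.
import fabric_client as fabric  # hypothetical module

def run_with_composed_gpus(server, job_cmd, gpu_count):
    gpus = fabric.allocate(resource_type="gpu", count=gpu_count)  # hypothetical call
    fabric.attach(gpus, to=server)                                # hypothetical call
    try:
        server.run(job_cmd)        # the job sees the GPUs as local PCIe devices
    finally:
        fabric.detach(gpus, server)    # return the GPUs to the shared pool
        fabric.release(gpus)

# The same physical GPUs can then be attached to a different server for the
# next job, rather than sitting idle inside a single box.
```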

Video: PCI Express 6.0 Specification to Reach 64 GigaTransfers/sec

In this video, PCI-SIG President and Board Member Al Yanes shares an overview of the PCI Express 5.0 and 6.0 specifications. “With the PCIe 6.0 specification, PCI-SIG aims to answer the demands of such hot markets as Artificial Intelligence, Machine Learning, networking, communication systems, storage, High-Performance Computing, and more.”
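As a quick illustration of what those transfer rates mean (lane counts assumed below; encoding and protocol overhead are ignored, so real throughput is lower), raw per-direction link bandwidth scales with the transfer rate and the number of lanes:

```python
# Back-of-the-envelope PCIe link bandwidth from transfer rate and lane count.
# Raw figures only: encoding and protocol overhead are ignored.

def raw_bandwidth_gb_s(gigatransfers_per_s, lanes):
    # Each transfer moves one bit per lane; divide by 8 to convert to bytes.
    return gigatransfers_per_s * lanes / 8

for gen, gt_s in [("PCIe 4.0", 16), ("PCIe 5.0", 32), ("PCIe 6.0", 64)]:
    print(f"{gen}: x16 raw ~{raw_bandwidth_gb_s(gt_s, 16):.0f} GB/s per direction")
# PCIe 6.0 at 64 GT/s works out to roughly 128 GB/s per direction on an x16 link.
```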

‘AI on the Fly’: Moving AI Compute and Storage to the Data Source

The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President of Strategic Development at One Stop Systems (OSS), highlights a new approach, ‘AI on the Fly’, in which specialized high-performance accelerated computing resources for deep learning training move into the field, close to the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.

World’s First 7nm GPU and Fastest Double Precision PCIe Card

AMD recently announced two new Radeon Instinct compute products, the Radeon Instinct MI60 and Radeon Instinct MI50 accelerators, which are the first GPUs in the world based on advanced 7nm FinFET process technology. The company has made numerous improvements in these new products, including optimized deep learning operations. This guest post from AMD outlines the key features of its new Radeon Instinct compute product line.