GigaIO and Microchip Power Native PCI Express Network Fabric for Composable Disaggregated Infrastructure

Carlsbad, CA – January 17, 2021 – GigaIO, maker of data center network architecture and connectivity solutions, has announced a collaboration with Microchip Technology Inc. to power GigaIO’s FabreX, a native PCI Express (PCIe) Gen 4 network fabric that supports GDR, MPI, TCP/IP, and NVMe-oF. FabreX technology revolutionizes rack-scale architectures, enabling software-defined, dynamically reconfigurable systems, […]

Microchip Announces PCI Express 5.0 and CXL™ 2.0 Retimers

Chandler, AZ, November 10, 2020 – Microchip Technology Inc. announced today the XpressConnect family of low-latency PCI Express (PCIe®) 5.0 and Compute Express Link™ (CXL™) 1.1/2.0 retimers. XpressConnect retimers triple the reach of PCIe Gen 5 electrical signals, which enables data center equipment providers to harness the next generational advancement in compute IO performance while […]

AI Workflow Scalability through Expansion

In this special guest feature, Tim Miller and Braden Cooper, Product Marketing Manager at One Stop Systems (OSS), suggest that for AI inferencing platforms, data must be processed in real time to make the split-second decisions required to maximize effectiveness. Without compromising the size of the data set, the best way to scale model training speed is to add modular data processing nodes.

Zero to an ExaFLOP in Under a Second

In this sponsored post, Matthew Ziegler at Lenovo discusses today’s metric for raw compute speed. Much like racing cars, servers run “time trials” to gauge their performance on a given workload. There are more SPEC and web benchmarks for servers than there are racetracks and drag strips. Perhaps the most important measure is the raw calculating throughput a system delivers: FLOPS, or floating-point operations per second.
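Since FLOPS is the headline metric here, a minimal sketch of how theoretical peak FLOPS is typically derived from a machine's specs may help (the core counts, clock speed, and FLOPs-per-cycle figures below are illustrative assumptions, not numbers from the post):

```python
# Hedged sketch: theoretical peak FLOPS from machine specifications.
# peak = nodes x sockets/node x cores/socket x clock (Hz) x FLOPs/cycle
# All hardware figures below are hypothetical examples.

def peak_flops(nodes, sockets_per_node, cores_per_socket, clock_ghz, flops_per_cycle):
    """Theoretical peak floating-point throughput, in FLOPS."""
    return nodes * sockets_per_node * cores_per_socket * clock_ghz * 1e9 * flops_per_cycle

# A hypothetical 2-socket server: 32-core CPUs at 2.5 GHz, each core
# retiring 32 double-precision FLOPs per cycle (e.g. two 512-bit FMA units).
server = peak_flops(1, 2, 32, 2.5, 32)
print(f"{server / 1e12:.2f} TFLOPS per server")  # 5.12 TFLOPS

# Servers of this class needed to reach one exaFLOP (1e18 FLOPS) of peak:
print(f"{1e18 / server:,.0f} servers")
```

Real sustained performance (what LINPACK and similar benchmarks measure) falls short of this theoretical peak, which is why the "time trials" Ziegler mentions remain necessary.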

Choosing the Best Data Flow Design for GPU Accelerated Applications

In this sponsored article from our friends over at Supermicro, we discuss how deciding on the correct type of GPU-accelerated computation hardware depends on many factors. One particularly important aspect is the data flow pattern across the PCIe bus and between GPUs and Intel® Xeon® Scalable processors. These factors, along with some application insights, are explored below.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 3

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, NVMe, and others have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, a single general-purpose, balanced system cannot serve all of these applications. To achieve the best price-performance in each of these verticals, attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 2

Practical Hardware Design Strategies for Modern HPC Workloads

Many new technologies used in High Performance Computing (HPC) have opened up new application areas. Advances like multi-core CPUs, GPUs, NVMe, and others have created application verticals that include accelerator-assisted HPC, GPU-based deep learning, fast storage and parallel file systems, and big data analytics systems. In this special insideHPC technology guide sponsored by our friends over at Tyan, we look at practical hardware design strategies for modern HPC workloads.

From Forty Days to Sixty-Five Minutes Without Blowing Your Budget, Thanks to GigaIO FabreX

In this sponsored post, Alan Benjamin, President and CEO of GigaIO, discusses how the ability to attach a group of resources to one server, run the job(s), and then reallocate the same resources to other servers is the obvious solution to a growing problem: the pace of change in AI and HPC applications keeps accelerating, driving the need for ever-faster GPUs and FPGAs to take advantage of new software updates and newly developed applications.