Zero to an ExaFLOP in Under a Second

In this sponsored post, Matthew Ziegler at Lenovo discusses today’s metric for raw compute speed. Much like racing cars, servers run “time trials” to gauge their performance on a given workload. There are more SPEC and web benchmarks out there for servers than there are racetracks and drag strips. Perhaps the most important measure is the raw calculating throughput a system delivers: FLOPS, or Floating-Point Operations Per Second.
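As a rough illustration of how that metric is estimated, theoretical peak FLOPS is commonly calculated as sockets × cores × clock speed × floating-point operations per cycle. The sketch below uses illustrative, assumed figures for a hypothetical node (not numbers from the article):

# Back-of-the-envelope estimate of theoretical peak FLOPS for a hypothetical node.
# All values below are illustrative assumptions, not figures from the article.
sockets = 2                  # CPU sockets per node
cores_per_socket = 64        # cores per CPU
clock_ghz = 2.5              # sustained clock speed in GHz
flops_per_cycle = 32         # e.g., two AVX-512 FMA units: 2 units * 2 ops * 8 doubles

peak_flops = sockets * cores_per_socket * clock_ghz * 1e9 * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.1f} TFLOPS per node")

# Nodes of this hypothetical type needed to reach one exaFLOP (1e18 FLOPS) of peak
nodes_for_exaflop = 1e18 / peak_flops
print(f"Nodes for 1 EFLOPS peak: {nodes_for_exaflop:,.0f}")

Sustained application performance is, of course, typically well below this theoretical peak, which is why benchmark “time trials” matter.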

Choosing the Best Data Flow Design for GPU Accelerated Applications

In this sponsored article from our friends over at Supermicro, we discuss how choosing the right GPU-accelerated computation hardware depends on many factors. One particularly important aspect is the data flow pattern across the PCIe bus and between GPUs and Intel® Xeon® Scalable processors. These factors, along with some application insights, are explored below.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 3

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, and NVMe have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics. Unfortunately, a single general-purpose, balanced system design cannot serve all of these applications. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Transform Your Business with the Next Generation of Accelerated Computing

In this white paper, you’ll find a compelling discussion regarding how Supermicro servers optimized for NVIDIA A100 GPUs are solving the world’s greatest HPC and AI challenges. As the expansion of HPC and AI poses mounting challenges to IT environments, Supermicro and NVIDIA are equipping organizations for success, with world-class solutions to empower business transformation. The Supermicro team is continually testing and validating advanced hardware featuring optimized software components to support a rising number of use cases.

Practical Hardware Design Strategies for Modern HPC Workloads – Part 2

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, and NVMe have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics. Unfortunately, a single general-purpose, balanced system design cannot serve all of these applications. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Practical Hardware Design Strategies for Modern HPC Workloads

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, and NVMe have opened up new application areas, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics. Unfortunately, a single general-purpose, balanced system design cannot serve all of these applications. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Lenovo Offers Optimal Storage Platform for Intel DAOS

In this sponsored post, our friends over at Lenovo and Intel highlight Lenovo’s work with Intel’s DAOS software. DAOS, or Distributed Asynchronous Object Storage, is a scale-out HPC storage stack that uses the object storage paradigm to bypass some of the limitations of traditional parallel file system architectures.

Why HPC and AI Workloads are Moving to the Cloud

This sponsored post from our friends over at Dell Technologies discusses a Hyperion Research study finding that approximately 20 percent of HPC workloads are now running in the public cloud. There are many good reasons for this trend.

Taking Virtualization to a Higher Level at the University of Pisa

In this sponsored post, our friends over at Dell Technologies highlight a compelling case study: the University of Pisa gains greater flexibility and value from its IT infrastructure with widespread virtualization of resources, including high performance computing systems.

Composable Supercomputing Optimizes Hardware for AI-driven Data Calculation

In this sponsored post, our friend John Spiers, Chief Strategy Officer at Liqid, discusses how composable disaggregated infrastructure (CDI) is emerging as an answer to the roadblocks facing high-performance computing. CDI orchestration software dynamically composes GPUs, NVMe SSDs, FPGAs, networking, and storage-class memory to create software-defined bare metal servers on demand. This enables far higher resource utilization and delivers previously unattainable performance for AI-driven data analytics.