Intelligent Fabrics for the Next Wave of AI Innovation

In this sponsored post, our friend John Spiers, Chief Strategy Officer at Liqid, discusses how resource utilization, and the soaring costs surrounding it, is a constant tug-of-war for IT departments. With the emergence of AI and machine learning, resource utilization is more front and center than it has ever been. Managing legacy hardware in a hyperconverged environment the way you always have is not going to cut it, because the people and hardware costs associated with these extremely heavy workloads are tremendous. Intelligent fabrics and composable infrastructure software deliver a solution: they give IT providers the ability to pool and deploy hardware resources to match the workload at hand, then redeploy as required, for a balanced system that can address the demands of AI and machine learning.

Video: GigaIO on Optimizing Compute Resources for ML, HPDA and other Advanced Workloads

In this interview, GigaIO CEO Alan Benjamin talks about systems performance problems and wasted compute resources when implementing ML, HPDA and other high-demand workloads that involve large data volumes. At issue, Benjamin explains, is today's rack architecture, which is decades old and unsuited for the combinations of CPUs, GPUs and other accelerators needed for advanced computing strategies. The answer: composable disaggregated infrastructure.

Composable Computing at SDSC

In this Q&A, SDSC Chief Data Science Officer Ilkay Altintas explains the rationale for composable systems and the approach taken with the new Expanse supercomputer. With Expanse, built by Dell Technologies, the San Diego Supercomputer Center (SDSC) is pioneering composable HPC systems that enable the dynamic allocation of resources tailored to individual workloads.

vScaler Integrates SLURM with GigaIO FabreX for Elastic HPC Cloud Device Scaling

Open source private HPC cloud specialist vScaler today announced the integration of the SLURM workload manager with GigaIO's FabreX for elastic scaling of PCIe devices and HPC disaggregation. FabreX, which GigaIO describes as the "first in-memory network," supports vScaler's private cloud appliances for workloads such as deep learning, biotechnology and big data analytics. vScaler's disaggregated […]
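With SLURM handling the scheduling, composable device scaling typically surfaces to users as ordinary generic-resource (GRES) requests, with the fabric layer attaching devices behind the scenes. The announcement does not cover integration details, so the following is only a hedged sketch of what a job submission might look like; the partition name and script name are hypothetical, not taken from the vScaler or GigaIO documentation:

```shell
#!/bin/bash
# Hypothetical SLURM batch script. The partition name ("composable") and
# training script are illustrative assumptions, not part of the announcement.
#SBATCH --job-name=dl-train
#SBATCH --partition=composable   # assumed partition backed by fabric-attached devices
#SBATCH --gres=gpu:4             # standard GRES request; the fabric composes GPUs to the node
#SBATCH --time=02:00:00

srun python train.py
```

The appeal of this model is that users keep the familiar `--gres` interface while the composable fabric, rather than fixed server configurations, determines which physical devices satisfy the request.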

Liqid Steps up with Composable Infrastructure at SC19

In this video from SC19, Sumit Puri from Liqid describes the company’s innovative composable infrastructure technology for HPC. “We don’t build servers statically. We build servers dynamically by taking software and reconfiguring servers on the fly to have any amount of storage, GPU, networking, or compute that the application layer requires. Our mission is to turn the data center from statically configured to dynamically configurable.”

Liqid Enables Multi-Fabric Support for Composable Infrastructure

Today Liqid announced unified multi-fabric support for composability across all major fabric types, including PCIe Gen 3, PCIe Gen 4, Ethernet, and InfiniBand, while laying the foundation for the upcoming Gen-Z specification. "Providing Ethernet and InfiniBand composability in addition to PCIe is a natural extension of our expertise in fabric management and aligns with our mission to facilitate data center disaggregation," said Sumit Puri, CEO and Co-founder, Liqid.

One Stop Systems Showcases Composable Infrastructure for GPU Workloads at ISC 2018

In this video from ISC 2018, Jaan Mannik from One Stop Systems describes the company's HPC systems and new composable infrastructure solutions. The company also showcased a wide array of its high-density NVIDIA GPU-based appliances, along with a live remote connection to one of its machine learning and HPC platforms. "OSS leads the market in external systems that increase a server's performance in HPC applications, reducing cost and impact on data center infrastructure. These technology-hungry applications include AI (artificial intelligence), deep learning, seismic exploration, financial modeling, media and entertainment, security and defense."

Video: Liqid Teams with Inspur at GTC for Composable Infrastructure

In this video from GTC 2018, Dolly Wu from Inspur and Marius Tudor from Liqid describe how the two companies are collaborating on composable infrastructure for AI and deep learning workloads. "AI and deep learning applications will determine the direction of next-generation infrastructure design, and we believe dynamically composing GPUs will be central to these emerging platforms," said Dolly Wu, GM and VP, Inspur Systems.

Rack Scale Composable Infrastructure for Mixed Workload Data Centers

A more flexible, application-centric data center architecture is required to meet the needs of rapidly changing HPC applications and hardware. In this guest post, Katie Rivera of One Stop Systems explores how rack-scale composable infrastructure can be utilized for mixed-workload data centers.

Liqid and Inspur team up for Composable GPU-Centric Rack-Scale Solutions

Today Liqid and Inspur announced that the two companies will offer a joint solution designed specifically for advanced, GPU-intensive applications and workflows. “Our goal is to work with the industry’s most innovative companies to build an adaptive data center infrastructure for the advancement of AI, scientific discovery, and next-generation GPU-centric workloads,” said Sumit Puri, CEO of Liqid. “Liqid is honored to be partnering with data center leaders Inspur Systems and NVIDIA to deliver the most advanced composable GPU platform on the market with Liqid’s fabric technology.”