
Successfully Deploy Composable Infrastructure on the Edge to Improve HPC and AI Outside of Traditional Data Centers

Enterprises are generating more data than ever and need applications that can reliably process large volumes of sensitive information. To accomplish this, organizations require a modernized IT infrastructure, one capable of handling high-performance computing (HPC) and artificial intelligence (AI) workloads. These workloads are even more challenging to implement if you rely on edge computing to meet organizational goals.

Harsh operating environments exist in every industry. Aerospace and defense, transportation, and manufacturing, for example, often impose strict requirements on data infrastructure footprint, environmental safety, power, and other criteria for ruggedized equipment. Meeting these requirements can carry a tremendous, ongoing cost.

From a financial perspective, investing in a more robust edge network certainly saves money on data center bandwidth, but the approach carries hefty expenses of its own. A persistent problem has been making the edge a multi-tenant environment in which custom logic can execute quickly and reliably. In short, the evolution of edge computing has been difficult and costly.

Benefits of Composable Infrastructure on the Edge

Until recently, moving outside of traditional data centers has meant relying on costly, inflexible, low-performance equipment that, as a result, impedes ROI. For this reason, many organizations today are choosing composable disaggregated infrastructure (CDI) on the edge to solve this and several other challenges related to edge computing. CDI in edge data centers also supports constantly changing workload requirements as edge devices and use cases evolve over time.

At Silicon Mechanics, we believe CDI provides a great option for those looking to achieve HPC and AI on the edge. At its core, CDI refers to the use of software and low-latency fabrics to pool hardware resources so they can be dynamically combined to meet shifting workload needs. CDI software is also completely re-configurable on the fly. It gives your organization the ability to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. Flexible system configuration and bare-metal performance within physically limited environments are both possible with composable infrastructure on the edge.
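To make the pooling idea concrete, the core CDI concept can be sketched as shared device pools from which a "composed node" is assembled on demand and later released. The sketch below is purely illustrative: the `DevicePool`, `compose`, and `decompose` names are hypothetical and do not correspond to any real CDI management API.

```python
from dataclasses import dataclass, field

@dataclass
class DevicePool:
    """A shared pool of one device type (e.g. GPUs, FPGAs, NVMe drives)."""
    kind: str
    free: list = field(default_factory=list)

    def acquire(self, count):
        if count > len(self.free):
            raise RuntimeError(f"not enough free {self.kind} devices")
        taken, self.free = self.free[:count], self.free[count:]
        return taken

    def release(self, devices):
        self.free.extend(devices)

@dataclass
class ComposedNode:
    """A bare-metal node: a dynamic grouping of pooled devices."""
    name: str
    devices: dict

def compose(name, pools, request):
    # Pull the requested device counts out of each pool over the fabric.
    return ComposedNode(name, {kind: pools[kind].acquire(n)
                               for kind, n in request.items()})

def decompose(node, pools):
    # Return every device to its pool so the next workload can use it.
    for kind, devs in node.devices.items():
        pools[kind].release(devs)

pools = {
    "gpu": DevicePool("gpu", [f"gpu{i}" for i in range(8)]),
    "nvme": DevicePool("nvme", [f"nvme{i}" for i in range(16)]),
}
node = compose("ai-train-1", pools, {"gpu": 4, "nvme": 2})
decompose(node, pools)  # recompose on the fly as workloads shift
```

The point of the sketch is the lifecycle: devices are never tied to one chassis, so the same hardware can back a GPU-heavy AI job one hour and a storage-heavy job the next.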

Achieving Edge AI and HPC with CDI

To successfully deploy HPC and AI edge clusters, your design team needs to understand both accelerated computing and edge form factors. The good news is that several solutions have reduced and, in some cases, eliminated the barriers to achieving edge AI and HPC. Specifically, NVIDIA GPU acceleration enables on-site processing, which can be more efficient than shipping data over networks such as 5G.

Silicon Mechanics addresses a multitude of these challenges with the Titania CDI Edge Cluster. Our experts create custom-engineered CDI for flexible system configurations, complete with NVIDIA 200Gb/s HDR InfiniBand and bare-metal performance within physically limited environments like the edge. The Titania cluster is purpose-built to support cloud and accelerated workloads at the edge without the need for virtualization, delivering bare-metal performance for HPC, AI, and even ML.

Resources including GPUs, FPGAs, and NVMe storage are seamlessly connected over a PCIe fabric, allowing you to scale each element independently. The Titania cluster is ready to deploy into diverse operating environments, with ruggedized and MIL-SPEC rack design options.

Using CDI Technologies to Your Advantage

These emerging CDI technologies allow you to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. You also benefit from extreme flexibility, being able to dynamically recompose systems and support nearly any workload. Thanks to innovative engineering, these benefits are now available on the edge.

Whether you invest in centralized cloud or in distributed edge devices for your computing needs, networking technology is always a major investment. Edge computing has both advantages and disadvantages, but most IT experts agree that it isn't going away, especially with the forecasted growth of 5G.

The experts at Silicon Mechanics build custom reference architectures like Titania for in-field HPC and even deep learning (DL) inference that operate on limited power, in small footprints, and in unique edge conditions. Learn even more about the Silicon Mechanics Titania Edge CDI Cluster here.
