Overcoming Challenges to Deep Learning Infrastructure

With use cases like computer vision, natural language processing, and predictive modeling, deep learning (DL) provides far-reaching applications that change the way technology can impact human existence. The possibilities are vast, and we’ve only scratched the surface of its potential. There are three significant obstacles to be aware of when designing a deep learning infrastructure: scalability, customizing for each workload, and optimizing workload performance.

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, equipped with fast, high-bandwidth HBM2e memory and Intel® Advanced Matrix Extensions, can match the performance of GPUs for many AI and HPC workloads.

Successfully Deploy Composable Infrastructure on the Edge to Improve HPC and AI Outside of Traditional Data Centers

Emerging composable disaggregated infrastructure (CDI) technologies allow you to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. You also gain extreme flexibility, with the ability to dynamically recompose systems to support nearly any workload. Thanks to innovative engineering, these benefits are now available at the edge.

Immersion Cooling for Transportable HPC

In this sponsored post from our friends over at One Stop Systems, Product Marketing Manager Braden Cooper discusses how the latest high-performance computing systems for AI applications generate more heat than ever before. Data centers have begun adopting immersion cooling solutions, which submerge temperature-sensitive electronics in a non-conductive fluid that efficiently dissipates the heat.

How Well-Designed Infrastructure Can Overcome Challenges to Big Data Analytics Workloads

In this sponsored post, our friends over at Silicon Mechanics discuss how big data analytics and predictive analytics powered by deep learning (DL) are essential strategies for making smarter, more informed decisions and gaining competitive advantages for your organization. But these tactics are not simple to execute, and they require a properly designed hardware infrastructure.

How Aerospace/Defense Can Harness Data with a Well-Designed AI Infrastructure

In this sponsored post, our friends over at Silicon Mechanics discuss how solving mission-critical problems using AI in the aerospace and defense industry is becoming more of a reality. Every day, new technologies emerge that can simplify the deployment, management, and scaling of AI infrastructure to ensure long-term ROI. Asking yourself several key questions can make deploying AI workloads, and harnessing the full potential of data, in aerospace/defense far more practical and efficient.

Accelerating the Modern Data Center – Gear Up for AI

Modern applications are transforming every business. From AI for better customer engagement, to data analytics for forecasting, to advanced visualization for product innovation, the need for accelerated computing is rapidly increasing. But enterprises face challenges with using existing infrastructure to power these applications.

Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense

The aerospace/defense industry often must solve mission-critical problems as they arise while also planning and designing for the rigors of future workloads. Technology advancements let aerospace/defense agencies gain the benefits of AI, but it’s essential to understand these advancements and the infrastructure requirements for AI training and inference.

Scalable Inferencing for Autonomous Trucking

In this sponsored post, Tim Miller, Vice President of Product Marketing at One Stop Systems, discusses autonomous trucking and explains that achieving AI Level 4 (no driver) in these vehicles requires powerful AI inference hardware capable of running and coordinating many different inferencing engines simultaneously.

Harvard’s Cannon Supercomputer is Anything but Loose

In this sponsored post, Dr. Scott Yockel, University Research Computing Officer at Harvard University, and Scott Tease, Vice President and General Manager of HPC and AI at Lenovo, discuss how the Cannon Supercomputer’s mission is clear: to be the computational powerhouse behind some of the most groundbreaking research in this world and beyond, from the impact of environmental pollutants on human health to the intricate study of black holes.