Relief for the Solution Architect: Pushing Back on HPC Cluster Complexity with Warewulf and Apptainer

[SPONSORED CONTENT]  How did you, at heart and by training a research scientist, financial analyst, or product design engineer doing multi-physics CAE, end up as a… systems administrator? You set out to be one thing and became something else entirely. You finished school and began working with some hefty HPC-class clusters. One […]

Overcoming Challenges to Deep Learning Infrastructure

With use cases like computer vision, natural language processing, predictive modeling, and much more, deep learning (DL) provides the kinds of far-reaching applications that change the way technology can impact human existence. The possibilities are limitless, and we’ve just scratched the surface of its potential. There are three significant obstacles for you to be aware of when designing a deep learning infrastructure: scalability, customizing for each workload, and optimizing workload performance.

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, with high bandwidth memory (HBM2e) and Intel® Advanced Matrix Extensions, can match the performance of GPUs for many AI and HPC workloads.

Successfully Deploy Composable Infrastructure on the Edge to Improve HPC and AI Outside of Traditional Data Centers

Emerging composable disaggregated infrastructure (CDI) technologies allow you to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. You also gain extreme flexibility, being able to dynamically recompose systems and support nearly any workload. Thanks to innovative engineering, these benefits are now available on the edge.

Advancing HPC through oneAPI Heterogeneous Programming in Academia & Research

oneAPI is an open industry effort supported by over 100 organizations: an open, unified, cross-architecture programming model for CPUs and accelerator architectures (GPUs, FPGAs, and others). Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated compute without proprietary lock-in, while enabling the integration of existing code.

Supercharging Modern Data Centers with NVIDIA Networking Solutions

In this webinar sponsored by PNY Technologies, you will learn the benefits, functionalities, and key features of Accelerated Ethernet technology by NVIDIA Spectrum and how it delivers end-to-end innovations and synergies to optimize modern applications from core to cloud to edge.

How Well-Designed Infrastructure Can Overcome Challenges to Big Data Analytics Workloads

In this sponsored post, our friends over at Silicon Mechanics discuss how big data analytics and predictive analytics through deep learning (DL) are essential strategies for making smarter, more informed decisions and providing competitive advantages for your organization. But these tactics are not simple to execute, and they require a properly designed hardware infrastructure.

Supermicro Announces 8U ‘Universal GPU’ Server for NVIDIA H100s

HPC-AI server maker Supermicro today announced what the company said is its most advanced GPU server incorporating eight NVIDIA H100 Tensor Core GPUs. Supermicro now offers three Universal GPU servers: the 4U, 5U and the new 8U. The Universal GPU platforms also support Intel and AMD CPUs up to 400W, 350W and higher, according to […]

How Aerospace/Defense Can Harness Data with a Well-Designed AI Infrastructure

In this sponsored post, our friends over at Silicon Mechanics discuss how solving mission-critical problems using AI in the aerospace and defense industry is becoming more of a reality. Every day, new technologies emerge that simplify the deployment, management, and scaling of AI infrastructure to ensure long-term ROI. There are several questions to ask yourself to ensure that deploying AI workloads, and harnessing the full potential of data, in aerospace/defense is practical and efficient.

Accelerating the Modern Data Center – Gear Up for AI

Modern applications are transforming every business. From AI for better customer engagement, to data analytics for forecasting, to advanced visualization for product innovation, the need for accelerated computing is rapidly increasing. But enterprises face challenges with using existing infrastructure to power these applications.