Siemens and nVent to Release Liquid Cooling and Power Reference Architecture for AI Data Centers

Siemens and nVent are collaborating on a liquid cooling and power reference architecture for hyperscale AI workloads. The joint architecture is designed to support 100 MW hyperscale AI data centers built to house large-scale, liquid-cooled AI infrastructure, such as NVIDIA GB200 NVL72 systems. It presents a Tier […]

Lenovo Neptune Liquid Cooling Ecosystem Expands: New Compact ThinkSystem V4 Solutions Deliver Massive Efficiency for HPC and AI Workloads

Today, Lenovo expanded its industry-leading Lenovo Neptune liquid-cooling technology to more servers with new ThinkSystem V4 designs that help businesses boost intelligence, consolidate IT and lower power consumption in the new era of AI. Powered by Intel® Xeon® 6 processors with P-cores, the new Lenovo ThinkSystem SC750 V4 supercomputing infrastructure combines peak performance with advanced efficiency to deliver faster insights in a space-optimized design for intensive HPC workloads.

Kickstart Your Business to the Next Level with AI Inferencing

[SPONSORED GUEST ARTICLE] Check out this article from HPE (with NVIDIA). The need to accelerate AI initiatives is real and widespread across all industries. The ability to integrate and deploy AI inferencing with pre-trained models can reduce development time while delivering scalable, secure solutions…

How You Can Use Artificial Intelligence in the Financial Services Industry

In financial services, every competitive advantage counts. Your competition has access to most of the same data you do, as historical data is available to everyone in your industry. Your advantage comes from the ability to exploit that data better, faster, and more accurately than your competitors. In a rapidly fluctuating market, the ability to process data faster lets you respond more quickly than ever before. This is where AI-first intelligence can give you a leg up.

The Anyscale Platform™, Built on Ray, Introduces New Breakthroughs in AI Development, Experimentation and AI Scaling

Anyscale, the company behind Ray open source, the unified compute framework for scaling any machine learning or Python workload, announced several new advancements on the Anyscale Platform™ at AWS re:Invent in Las Vegas, NV. The new capabilities extend beyond the advantages of Ray open source to make AI/ML and Python workload development, experimentation, and scaling even easier for developers.
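For readers unfamiliar with Ray, the sketch below illustrates its core open-source pattern of turning ordinary Python functions into parallel tasks; it is a minimal, illustrative example and is not drawn from the Anyscale announcement itself.

```python
# Minimal Ray sketch: scale an ordinary Python function across available workers.
import ray

ray.init()  # starts a local Ray runtime; connects to a cluster if one is configured

@ray.remote
def score(record):
    # Placeholder for any Python/ML workload, e.g. model inference on one record.
    return record * 2

# Launch tasks in parallel and gather the results.
futures = [score.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 2, 4, 6, 8, 10, 12, 14]
```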

Exxact Partners with Run:ai to Offer Maximal Utilization in GPU Clusters for AI Workloads

Exxact Corporation, a leading provider of high-performance computing (HPC), artificial intelligence (AI), and data center solutions, now offers Run:ai as part of its solutions. This groundbreaking Kubernetes-based orchestration tool incorporates an AI-dedicated, high-performance super-scheduler tailored for managing GPU resources in AI clusters.

How Aerospace/Defense Can Harness Data with a Well-Designed AI Infrastructure

In this sponsored post, our friends over at Silicon Mechanics discuss how solving mission-critical problems with AI in the aerospace and defense industry is becoming more of a reality. Every day, new technologies emerge that simplify the deployment, management, and scaling of AI infrastructure to ensure long-term ROI. Asking a few key questions up front can make deploying AI workloads, and harnessing the full potential of data, in aerospace/defense far more practical and efficient.

Things to Know When Assessing, Piloting, and Deploying GPUs – Part 3

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-influenced system, there are many items to consider, such as assessing the new environment's required components, implementing a pilot program to learn about the system's future performance, and planning for eventual scaling to production.

Things to Know When Assessing, Piloting, and Deploying GPUs – Part 2

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-influenced system, there are many items to consider, such as assessing the new environment's required components, implementing a pilot program to learn about the system's future performance, and planning for eventual scaling to production.

Things to Know When Assessing, Piloting, and Deploying GPUs

In this insideHPC Guide, our friends over at WEKA suggest that when organizations decide to move existing or new applications to a GPU-influenced system, there are many items to consider, such as assessing the new environment's required components, implementing a pilot program to learn about the system's future performance, and planning for eventual scaling to production.