NVIDIA AI-ready servers will include NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and NVIDIA AI Enterprise software to enable enterprises to fine-tune generative AI foundation models and deploy generative AI applications like intelligent chatbots, search and summarization tools. These servers also provide NVIDIA-accelerated infrastructure and software to power VMware Private AI Foundation with NVIDIA.

NVIDIA L40S-powered servers from leading global system manufacturers — Dell Technologies, Hewlett Packard Enterprise and Lenovo — will be available by year-end to accelerate enterprise AI.

“A new computing era has begun,” said Jensen Huang, founder and CEO of NVIDIA. “Companies in every industry are racing to adopt generative AI. With our ecosystem of world-leading software and system partners, we are bringing generative AI to the world’s enterprises.”

NVIDIA AI-ready servers are an ideal platform for businesses that will deploy VMware Private AI Foundation with NVIDIA.

“Generative AI is supercharging digital transformation, and enterprises need a fully integrated solution to more securely build applications that enable them to advance their business,” said Raghu Raghuram, CEO of VMware. “Through the combined expertise of VMware, NVIDIA and our server manufacturer partners, businesses will be able to develop and deploy AI with data privacy, security and control.”

NVIDIA AI-ready servers are designed to provide full-stack accelerated infrastructure and software for industries racing to adopt generative AI for a broad range of applications, including drug discovery, retail product descriptions, intelligent virtual assistants, manufacturing simulation and fraud detection.

The servers feature NVIDIA AI Enterprise, the operating system of the NVIDIA AI platform. The software provides production-ready enterprise support and security for more than 100 frameworks, pretrained models, toolkits and tools, including NVIDIA NeMo for large language models, NVIDIA Modulus for simulations, NVIDIA RAPIDS for data science and NVIDIA Triton Inference Server for production AI.
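For context, the RAPIDS component of this stack exposes a pandas-style, GPU-accelerated DataFrame API through its cuDF library. The minimal sketch below illustrates the kind of data-science step such a server would accelerate, for example when exploring features for fraud detection; the file name and column names are illustrative placeholders, not details from the announcement.

```python
import cudf

# Load a table of transactions directly into GPU memory
# (the CSV path and columns are hypothetical examples).
df = cudf.read_csv("transactions.csv")

# pandas-like, GPU-accelerated aggregation: average amount per merchant.
summary = df.groupby("merchant_id")["amount"].mean().reset_index()

print(summary.head())
```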

Built to handle complex AI workloads with billions of parameters, L40S GPUs include fourth-generation Tensor Cores and an FP8 Transformer Engine, delivering more than 1.45 petaflops of tensor processing power and up to 1.7x the training performance of the NVIDIA A100 Tensor Core GPU.

For generative AI applications such as intelligent chatbots, assistants, search and summarization, the NVIDIA L40S enables up to 1.2x more generative AI inference performance than the NVIDIA A100 GPU.
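As a rough illustration of how a deployed generative AI service on this stack might be queried, the sketch below uses Triton Inference Server's standard Python HTTP client. The server address, model name "chat_llm" and tensor names are assumptions for the example, not details from the announcement.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a running Triton Inference Server (default HTTP port 8000;
# the address here is an assumption).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request for a hypothetical text-generation model that accepts
# tokenized input IDs as INT32.
input_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int32)
infer_input = httpclient.InferInput("input_ids", list(input_ids.shape), "INT32")
infer_input.set_data_from_numpy(input_ids)

requested_output = httpclient.InferRequestedOutput("output_ids")

# Run inference on the GPU-backed server and read back the generated tokens.
response = client.infer(
    model_name="chat_llm",
    inputs=[infer_input],
    outputs=[requested_output],
)
print(response.as_numpy("output_ids"))
```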

Integrating NVIDIA BlueField-3 DPUs drives further speedups by accelerating, offloading and isolating the tremendous compute load of virtualization, networking, storage, security and other cloud-native AI services.

NVIDIA ConnectX-7 SmartNICs offer advanced hardware offloads and ultra-low latency, delivering best-in-class, scalable performance for data-intensive generative AI workloads.