Nvidia AI Computing by HPE Portfolio Announced for Generative AI


On a day when Nvidia surpassed Microsoft as the most valuable U.S. company by market capitalization, Nvidia and Hewlett Packard Enterprise today announced Nvidia AI Computing by HPE, a portfolio of co-developed AI solutions designed to accelerate enterprise adoption of generative AI.

Included is HPE Private Cloud AI, which HPE said is a first-of-its-kind solution with deep integration of Nvidia AI computing, networking and software with HPE’s AI storage, compute and the HPE GreenLake cloud.

It’s designed to enable enterprises to sustainably develop generative AI applications. “Powered by the new OpsRamp AI copilot that helps IT operations improve workload and IT efficiency, HPE Private Cloud AI includes a self-service cloud experience with full lifecycle management and is available in four right-sized configurations to support a broad range of AI workloads and use cases,” HPE said.

HPE Private Cloud AI includes support for inference and RAG AI workloads that utilize proprietary data, control for data privacy, security, transparency, and governance requirements, and ITOps and AIOps capabilities to increase productivity.

The product suite will be available through a joint go-to-market strategy comprising sales teams and channel partners, training, and a global network of system integrators — including Deloitte, HCLTech, Infosys, TCS and Wipro.


The AI and data software stack starts with the Nvidia AI Enterprise software platform, which includes Nvidia NIM inference microservices. Nvidia AI Enterprise is designed to accelerate data science pipelines and streamline the development of copilots and other GenAI applications. Nvidia NIM provides easy-to-use microservices for AI model inference, smoothing the transition from prototype to production deployment.
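NIM microservices expose an OpenAI-compatible REST API, so applications built against prototype endpoints can move to a deployed NIM service with little change. As a minimal sketch — the endpoint URL and model name below are illustrative assumptions, not part of this announcement — a client call might look like:

```python
import json
import urllib.request

# Hypothetical local NIM endpoint; NIM services expose an
# OpenAI-compatible chat-completions route (URL assumed for illustration).
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_request(prompt: str, model: str = "meta/llama3-8b-instruct") -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


def query_nim(prompt: str) -> str:
    """POST the payload to the NIM service and return the reply text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a running NIM container at NIM_URL):
#   answer = query_nim("Summarize our Q2 sales data.")
```

Because the request and response shapes match the OpenAI API, the same client code can target cloud-hosted prototypes and on-prem NIM deployments by swapping the base URL.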

In addition, HPE AI Essentials software offers a set of AI and data foundation tools with a control plane for enterprise support, and AI services, such as data and model compliance and features designed to ensure that AI pipelines “are in compliance, explainable and reproducible throughout the AI lifecycle,” HPE said.

HPE Private Cloud AI also delivers an AI infrastructure stack that includes Nvidia Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for Nvidia L40S, H100 GPUs and the Nvidia GH200 NVL2 platform.

Announced during the HPE Discover keynote by HPE President and CEO Antonio Neri, who was joined on stage by Nvidia founder and CEO Jensen Huang, Nvidia AI Computing by HPE marks the expansion of a decades-long partnership and reflects a substantial commitment of time and resources from each company.


“Generative AI holds immense potential for enterprise transformation, but the complexities of fragmented AI technology contain too many risks and barriers that hamper large-scale enterprise adoption and can jeopardize a company’s most valuable asset – its proprietary data,” said Neri. “To unleash the immense potential of generative AI in the enterprise, HPE and NVIDIA co-developed a turnkey private cloud for AI that will enable enterprises to focus their resources on developing new AI use cases that can boost productivity and unlock new revenue streams.”

“Generative AI and accelerated computing are fueling a fundamental transformation as every industry races to join the industrial revolution,” said Huang. “Never before have NVIDIA and HPE integrated our technologies so deeply – combining the entire NVIDIA AI computing stack along with HPE’s private cloud technology – to equip enterprise clients and AI professionals with the most advanced computing infrastructure and services to expand the frontier of AI.”

HPE Private Cloud AI offers a self-service cloud experience enabled by HPE GreenLake cloud. Through a single, platform-based control plane, HPE GreenLake cloud services provide manageability and observability to automate, orchestrate and manage endpoints, workloads, and data across hybrid environments. This includes sustainability metrics for workloads and endpoints.

OpsRamp’s IT operations are integrated with HPE GreenLake cloud for observability and AIOps. OpsRamp now provides observability for the end-to-end Nvidia accelerated computing stack, including Nvidia NIM and AI software, Nvidia Tensor Core GPUs and AI clusters, as well as Nvidia Quantum InfiniBand and Nvidia Spectrum Ethernet switches. IT administrators can gain insights to identify anomalies and monitor their AI infrastructure and workloads across hybrid, multi-cloud environments.

The OpsRamp operations copilot utilizes Nvidia’s accelerated computing platform to analyze datasets for insights with a conversational assistant. OpsRamp will also integrate with CrowdStrike APIs for a unified service map view of endpoint security.

HPE said Deloitte, HCLTech, Infosys, TCS and Wipro announced support of the HPE-Nvidia AI portfolio and HPE Private Cloud AI.

HPE also announced support for Nvidia’s latest GPUs and CPUs, including:

  • HPE Cray XD670 supports eight Nvidia H200 NVL Tensor Core GPUs, which HPE said “is ideal for LLM builders.”
  • HPE ProLiant DL384 Gen12 server with Nvidia GH200 NVL2, suited for LLM consumers using larger models or RAG.
  • HPE ProLiant DL380a Gen12 server supports up to eight Nvidia H200 NVL Tensor Core GPUs, for LLM users seeking to scale GenAI workloads.
  • HPE will offer time-to-market support for the Nvidia GB200 NVL72 / NVL2, as well as the new Nvidia Blackwell, Rubin and Vera architectures.

HPE said its GreenLake for File Storage has achieved Nvidia DGX BasePOD certification and Nvidia OVX storage validation for enterprise file storage serving AI, GenAI and GPU-intensive workloads at scale. HPE will be a time-to-market partner on upcoming Nvidia reference architecture storage certification programs.

Regarding availability, HPE said:

  • HPE Private Cloud AI is expected to be generally available in the fall.
  • HPE ProLiant DL380a Gen12 server with Nvidia H200 NVL Tensor Core GPUs is expected to be generally available in the fall.
  • HPE ProLiant DL384 Gen12 server with dual Nvidia GH200 NVL2 is expected to be generally available in the fall.
  • HPE Cray XD670 server with Nvidia H200 NVL is expected to be generally available in the summer.
