Oct. 1, 2024 — IBM announced the availability of NVIDIA H100 Tensor Core GPU instances on IBM Cloud. They join IBM Cloud’s existing lineup of accelerated computing offerings for enterprise AI implementations.
IBM said that despite increasing interest in generative AI, only 39 percent of organizations surveyed say they are currently implementing or operating gen AI for innovation and research. Sixty-nine percent expect to use generative AI for open innovation by 2025, up from 29 percent in 2022.
Clients looking to implement AI can use IBM’s watsonx AI studio, data lakehouse and governance toolkit.
Last year, IBM made NVIDIA A100 Tensor Core GPUs available on IBM Cloud, either via the watsonx platform or as GPUaaS for custom needs.
NVIDIA said the new H100 Tensor Core GPU can deliver up to 30X faster inference performance than the A100. It has the potential to give IBM Cloud customers a range of processing capabilities while also addressing the cost of enterprise-wide AI tuning and inferencing. Businesses can start small on NVIDIA L40S and L4 Tensor Core GPUs, training small-scale models, fine-tuning models, or deploying applications such as chatbots, natural language search and forecasting tools. As their needs grow, IBM Cloud customers can adjust their spend accordingly, eventually harnessing the H100 for the most demanding AI and HPC use cases.
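As a rough illustration of this scale-as-you-grow approach, the Python sketch below routes a workload description to a GPU class. The tier boundaries, workload categories and the Workload helper are assumptions made for the example, not IBM Cloud instance profiles or official sizing guidance.

```python
# Illustrative sketch only: the tiers, thresholds and labels below are
# assumptions for demonstration, not actual IBM Cloud instance profiles.

from dataclasses import dataclass


@dataclass
class Workload:
    kind: str              # e.g. "inference", "fine-tune", "pretrain"
    model_params_b: float  # model size in billions of parameters


def pick_gpu_tier(w: Workload) -> str:
    """Pick a GPU class for a workload, mirroring the scale-as-you-grow idea."""
    if w.kind == "inference" and w.model_params_b <= 13:
        return "L4"    # lightweight inference: chatbots, search, forecasting
    if w.kind in ("inference", "fine-tune") and w.model_params_b <= 70:
        return "L40S"  # mid-size fine-tuning and heavier inference
    return "H100"      # most demanding training and HPC workloads


if __name__ == "__main__":
    for wl in (Workload("inference", 7), Workload("fine-tune", 34), Workload("pretrain", 180)):
        print(f"{wl.kind:>9} / {wl.model_params_b:>5}B params -> {pick_gpu_tier(wl)}")
```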
By offering direct access to these NVIDIA GPUs on IBM Cloud, in VPC and managed Red Hat OpenShift environments, we are making it easier for enterprises to transform and gain a competitive advantage with generative AI. Combined with watsonx for building AI models and managing data complexity and governance, as well as our security tools, enterprises of all sizes have the potential to tackle the challenges of scaling AI.
IBM Cloud offers a comprehensive platform for enterprises to build custom AI applications, manage their data, and support security and compliance initiatives.
- Data privacy, security and compliance support. IBM Cloud applies multi-level security protocols designed to protect AI and HPC processes, guard against data leakage and address data privacy concerns. It also includes built-in controls to establish infrastructure and data guardrails for AI workloads.
- AI model governance. Operationalizing AI efficiently is not possible without end-to-end AI lifecycle tracking, with automated processes for clarity, monitoring and cataloging. Built on the IBM® watsonx AI and data platform, watsonx.governance helps direct and manage organizations’ AI activities and monitors them for quality and drift (a minimal drift-check sketch follows this list). Regulators and auditors can access documentation that explains a model’s behavior and predictions.
- Deployment automation. IBM Cloud automates the deployment of AI-powered applications to help reduce the time and errors associated with manual configuration. It also provides essential services such as AI lifecycle management solutions, a serverless platform, storage, security and compliance-monitoring solutions.
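To make the drift monitoring mentioned above concrete, the sketch below computes the population stability index (PSI), a common drift metric, with NumPy. It is a generic illustration under assumed inputs, not the watsonx.governance API; the feature samples, bin count and the 0.2 rule of thumb are assumptions for the example.

```python
# Minimal drift-check sketch: population stability index (PSI) between a
# reference (training) feature distribution and live (production) data.
# Generic illustration only; not the watsonx.governance API.

import numpy as np


def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of one feature; a larger PSI means more drift."""
    # Bin edges come from the reference data so both samples share them.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid division by zero / log(0) for empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.normal(loc=0.0, scale=1.0, size=10_000)   # reference sample
    production_scores = rng.normal(loc=0.3, scale=1.1, size=5_000)  # drifted live sample
    psi = population_stability_index(training_scores, production_scores)
    # Common rule of thumb: PSI above ~0.2 suggests drift worth investigating.
    print(f"PSI = {psi:.3f}")
```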
The NVIDIA H100 Tensor Core GPU instances on IBM Cloud are now available in our multi-zone regions (MZRs) in North America, Latin America, Europe, Japan and Australia.