Nvidia today announced its AI Foundry service and NIM inference microservices for generative AI with Meta’s Llama 3.1 collection of models, also introduced today.
The company said its AI Foundry allows organizations to create custom “supermodels” for domain-specific industry use cases using Llama 3.1 and Nvidia software and computing. Enterprises can train these models with proprietary data, as well as with synthetic data generated from Llama 3.1 405B and the Nvidia Nemotron Reward model.
The AI Foundry is powered by the Nvidia DGX Cloud AI platform to give enterprises compute resources that scale as AI demands change.
Nvidia said the offerings come as enterprises and nations developing sovereign AI strategies want to build custom large language models with domain-specific knowledge for genAI applications that reflect their business or culture.
“Meta’s openly available Llama 3.1 models mark a pivotal moment for the adoption of generative AI within the world’s enterprises,” said Jensen Huang, founder and CEO of NVIDIA. “Llama 3.1 opens the floodgates for every enterprise and industry to build state-of-the-art generative AI applications. NVIDIA AI Foundry has integrated Llama 3.1 throughout and is ready to help enterprises build and deploy custom Llama supermodels.”
“The new Llama 3.1 models are a super-important step for open source AI,” said Mark Zuckerberg, founder and CEO of Meta. “With NVIDIA AI Foundry, companies can easily create and customize the state-of-the-art AI services people want and deploy them with NVIDIA NIM. I’m excited to get this in people’s hands.”
To support enterprise deployments of Llama 3.1 models for production AI, Nvidia NIM inference microservices for Llama 3.1 models are available for download from ai.nvidia.com.
Enterprises can pair Llama 3.1 NIM microservices with Nvidia NeMo Retriever NIM microservices to create state-of-the-art retrieval pipelines for AI copilots, assistants and digital human avatars.
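NIM microservices expose a standard HTTP inference API, so applications call a deployed Llama 3.1 NIM much like any hosted chat endpoint. As a rough illustration, here is a minimal request-building sketch in Python; the endpoint URL and model identifier are illustrative assumptions, not values confirmed in the announcement:

```python
# Sketch of a request to a NIM microservice's chat-completions API.
# NIM_URL and the model name are hypothetical; they depend on your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="meta/llama-3.1-405b-instruct"):
    """Assemble an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_chat_request("Summarize our onboarding policy.")
# A running NIM container would accept this via, e.g.:
#   requests.post(NIM_URL, json=payload)
```

In a copilot or assistant, the retrieval step (via NeMo Retriever) would typically run first, and its results would be spliced into the prompt before the request is sent.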
Professional services firm Accenture is the first to adopt Nvidia AI Foundry, building custom Llama 3.1 models with the Accenture AI Refinery framework, both for its own use and for clients seeking to deploy generative AI applications that reflect their culture, languages and industries.
“The world’s leading enterprises see how generative AI is transforming every industry and are eager to deploy applications powered by custom models,” said Julie Sweet, chair and CEO of Accenture. “Accenture has been working with NVIDIA NIM inference microservices for our internal AI applications, and now, using NVIDIA AI Foundry, we can help clients quickly create and deploy custom Llama 3.1 models to power transformative AI applications for their own business priorities.”
Nvidia AI Foundry provides an end-to-end service for quickly building custom supermodels. It combines Nvidia software, infrastructure and expertise with open community models, technology and support from the Nvidia AI ecosystem.
With Nvidia AI Foundry, enterprises can create custom models using Llama 3.1 and the Nvidia NeMo platform — including the Nvidia Nemotron-4 340B Reward model, ranked first on the Hugging Face RewardBench leaderboard.
Once custom models are created, enterprises can package them as Nvidia NIM inference microservices and run them in production using their preferred MLOps and AIOps platforms, on their preferred cloud platforms and on Nvidia-Certified Systems from global server manufacturers.
Nvidia AI Enterprise experts and global system integrator partners work with AI Foundry customers to accelerate the entire process, from development to deployment.
Model Customization
Enterprises that need additional training data for creating a domain-specific model can use Llama 3.1 405B and Nemotron-4 340B together to generate synthetic data to boost model accuracy when creating custom Llama supermodels.
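The announcement does not spell out the generation pipeline, but the reward-filtering idea it describes — generate candidates with a large model, score them with a reward model, keep the best — can be sketched loosely. In this toy example the scoring plumbing is hypothetical and scores are plain floats standing in for reward-model outputs:

```python
def filter_synthetic_data(candidates, keep_fraction=0.5):
    """Keep the highest-scoring fraction of generated training examples.

    candidates: list of (prompt, response, reward_score) tuples. In a real
    pipeline the responses would come from a generator such as Llama 3.1 405B
    and the scores from a reward model such as Nemotron-4 340B Reward; here
    they are stand-in values for illustration.
    """
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    # Drop the scores; the surviving (prompt, response) pairs become
    # fine-tuning data for the custom model.
    return [(p, r) for p, r, _ in ranked[:keep]]
```

Filtering by reward before fine-tuning is what lets synthetic data improve, rather than dilute, model accuracy.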
Customers that have their own training data can customize Llama 3.1 models with Nvidia NeMo for domain-adaptive pretraining, or DAPT, to further increase model accuracy.
Nvidia and Meta have also teamed to provide a distillation recipe for Llama 3.1 that developers can use to build smaller custom Llama 3.1 models for generative AI applications. This enables enterprises to run Llama-powered AI applications on a broader range of accelerated infrastructure, such as AI workstations and laptops.
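The specific recipe is not detailed here, but the core idea of distillation is generic: a smaller student model is trained to match the temperature-softened output distribution of a larger teacher. A minimal sketch of that KL-divergence term (the standard technique, not Nvidia and Meta's particular recipe):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions —
    the core term of a standard knowledge-distillation loss."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))
```

A higher temperature exposes more of the teacher's ranking over unlikely tokens, which is much of what makes the smaller student usable on constrained hardware like workstations and laptops.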
Nvidia said that among the first to access the NIM microservices for Llama 3.1 are Aramco, AT&T and Uber. The Llama 3.1 collection of multilingual generative AI models comes in 8B-, 70B- and 405B-parameter sizes. It was trained on more than 16,000 Nvidia H100 Tensor Core GPUs and is optimized for Nvidia accelerated computing and software — in the data center, in the cloud and locally on workstations with Nvidia RTX GPUs or PCs with GeForce RTX GPUs.
NeMo Retriever RAG Microservices
Nvidia NeMo Retriever NIM inference microservices for retrieval-augmented generation (RAG) are designed to let organizations enhance response accuracy when deploying customized Llama supermodels and Llama NIM microservices in production.
Combined with Nvidia NIM inference microservices for Llama 3.1 405B, NeMo Retriever NIM microservices deliver what Nvidia describes as the highest open and commercial text Q&A retrieval accuracy for RAG pipelines.
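As a rough sketch of what the retrieval step in such a pipeline does — embed the query, rank passages by similarity, and splice the winners into the generation prompt — here is a toy example with hand-made vectors. This is a generic RAG illustration, not the NeMo Retriever API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, top_k=2):
    """Rank (text, embedding) pairs by similarity to the query vector.

    In a real pipeline the embeddings would come from a retriever service;
    here they are toy two-dimensional vectors.
    """
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

def build_rag_prompt(question, passages):
    """Splice retrieved passages into a grounded prompt for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The assembled prompt would then go to the Llama 3.1 NIM microservice for generation, grounding the model's answer in the retrieved passages.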
Hundreds of Nvidia NIM partners providing enterprise, data and infrastructure platforms can now integrate the microservices in their AI solutions to supercharge generative AI for the Nvidia community of more than 5 million developers and 19,000 startups.
Production support for Llama 3.1 NIM and NeMo Retriever NIM microservices is available through Nvidia AI Enterprise. Members of the Nvidia Developer Program will soon be able to access NIM microservices for free for research, development and testing on their preferred infrastructure, Nvidia said.