Arm and Meta announced the extension of their partnership to scale AI across multiple layers of compute – spanning AI software and data center infrastructure.
The multi-year partnership builds on the ongoing hardware and software co-design efforts between the two companies, combining Arm AI chips with Meta’s AI-driven products, infrastructure, and open technologies.
“From milliwatt-scale devices powering on-device intelligence to megawatt-scale systems training the world’s most advanced AI models, the collaboration will enable AI across multiple types of compute, workloads, and experiences that power Meta’s global platforms,” the companies said.
“From the experiences on our platforms to the devices we build, AI is transforming how people connect and create. Partnering with Arm enables us to efficiently scale that innovation to the more than 3 billion people who use Meta’s apps and technologies,” said Santosh Janardhan, Head of Infrastructure at Meta.
Meta’s AI ranking and recommendation systems will run on Arm’s Neoverse-based data center platforms “to deliver higher performance and lower power consumption compared to x86 systems,” Meta said, adding that Arm Neoverse will also allow it to achieve performance-per-watt parity.
“AI’s next era will be defined by delivering efficiency at scale. Partnering with Meta, we’re uniting Arm’s performance-per-watt leadership with Meta’s AI innovation to bring smarter, more efficient intelligence everywhere — from milliwatts to megawatts,” said Rene Haas, CEO of Arm.
The companies have worked to optimize Meta’s AI infrastructure software stack – from compilers and libraries to AI frameworks – for Arm architectures. This includes jointly tuning open source components such as Facebook GEneral Matrix Multiplication (FBGEMM) and PyTorch using Arm’s vector extensions and performance libraries, producing “measurable gains” in inference efficiency and throughput, the companies said. These optimizations are being contributed back to the open source community to broaden their impact across the global AI ecosystem.
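To make the kind of tuning at stake here concrete, the sketch below contrasts a naive matrix multiply with a loop-blocked (tiled) version, the classic restructuring that GEMM libraries such as FBGEMM apply so that tiles of the operands stay resident in cache. This is an illustrative Python sketch of the algorithmic idea only, not FBGEMM code; production libraries implement it in vectorized C++ and assembly tuned to each architecture’s vector extensions.

```python
def gemm_naive(A, B, n):
    # Textbook i-j-k triple loop: walks B column-wise in the inner loop,
    # which is cache-unfriendly for row-major storage.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(n):
                acc += A[i][k] * B[k][j]
            C[i][j] = acc
    return C

def gemm_blocked(A, B, n, bs=4):
    # Tiled i-k-j ordering: processes bs x bs blocks so the working set
    # of A, B, and C tiles can stay in cache between reuses.
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

# Small integer-valued inputs so both orderings produce identical floats.
n = 8
A = [[float((i * n + j) % 7) for j in range(n)] for i in range(n)]
B = [[float((i * n + j) % 5) for j in range(n)] for i in range(n)]
assert gemm_naive(A, B, n) == gemm_blocked(A, B, n)
print("blocked and naive GEMM agree")
```

In a compiled implementation, the blocked ordering is also what makes it straightforward to vectorize the innermost loop over `j` with the SIMD units that Arm’s vector extensions provide.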
The partnership also includes collaboration on AI software optimizations across the PyTorch machine learning framework, the ExecuTorch edge-inference runtime, and the vLLM data center inference engine, building on ExecuTorch’s existing optimization with Arm KleidiAI, which is designed to improve efficiency. Together, this work will simplify model deployment and increase the performance of AI applications from edge to cloud, the companies said.
These open source technology projects are central to Meta’s AI strategy – enabling the development and deployment of everything from recommendations to conversational intelligence. Both companies intend to continue extending future optimizations to these open source projects, enabling millions of developers worldwide to build and deploy efficient AI everywhere on Arm.
From megawatt-scale data centers to foundational AI software, the Arm and Meta partnership is a full-stack collaboration scaling AI across every layer of compute, aiming to deliver the next era of intelligent, efficient, and connected experiences to billions worldwide.




