Meta Builds AI Infrastructure With NVIDIA

Meta’s AI Roadmap Supported by Large-Scale Deployment of NVIDIA CPUs, Networking and Millions of NVIDIA Blackwell and Rubin GPUs

News Summary:

  • Meta expands NVIDIA CPU deployment and significantly improves performance per watt in its data centers.
  • Meta scales out AI workloads with NVIDIA Spectrum-X Ethernet, supporting network efficiency and throughput.
  • Meta has adopted NVIDIA Confidential Computing, enabling AI capabilities while protecting user privacy.

NVIDIA today announced a multiyear, multigenerational strategic partnership with Meta spanning on-premises, cloud and AI infrastructure.

Meta will build hyperscale data centers optimized for both training and inference in support of the company’s long-term AI infrastructure roadmap. This partnership will enable the large-scale deployment of NVIDIA CPUs and millions of NVIDIA Blackwell and Rubin GPUs, as well as the integration of NVIDIA Spectrum-X™ Ethernet switches for Meta’s Facebook Open Switching System platform.

“No one deploys AI at Meta’s scale — integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said Jensen Huang, founder and CEO of NVIDIA. “Through deep codesign across CPUs, GPUs, networking and software, we are bringing the full NVIDIA platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier.”

“We’re excited to expand our partnership with NVIDIA to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” said Mark Zuckerberg, founder and CEO of Meta.

Expanded NVIDIA CPU Deployment for Performance Boost
Meta and NVIDIA are continuing to partner on deploying Arm-based NVIDIA Grace™ CPUs for Meta’s data center production applications, delivering significant performance-per-watt improvements in its data centers as part of Meta’s long-term infrastructure strategy.

The collaboration represents the first large-scale NVIDIA Grace-only deployment, supported by investments in codesign and software optimization across CPU ecosystem libraries to improve performance per watt with every generation.

The companies are also collaborating on deploying NVIDIA Vera CPUs, with the potential for large-scale deployment in 2027, further extending Meta’s energy-efficient AI compute footprint and advancing the broader Arm software ecosystem.

Unified Architecture Supports Meta’s AI Infrastructure
Meta will deploy industry-leading NVIDIA GB300-based systems and create a unified architecture that spans on-premises data centers and NVIDIA Cloud Partner deployments to simplify operations while maximizing performance and scalability.

In addition, Meta has adopted the NVIDIA Spectrum-X Ethernet networking platform across its infrastructure footprint to provide AI-scale networking, delivering predictable, low-latency performance while maximizing utilization and improving both operational and power efficiency.

Confidential Computing for WhatsApp
Meta has adopted NVIDIA Confidential Computing for WhatsApp private processing, enabling AI-powered capabilities across the messaging platform while ensuring user data confidentiality and integrity.

NVIDIA and Meta are collaborating to expand NVIDIA Confidential Computing capabilities beyond WhatsApp to emerging use cases across Meta’s portfolio, supporting privacy-enhanced AI at scale.

Codesigning Meta’s Next-Generation AI Models
Engineering teams across NVIDIA and Meta are engaged in deep codesign to optimize and accelerate state-of-the-art AI models across Meta’s core workloads. These efforts combine NVIDIA’s full-stack platform with Meta’s large-scale production workloads to drive higher performance and efficiency for new AI capabilities used by billions around the world.
