Meta partners with AMD to power next phase of AI growth

• By Samriddhi Srivastava
Meta has signed a long-term agreement with Advanced Micro Devices (AMD) to supply up to 6 gigawatts of AMD Instinct GPU capacity, deepening its push to scale artificial intelligence infrastructure.

The multi-year partnership will see AMD power a significant portion of Meta’s AI compute capacity, as the social media group accelerates investment in what chief executive Mark Zuckerberg has described as “personal superintelligence”.

Under the agreement, the companies will align product roadmaps across silicon, systems and software. Meta said the deal forms part of a broader effort to build a more flexible and resilient infrastructure stack, reducing reliance on any single supplier as AI workloads intensify.

Shipments to support the first deployments are scheduled to begin in the second half of 2026. The systems will run on Helios, a rack-scale architecture jointly developed by the two companies and unveiled at last year’s Open Compute Project Global Summit.

AMD chief executive Lisa Su described the arrangement as a “multi-year, multi-generation collaboration” spanning Instinct GPUs, EPYC CPUs and rack-scale AI systems. The partnership, she said, would deliver high-performance, energy-efficient infrastructure tailored to Meta’s AI workloads and support one of the industry’s largest AI deployments.

Zuckerberg said the agreement marked an important step in diversifying Meta’s compute base. “I expect AMD to be an important partner for many years to come,” he said.

The deal sits within Meta’s wider “Meta Compute” initiative, which seeks to expand infrastructure capacity to meet surging demand from large language models and AI-driven services across its platforms. In addition to external partnerships, the company is advancing its in-house Meta Training and Inference Accelerator (MTIA) silicon programme.

By adopting what it calls a portfolio-based approach, Meta is combining hardware from multiple suppliers with its own custom chips. The strategy mirrors a broader shift across the technology sector, where hyperscalers are redesigning data centre architecture to handle increasingly complex AI training and inference tasks.

The scale of the agreement — measured in gigawatts of compute capacity — underscores the intensifying competition among chipmakers to secure long-term AI contracts with major cloud and platform companies. It also reflects Meta’s ambition to remain at the forefront of AI development as rivals step up spending on next-generation infrastructure.

With first deployments set for 2026 and roadmaps aligned across hardware and software, the partnership positions AMD as a central supplier in Meta’s AI expansion, while giving Meta additional leverage and flexibility as the global AI race accelerates.