Meta has entered into a multi-year agreement with AMD to power its AI infrastructure with up to 6GW of AMD Instinct GPUs, the accelerator chips used to train and run modern AI models.
Under the new agreement, AMD will work with Meta to align their roadmaps across silicon, systems, and software, fostering vertical integration throughout the infrastructure stack.
According to the statement, this hardware-software collaboration will enable the company to innovate rapidly and at scale.
“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence. This is an important step for Meta as we diversify our compute. I expect AMD to be an important partner for many years to come.”
The agreement with AMD is a key component of the Meta Compute initiative, which aims to significantly scale infrastructure for the era of personal superintelligence and secure long-term leadership in AI.
By diversifying partnerships and technology stacks, the company is constructing a more resilient and flexible infrastructure. This strategy combines hardware from various partners with the rapidly advancing Meta Training and Inference Accelerator (MTIA) silicon program.
Dr. Lisa Su, Chair and CEO of AMD, said: “This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs, and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout.”