Meta is accelerating development of its in-house AI chips, unveiling plans to release four new generations of its Meta Training and Inference Accelerator (MTIA) processors over the next two years to support the company’s rapidly expanding AI workloads.
According to a Meta blog post, the custom silicon is designed to power ranking, recommendation and generative AI systems across Meta’s platforms while improving efficiency compared with general-purpose processors. The company already deploys hundreds of thousands of MTIA chips to run inference workloads for organic content and advertising systems.
The next phase of the roadmap includes MTIA 300, which is already in production for training recommendation and ranking models. Future generations – MTIA 400, 450 and 500 – are expected to handle a broader range of tasks, with a particular focus on generative AI inference through 2027.
Meta said its chip strategy emphasizes rapid development cycles, with new designs released roughly every six months, faster than the typical one-to-two-year cadence in the semiconductor industry.
The company is also pursuing a “portfolio” approach, combining its custom silicon with chips from external vendors to scale AI infrastructure.
Read the Meta blog post.