
Meta is splitting its AI organization into two teams—one focused on consumer products, another on AGI research—as the company scrambles to keep pace with OpenAI and Google while hemorrhaging the talent that built its flagship Llama models.
Key Points:
- 78% of the original Llama research team has left Meta, with many joining French rival Mistral AI
- The new structure creates an AI Products team for Meta AI features and an AGI Foundations unit for Llama development
- No job cuts reported, though some internal leadership roles have shifted
The restructuring, announced in an internal memo by Chief Product Officer Chris Cox this week, comes at a precarious moment for Meta's AI ambitions. Of the 14 researchers whose names appear on the landmark 2023 paper that introduced Llama to the world, only three remain at Meta: research scientist Hugo Touvron, research engineer Xavier Martinet, and technical program leader Faisal Azhar.
Nowhere has the talent drain been more visible than at French startup Mistral AI. The Paris-based company, founded by former Meta researchers Guillaume Lample and Timothée Lacroix, has recruited five authors from the original Llama paper, including Baptiste Rozière, Marie-Anne Lachaux, and Thibaut Lavril. What makes this especially painful for Meta is that Mistral builds advanced open-weight AI models that compete directly with Meta's Llama family.
The departures aren't limited to rank-and-file researchers. Joëlle Pineau, who led Meta's Fundamental AI Research (FAIR) group for nearly eight years and was pivotal in Llama's development, announced her departure in April. The rest of the departed researchers have spread across other major AI players: Naman Goyal joined Thinking Machines Lab, Aurélien Rodriguez is now at Cohere, Eric Hambro moved to Anthropic, Armand Joulin joined Google DeepMind, Gautier Izacard landed at Microsoft AI, and Edouard Grave joined open-science AI lab Kyutai.
Under the new organizational structure, Connor Hayes will lead the AI Products team, which oversees the Meta AI assistant, Meta's AI Studio, and AI features within Facebook, Instagram, and WhatsApp. The AGI Foundations unit, co-led by Ahmad Al-Dahle and Amir Frenkel, will handle Llama models and efforts to improve capabilities in reasoning, multimedia, and voice.
The competition, meanwhile, is spending aggressively. ByteDance is reportedly planning to pour more than $12 billion into AI infrastructure in 2025, including $5.5 billion on AI chips in China and $6.8 billion overseas. That investment appears to be paying off: ByteDance's revenue jumped 29% to $155 billion in 2024, with international sales growing 63% to $39 billion, and the company is targeting $186 billion for 2025, nearly matching Meta's projected $187 billion.
The competitive pressure isn't just coming from Chinese rivals. Meta's top Llama model ranks just 23rd on current LLM leaderboards, behind models from Google (Gemini), OpenAI, Anthropic, xAI, DeepSeek, Qwen, and Hunyuan. Even more concerning, developers have increasingly turned to faster-evolving alternatives like Qwen and DeepSeek since the Llama 4 release, and Meta still lacks a model explicitly focused on reasoning tasks like multi-step problem-solving.
Internal challenges compound the external ones. Earlier this month, The Wall Street Journal reported that Meta's largest AI model to date, Llama Behemoth, had been delayed amid internal concerns over its performance and direction. The company has also cut jobs in its Reality Labs division because of operating losses, even as it continues recruiting AI engineers.
Meta is hoping that splitting a single large team into smaller teams will speed product development and give the company more flexibility as it adds additional technical leaders. But Cox's memo reveals the urgency: "Our new structure aims to give each org more ownership while minimizing (but making explicit) team dependencies."
To be fair, Meta isn't standing still, and it remains one of the most important contributors to the open AI ecosystem. The company recently launched the Llama Startup Program, offering eligible U.S.-based companies direct support from its Llama team and potential funding of up to $6,000 per month for six months. And it introduced Llama 4 Scout and Llama 4 Maverick in April 2025, touting them as "the first open-weight natively multimodal models with unprecedented context length support."
But the question is whether Meta can rebuild its AI expertise fast enough. The eleven researchers who have left since the Llama paper's publication averaged more than five years at the company, meaning Meta is losing deeply embedded researchers rather than short-term contractors. For a company that projected its generative AI products would bring in between $2 billion and $3 billion in revenue in 2025, with long-term forecasts ranging from $460 billion to $1.4 trillion by 2035, losing the architects of its AI foundation couldn't come at a worse time.
The bottom line is that as Meta strives to solidify its position in the AI-driven future, the departure of a significant cohort of its foundational Llama talent to competitors, particularly Mistral, presents a clear hurdle. The reorganization may help the company move faster, but it's starting from behind—and with much of its original brain trust now working for the competition.