NextFin News - Meta Platforms Inc., led by CEO Mark Zuckerberg, has unveiled plans to launch a cutting-edge AI model codenamed 'Mango', focused on advanced video and image generation and targeted for release in the first half of 2026. The plans became public following an internal Q&A session led by Meta's Chief AI Officer, Alexandr Wang, late in 2025. Concurrently, Meta is developing 'Avocado', a next-generation large language model optimized for programming. Both models form part of the ambitious AI research agenda at Meta's Superintelligence Labs, a division formed after the summer 2025 reorganization and bolstered by recruits including more than 20 former OpenAI researchers.
Meta's Mango arrives amid fierce rivalry in generative AI, where companies such as Google and OpenAI have accelerated innovation in multimodal models; Google's Gemini 3 Flash and OpenAI's upgraded ChatGPT image generator exemplify the escalating competition. The strategy formalizes Meta's pivot away from the open-source approach epitomized by its Llama series toward proprietary models that restrict external access to core AI weights and architectures. The shift was driven both by market dynamics, including the modest reception of Llama 4 and intellectual-property concerns raised by competitors' adoption of Llama-based architectures, and by Meta's ambition to accelerate its technology leadership.
The underlying rationale ties the competitive landscape to future AI capabilities: image and video generation have emerged as central engagement drivers, with OpenAI CEO Sam Altman characterizing image generation as a 'sticky' feature critical to retaining consumer attention. Mango is intended as Meta's competitive response in this high-stakes arena, leveraging extensive data, compute investment, and talent acquisition. The model is expected not only to generate high-fidelity static and dynamic media but also to feed Meta's research into 'world models', systems that synthesize visual input to build a contextual understanding of the environment, a frontier for embodied AI cognition and advanced interaction paradigms.
Investment in AI R&D at Meta has surged accordingly, exemplified by a $14.3 billion stake in Scale AI and capital expenditure guidance now exceeding $70 billion annually, signaling a deep commitment. The ambition is to integrate Mango and Avocado into a layered AI ecosystem capable of transformative experiences, from social media content creation to enterprise applications that rely on programming and multimedia generation.
From a market perspective, Meta's dual-model strategy addresses the increasingly multimodal demands of AI consumers and developers, positioning Mango as a video- and image-focused complement to the text- and code-focused Avocado. This diversification aligns Meta with a key industry trend: multimodal AI systems, capable of synthesizing and reasoning across images, video, and language, now define the competitive frontier.
While the exact technical specifications and training paradigms for Mango remain undisclosed, the emphasis on video generation alongside images is significant. Video generation demands substantially more compute and more complex temporal-coherence mechanisms than static image models, suggesting Meta is targeting leading-edge performance and user experiences.
Looking forward, Meta’s repositioning away from open-source toward proprietary AI, coupled with aggressive talent acquisition and capital deployment, reflects broader industry shifts where safeguarding intellectual property while competing at scale becomes paramount. The move may also influence ecosystem dynamics, as developers and partners recalibrate strategies to accommodate licensing terms, platform integration models, and access controls around Meta’s upcoming AI offerings.
Ultimately, Mango holds potential to redefine user engagement across Meta’s social and enterprise platforms by enabling richer content creation and interaction capabilities. Combined with world model research, it also positions Meta at the forefront of next-generation AI systems that understand and operate within real-world contexts, driving innovation in augmented reality, virtual environments, and autonomous decision-making systems.
Meta's announcement reinforces the intensifying AI arms race under U.S. President Donald Trump's administration, which has prioritized maintaining U.S. leadership in strategic technology sectors. The continued surge in Meta's AI investment and model deployment points to further competitive dynamics ahead, in which proprietary innovations and multimodal AI capabilities will likely dictate leadership in the evolving digital economy.
Explore more exclusive insights at nextfin.ai.