NextFin

Federighi Guides Apple on AI Through Google Deal as Siri Evolves into Conversational Chatbot

Summarized by NextFin AI
  • Apple has entered a $1 billion annual agreement with Google to utilize its Gemini models, aiming to enhance Siri's capabilities and compete with rivals like ChatGPT.
  • The project, codenamed “Campos,” represents a shift from a command-based assistant to a generative AI chatbot, expected to be showcased at WWDC in June 2026.
  • This collaboration allows Apple to leverage Google’s TPU-based cloud infrastructure for complex processing tasks, while maintaining user data privacy through its Private Cloud Compute architecture.
  • The phased rollout of this AI architecture aims to close the intelligence gap by fall 2026, potentially transforming the iPhone into a proactive AI companion.

NextFin News - In a decisive move to reclaim its position in the artificial intelligence arms race, Apple has finalized a landmark agreement with Google to power its next generation of conversational intelligence. According to reports from Bloomberg and industry insiders on January 22, 2026, Craig Federighi, Apple's Senior Vice President of Software Engineering, is now steering the company toward a more pragmatic, partnership-heavy AI roadmap. The centerpiece of this strategy is a rumored $1 billion annual deal that grants Apple access to Google's Gemini models to overhaul Siri, which has long been criticized for lagging behind rivals like OpenAI's ChatGPT.

The project, internally codenamed “Campos,” represents a fundamental pivot for the Cupertino-based giant. While Federighi previously insisted that Apple’s goal was not to build a standalone chatbot but to integrate AI seamlessly into the user experience, the market reality of 2026 has forced a change in trajectory. The new Siri, expected to be a highlight of the Worldwide Developers Conference (WWDC) in June 2026, will transition from a command-based assistant into a full-fledged generative AI chatbot capable of back-and-forth natural language conversations, image generation, and complex on-screen content analysis.

This strategic shift is driven by the need to modernize the iPhone’s core interface as it faces stiff competition from AI-native hardware and software ecosystems. Under the guidance of Federighi, Apple is reportedly utilizing a custom version of Google’s foundation models, known internally as Apple Foundation Models version 11. This collaboration allows Apple to leverage Google’s massive TPU-based cloud infrastructure for intensive processing tasks that current on-device hardware cannot yet handle autonomously. Despite this reliance on external servers, Apple maintains that user data will remain siloed and protected through its Private Cloud Compute architecture.

The financial and technical implications of the Google deal are profound. By paying an estimated $1 billion annually, Apple effectively outsources the high-risk, high-cost development of frontier LLMs (Large Language Models) while focusing its internal resources on "Apple Intelligence"—the proprietary layer that integrates these models into iOS, iPadOS, and macOS. Data suggests that while Apple's on-device models excel at low-latency tasks like text summarization and notification prioritization, the Gemini-powered "Campos" will handle the "heavy lifting" of creative generation and deep reasoning over web content.
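The hybrid split described above—small on-device models for low-latency tasks, with generative "heavy lifting" sent to the cloud—can be sketched as a simple routing decision. This is purely illustrative: the task names, tiers, and routing logic below are assumptions for the sake of the example, not a published Apple API.

```python
# Hypothetical sketch of hybrid task routing: latency-sensitive work stays
# on-device, while generative tasks go to a privacy-preserving cloud tier.
from dataclasses import dataclass
from enum import Enum, auto


class Tier(Enum):
    ON_DEVICE = auto()       # small local model: summaries, notifications
    PRIVATE_CLOUD = auto()   # larger cloud model behind Private Cloud Compute


# Task categories the article attributes to each tier (illustrative names).
CLOUD_TASKS = {"creative_generation", "deep_reasoning", "image_generation"}


@dataclass
class Request:
    task: str
    prompt: str


def route(request: Request) -> Tier:
    """Pick a processing tier for a request, defaulting to on-device."""
    if request.task in CLOUD_TASKS:
        return Tier.PRIVATE_CLOUD
    return Tier.ON_DEVICE


if __name__ == "__main__":
    print(route(Request("summarize_text", "Summarize my unread emails")))
    print(route(Request("image_generation", "Draw a watercolor cat")))
```

The design choice here is the default: anything not explicitly flagged as heavy falls back to the on-device tier, which matches the article's framing of cloud processing as the exception reserved for work the local hardware cannot handle.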

However, the transition is not without its internal tensions. Federighi has had to balance the company’s long-standing “privacy-first” marketing with the data-hungry nature of modern generative AI. To mitigate risks, Apple is reportedly implementing strict limits on how much conversational history Siri can retain, a move that may hinder personalization compared to Google’s native Gemini experience but preserves the brand’s integrity. Analysts suggest that this “cautious course” is a calculated gamble: Apple is betting that users will prefer a slightly less “omniscient” assistant if it means their personal data remains off Google’s primary advertising servers.
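A strict cap on retained conversational history, as described above, amounts to a bounded buffer that silently evicts the oldest turns—trading personalization for privacy. The cap size and structure below are assumptions for illustration, not a documented Apple design.

```python
# Illustrative sketch of a retention limit: a chat history that keeps only
# the N most recent turns, so older context is never available to the model.
from collections import deque


class BoundedHistory:
    """Conversation history that silently drops the oldest turns."""

    def __init__(self, max_turns: int = 10):
        # deque with maxlen evicts from the left when full
        self._turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self._turns.append((role, text))

    def context(self) -> list:
        """Return the retained turns, oldest first."""
        return list(self._turns)


history = BoundedHistory(max_turns=2)
history.add("user", "What's the weather?")
history.add("assistant", "Sunny, 22°C.")
history.add("user", "And tomorrow?")  # first turn is evicted here
```

After the third `add`, only the two most recent turns survive—the assistant can no longer see the original question, which is exactly the personalization cost the article describes.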

Looking ahead, the rollout of this new AI architecture will be phased. While a modest Siri update is expected with iOS 26.4 this spring, the full “Campos” experience is slated for the fall of 2026 alongside the iPhone 18 lineup. This timeline suggests that 2026 will be the year Apple finally closes the “intelligence gap.” If successful, Federighi’s strategy will transform the iPhone from a vessel for apps into a proactive AI companion, potentially sparking a new super-cycle of hardware upgrades as users seek the processing power required for these advanced features. The partnership with Google, once unthinkable given their rivalry in the mobile OS space, now stands as a testament to the era of “co-opetition” defining the AI age.

Explore more exclusive insights at nextfin.ai.

Insights

What are the foundational concepts behind Apple's new AI strategy?

What were the origins of the partnership between Apple and Google?

How is the integration of Google’s Gemini models expected to impact Siri?

What is the current market situation for AI chatbots like Siri and ChatGPT?

What feedback have users provided about Siri's previous performance?

What industry trends are influencing the development of conversational AI?

What recent updates have been made to Apple's AI initiatives?

What policy changes are influencing the artificial intelligence landscape?

How might Siri evolve in the next few years following the Campos project?

What long-term impacts could the partnership between Apple and Google have on the AI market?

What challenges does Apple face in balancing privacy concerns with AI development?

What controversies surround the use of user data in generative AI models?

How do Siri's upcoming capabilities compare to those of ChatGPT?

What are some historical cases of major tech companies collaborating on AI?

What similar concepts exist in the AI chatbot space that Apple could learn from?

What are the financial implications of the $1 billion deal between Apple and Google?

How does Apple's approach to AI differ from that of its competitors?

What risks does Apple take by relying on Google's technology for Siri?

What user experience improvements can we expect from the new version of Siri?
