NextFin News - Josh Woodward, Vice President of Google Labs and Google Gemini, presented a series of Gemini app updates during the Google I/O 2025 keynote at the Shoreline Amphitheatre in Mountain View, California on May 20, 2025. Introduced during the main keynote program and speaking after Sundar Pichai's remarks, Woodward framed the announcements around a single ambition: make Gemini "the most personal, proactive, and powerful AI assistant."
The following are Woodward's core statements and demonstrations from the keynote, organised by topic.
A personal, proactive, and powerful assistant
Woodward opened by describing the long-standing goal of building an assistant that does more than respond: it should understand and anticipate. He summarised the goal plainly: "An assistant that learns you, your preferences, your projects, your world, and you are always in the driver's seat."
He introduced the concept of "personal context," explaining that, with permission, Gemini can use relevant Google information — starting with search history and expanding to other apps — so the assistant becomes an extension of the user. He emphasised user control, saying users can "turn this on" and always view, manage, connect, and disconnect the Google apps that feed their personal context.
"What if your AI assistant was truly yours? Truly yours."
Proactive help and classroom scenarios
Woodward contrasted today's reactive AI with a more proactive assistant that can "see what's coming and help you prepare even before you ask." He offered a concrete scenario: if a student has a physics exam on the calendar, Gemini can notice it a week out and provide personalized quizzes generated from the student's own materials: notes, professor-provided content, and even photos of handwritten notes. He described a further creative step where Gemini can generate custom explainer videos tailored to a user's interests — for example, explaining thermodynamics using a cycling analogy if it knows the user cycles.
Gemini Live: real-time multimodal conversations
Announcing immediate availability, Woodward described Gemini Live as an interactive, conversational experience that now includes camera and screen sharing and supports over 45 languages in more than 150 countries. He noted that conversations in Gemini Live are longer and more natural than text chats and offered a personal endorsement: "it's great for talking through things on the drive into work in the morning." He also said Gemini Live is rolling out free on Android and iOS and will soon integrate with Google apps like Calendar, Maps, Keep and Tasks, so users can point the camera and have the assistant perform actions, such as adding invites or transcribing handwriting into Google Keep.
Deep Research and Canvas: working with files and co-creation
For deeper work, Woodward described updates to Deep Research that let users upload their own files to guide research agents — a top-requested feature — and teased upcoming connections to Google Drive and Gmail. He presented Canvas as an interactive co-creation space in Gemini that can transform uploaded reports with one tap into dynamic web pages, infographics, quizzes or podcasts in 45 languages. He emphasised that Canvas supports "vibe coding" and collaborative iteration so creators can build and share interactive apps and simulations with others.
Gemini in Chrome and context-aware browsing
Woodward announced Gemini in Chrome, an assistant available while browsing desktop web pages that understands page context automatically. He gave an example of using Gemini in Chrome to compare long product reviews on a camping site. He said the rollout to Gemini subscribers in the U.S. begins the week of the keynote.
Imagen 4: higher-quality image generation
Introducing Imagen 4 as Gemini's newest image generation model, Woodward described the model as a significant leap: richer colors, finer detail, better handling of shadows and water droplets, and improved text and typography. He demonstrated poster generation and highlighted that Imagen 4 makes creative choices about fonts, spacing and layout — for instance, rendering text in a font built from dinosaur bones when the subject calls for it — while being faster and higher-quality than prior versions. He also mentioned a super-fast Imagen 4 variant that is ten times faster than the previous model, to support rapid iteration.
Veo 3: state-of-the-art video with native audio
Woodward introduced Veo 3 as the next-generation video model and emphasised the model's native audio generation: sound effects, background audio, and spoken dialogue. He described Veo 3 as stronger in physics understanding and visual quality, and he played demonstrations showing character lip movement and emotional, photorealistic scenes with synchronized audio. His statement summarised the significance: "We're entering a new era of creation with combined audio and video generation that's incredibly realistic."
Flow: an AI filmmaking tool combining Veo, Imagen and Gemini
Building on Veo 3, Woodward announced Flow, an integrated AI filmmaking tool available the same day. He described Flow as a workspace where creators provide ingredients — characters, scenes, styles — and then use prompts plus precise camera instructions to assemble, iterate on and extend clips. He demonstrated uploading or generating images with Imagen inside Flow, assembling clips, adding a 10-foot-tall chicken by description alone, trimming or extending scenes, and exporting clips to standard editing software. Woodward presented Flow as a creative environment that preserves scene and character consistency and enables creators to iterate rapidly.
Deep Research outputs and co-creation features
Woodward reiterated Canvas and Deep Research capabilities by showing examples: converting a detailed report about comets into dynamic, shareable products such as interactive simulations and quizzes, and enabling collaborators to jump in, remix and modify shared apps. He emphasised that these features make it easier to distill complex material into digestible, engaging formats that can be reused and shared.
Creator tools and industry partnerships
Throughout his remarks, Woodward highlighted collaborations with artists and filmmakers. He described working with the film community — for example, giving top filmmakers access to Veo — and referenced a partnership to shape Veo's filmmaking capabilities. He played clips created with Veo combined with live action and said these collaborations helped build capabilities such as ingredient-based consistency and explicit camera control for storytellers.
Subscription changes: Google AI Pro and Google AI Ultra
Woodward closed the set of announcements by outlining updates to Google's AI subscription plans. He said Google will replace the previous tiers with Google AI Pro (available globally) and Google AI Ultra (initially in the U.S.). Pro provides a suite of AI products with higher rate limits and special features compared with the free version, including access to the Pro version of the Gemini app. Ultra is positioned for pioneers and early adopters, offering the highest rate limits, earliest access to new features across Google, and additional benefits such as YouTube Premium and large storage allocations. He also said Ultra subscribers will get early access to Flow with Veo 3 and to the 2.5 Pro Deep Think mode when it's ready.
Closing emphasis on user control and capability
Across announcements Woodward emphasised two recurring themes: empowering creators and keeping users in control. He repeatedly returned to the idea that personalization requires consent and management: users choose which Google products feed Gemini and can always manage those connections. At the same time, he framed the new generative media models and tools as practical ways for people to create websites, games, posters, videos and films "in minutes" and to explore ideas they could not easily produce before.
References
Additional context and official details for the Google I/O 2025 keynote and the Gemini app announcements are available from Google's product blog and event pages:
- Gemini gets more personal, proactive and powerful — Google Blog (May 20, 2025)
- Google I/O 2025 keynote — Google Blog (keynote coverage)
- Google AI Ultra: coverage of subscription plans and availability — TechCrunch (May 20, 2025)
Explore more exclusive insights at nextfin.ai.

