NextFin

Josh Woodward Vibe-Codes a Fishing Game Live with Gemini 3

Summarized by NextFin AI
  • Josh Woodward, VP of Google Labs, demonstrated Gemini 3 at a live event, showcasing AI Studio's vibe-coding capabilities for real-time app development.
  • The demo involved creating a quirky fishing game based on a deliberately loose prompt, illustrating the model's ability to generate and debug applications interactively.
  • Woodward emphasized the importance of the planning phase in AI Studio, which aids in transparency and understanding of the generated components.
  • The session highlighted the potential of agentic workflows in simplifying complex processes, showcasing how users can build applications without extensive programming knowledge.
NextFin News

Josh Woodward, vice president responsible for Google Labs and the Gemini product, joined a live Gemini 3 demonstration in Mountain View on November 18, 2025 to show AI Studio's "vibe coding" capabilities. The session, presented by the Google AI Studio team alongside colleagues from the model and labs groups, invited guests to build apps live with Gemini 3 and to iterate on them in real time. During the segment Woodward dictated a single prompt and let the model generate a working mini-game while the hosts and attendees observed the planning and debugging steps.

The prompt: a deliberately strange fishing game

Woodward read the prompt aloud and framed the experiment as intentionally loose: "Create a fishing game. You're on a wooden boat. When you throw the fishing line, you should catch a fish, a whale, or a tire." He specified the control scheme and the setting: "When you hit space bar, it should throw the fishing line. If you catch the whale, you have to hit space bar rapidly to reel him in. The pond should be in a lake in Oklahoma at sunset." He acknowledged the eccentricity of the brief — "This could be a terrible prompt. We'll see what Gemini will recover for me." — but proceeded to let the model run the one-shot generation and iterative fixes.
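The control scheme in the prompt amounts to a small state machine: one space-bar press casts, an ordinary catch resolves immediately, and a hooked whale demands rapid repeated presses. A minimal sketch of that logic in TypeScript is below; it is purely illustrative of the spec as dictated, not the code Gemini actually generated, and the `pressesToLandWhale` threshold is a made-up tuning constant.

```typescript
// Hypothetical sketch of the prompt's control scheme: space bar casts the
// line; a fish or tire is caught immediately; a whale requires rapid
// repeated presses to reel in.
type Catch = "fish" | "whale" | "tire";
type State = "idle" | "reeling";

class FishingGame {
  state: State = "idle";
  hooked: Catch | null = null;
  reelPresses = 0;
  readonly pressesToLandWhale = 10; // assumed tuning constant

  // Each space-bar press advances the state machine. `draw` stands in for
  // whatever randomness decides what bites.
  pressSpace(draw: () => Catch): string {
    if (this.state === "idle") {
      this.hooked = draw();
      if (this.hooked === "whale") {
        this.state = "reeling";
        this.reelPresses = 0;
        return "hooked a whale -- keep pressing!";
      }
      return `caught a ${this.hooked}`;
    }
    // Reeling a whale: count rapid presses until it is landed.
    this.reelPresses++;
    if (this.reelPresses >= this.pressesToLandWhale) {
      this.state = "idle";
      return "landed the whale";
    }
    return "reeling...";
  }
}
```

A real implementation would hang this off a `keydown` listener and drive the boat, line and water animations from the returned state, but the press-to-cast, press-rapidly-to-reel structure is the whole of the dictated spec.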

Planning step and file-level explanations in AI Studio

As the model produced the application, Woodward pointed to the usefulness of AI Studio's planning phase and the way generated artifacts are explained. He highlighted the interface feature that surfaces simple descriptions for generated files, noting how hovering over checkboxes showed what each file does. He called out an example artifact by name: "this sky.tsx background component for Oklahoma sunset," and emphasized that the planning step helped make the build process transparent to the user.

Vibe-coding as sculpting and iteration

Woodward discussed vibe coding as an interactive, sculptural process: "you're sculpting this thing with the model." He framed AI Studio as enabling rapid iteration, saying that today's models allow for strong, practical iteration cycles: "with today's model you really can do great iterations." He also referenced the platform's origins and evolution, recalling early days when the tooling was known internally by different names and describing how the team imagined a new generation of makers who didn't have to be software engineers to build with these tools.

Agentic tooling and Opal

During discussion of experiments and tooling alongside the fishing demo, Woodward and other speakers touched on agent-building environments. Woodward described Opal and similar labs experiments as tools for chaining model calls into multi-step workflows, and the hosts discussed how team members were already using these systems for tasks like documentation checks and certifications. Woodward framed these agentic workflows as part of the broader push to make complex, multi-step processes accessible inside the studio environment.
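The idea described here, chaining model calls so each step's output feeds the next, can be shown with a generic pipeline runner. This is a sketch of the pattern only; Opal's actual interface is visual and its API is not described in the session, so the `Step` type and the example steps below are assumptions for illustration.

```typescript
// Generic illustration of chaining model calls into a multi-step workflow,
// in the spirit of agent builders like Opal. Each Step takes the previous
// step's output as its input.
type Step = (input: string) => Promise<string>;

async function runWorkflow(input: string, steps: Step[]): Promise<string> {
  let current = input;
  for (const step of steps) {
    current = await step(current); // output of one step feeds the next
  }
  return current;
}

// Stand-in steps for a documentation-check workflow like the one the hosts
// mentioned; in practice each would wrap a model or tool call.
const extractClaims: Step = async (doc) => `claims(${doc})`;
const checkClaims: Step = async (claims) => `checked(${claims})`;
const summarize: Step = async (report) => `summary: ${report}`;
```

For example, `runWorkflow("release notes", [extractClaims, checkClaims, summarize])` would thread the document through all three stages in order.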

Live demo: loading, casting and debugging

The generated scene arrived with the requested sunset and a catch log listing fish, whales and tires. Woodward and the hosts went full screen and attempted to cast: at first the cast did not behave as expected, and they discussed an apparent bug. Woodward narrated the interaction and the subsequent fix: "When you press cast, it doesn't work. You know, fix this bug." After the agent applied the fix, the line went into the water and Woodward made his first cast. He described the surprising result: "I got a whale. I haven't had one. First cast. First cast is a whale." He also noted the challenge of reeling it in, echoing the prompt: "If you catch the whale, you have to hit space bar rapidly to reel him in." The team observed that while the whale was difficult to fully land, the app demonstrated the intended behaviors.
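The session does not say what the cast bug actually was, but a classic cause of "press the key, nothing happens" in browser games is matching the wrong key identifier: for the space bar, `KeyboardEvent.key` is a literal space `" "`, while `KeyboardEvent.code` is `"Space"`. The snippet below is a hypothetical illustration of that failure mode, not the fix Gemini applied.

```typescript
// Purely illustrative: a buggy handler might test `event.key === "Space"`,
// which never matches because key is " " for the space bar. Checking both
// the key character and the physical-key code is the robust fix.
function isCastKey(key: string, code: string): boolean {
  return key === " " || code === "Space";
}
```

In a handler this would be used as `if (isCastKey(event.key, event.code)) castLine();`, where `castLine` is a hypothetical game function.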

Takeaway and next steps

As the demo wrapped, Woodward and the hosts summarized the outcome plainly: the takeaway was that the demo "actually worked," and in time. He was encouraged to continue building and to share the finished example; the hosts asked him to send and tweet the app when complete. The closing exchange emphasized the demo's purpose as both a showcase and a work in progress: the example illustrated how a single prompt, combined with AI Studio's planning and iterative tooling, can produce a playable prototype in front of an audience.

Practical notes about the session

The segment was presented as part of Google’s Gemini 3 launch demonstrations in Mountain View and was staged to showcase AI Studio's build tab, one-shot generation, iterative refinement and agentic extensions such as Antigravity and Opal. Woodward appeared in his role with Google Labs and Gemini leadership, and the demo included other team members from AI Studio and the model teams who assisted with the live flow and commentary.

References and further viewing: Vibe coding with Gemini 3 — live from Mountain View (video). Background on the Gemini 3 launch: A new era of intelligence with Gemini 3 (Google blog). For developer-focused context and tools announced alongside the model, see Gemini 3 for developers: New reasoning, agentic capabilities (Google Developers).

Explore more exclusive insights at nextfin.ai.

Insights

  • What is vibe coding and how does it function within AI Studio?
  • What are the origins and evolution of AI Studio tools?
  • What feedback have users provided regarding the usability of Gemini 3?
  • What are the current trends in AI-driven game development?
  • What recent updates have been announced for Gemini 3's capabilities?
  • How does Gemini 3 compare to previous versions of the Gemini product?
  • What challenges are associated with real-time coding and debugging in AI Studio?
  • What are the implications of agentic workflows in AI development?
  • How does the fishing game demo illustrate the capabilities of Gemini 3?
  • What are the potential future applications of vibe coding in various industries?
  • What limitations exist in the current AI Studio tools for novice users?
  • How does the interactive aspect of vibe coding enhance the development process?
  • What role does planning play in the development process within AI Studio?
  • What are the key features highlighted in the Gemini 3 live demonstration?
  • What controversies surround the use of AI in game development?
  • What competitive advantages does Gemini 3 offer over other AI development tools?
  • How can AI Studio facilitate collaboration among non-engineers in app development?
  • What are the expected long-term impacts of tools like Gemini 3 on software engineering?
