TL;DR
- Google unveiled a host of new AI capabilities to be integrated into Search via AI Mode, including agentic checkout and virtual try-ons.
- Google rolled out Gemini 2.5 Flash, the latest model in its flagship Gemini family.
- Warby Parker will make smart glasses based on Android XR, which is powered by Gemini 2.5.
At its I/O 2025 developer conference today, Google unveiled a sweeping reimagining of its core products, with AI deeply embedded across Search, Android, shopping, smart devices and creative tools.
The event spotlighted Google’s most advanced AI model yet, Gemini 2.5, which now powers a wide range of products, from real-time search and agentic shopping assistants to wearable glasses and generative image tools.
“We are in a new phase of the AI platform shift,” said Google CEO Sundar Pichai.
The centerpiece of the conference was an overhaul of Google Search. Executives introduced an upgraded ‘AI Mode,’ a new tab that turns Search into an intelligent, agentic assistant.
Powered by Gemini 2.5, AI Mode lets users pose longer, more complex queries, follow up conversationally, and even ask it to complete multistep tasks, such as finding event tickets or booking reservations.
Google also previewed ‘Deep Search,’ an advanced research tool that issues hundreds of queries on the user’s behalf and synthesizes the findings into expert-level reports. “You can bring your hardest question to the search box,” said Liz Reid, head of search, calling it a “glimpse of what’s to come.”
AI Overviews, which summarizes answers from across the web, has expanded to more than 1.5 billion monthly users. The feature will soon run on Gemini 2.5 for improved accuracy and speed.
Google also unveiled two new subscription plans: Google AI Pro (to replace AI Premium) for $19.99 a month and Google AI Ultra for $249.99 a month.
3D chats and live translation
Coming soon is Google Beam, a 3D video-chat technology that makes remote participants appear as if they’re in the same room.

Live translation in Google Meet is already available to subscribers in English and Spanish, with more languages coming soon. Enterprise users will have access later this year.
Shopping becomes agentic
E-commerce is another frontier being reshaped. Google is embedding shopping functionality directly into Search, with a new ‘agentic checkout’ feature that can track prices, add items to carts, and complete purchases with one tap. Users can set a target price and let the AI monitor listings, then buy automatically once that price is met.
A new virtual try-on tool uses a generative image model trained for fashion. From a full-body photo, users can see how clothes would realistically look on them, accounting for body shape, fabric draping, and lighting. The feature is available in Search Labs now and will roll out more broadly in the coming months.
Android XR and smart glasses
Google also revealed advances in wearable technology with the debut of Android XR, its platform for extended-reality devices. The company is working with Samsung on Project Moohan, an immersive headset that will be among the first Android XR devices.
A live demo showed glasses powered by Gemini responding to real-world prompts: identifying music playing, recognizing landmarks, booking coffee meetings, and even performing live translation during a multilingual conversation.
Partnerships with Warby Parker and Gentle Monster will bring consumer-ready glasses to market later this year.
Gemini 2.5 and multimodal models
Throughout the conference, Google touted rapid progress in its Gemini model family. Gemini 2.5 Pro now leads most major AI benchmarks, spanning reasoning, code generation, and multimodal tasks. The Gemini app has over 400 million monthly active users, and Google now processes over 480 trillion tokens a month across its products, up from 9.7 trillion a year ago.
Deep Think, an enhanced reasoning mode for Gemini 2.5 Pro, will be released to trusted testers soon. It excels at complex reasoning tasks and on math competition benchmarks such as USAMO 2025.
For creators, Google announced Imagen 4 and Veo 3, its next-generation image and video generation models. Imagen 4 delivers rich visuals with detailed typography, texture, and creative control. Veo 3 brings high-fidelity video generation with native audio support – capable of producing full scenes with dialogue, environmental sound, and movement.
Google also showed how Gemini can turn rough sketches into working 3D web apps, create podcasts from documents, and simulate complex physics scenes.
One AI assistant, everywhere
Gemini is increasingly being positioned as a universal assistant that is personal, proactive, and powerful. New features include the following:
- Personal Context: Gemini can reference past emails, calendar events, and Google Docs to craft customized replies or recommendations – with user permission.
- Gemini Live: Now available on Android and iOS, this lets users interact with Gemini via voice, screen sharing, and camera.
- Canvas: A collaborative space for co-creating interactive apps, web pages, quizzes, and podcasts from uploaded documents.