At I/O 2025, Google unveiled its boldest vision yet for an AI-first future

A recap of the I/O keynote announcements. Credit: Google

On Tuesday, Google kicked off its annual I/O developer conference. Unlike previous years, Android OS barely got a mention during the opening keynote; instead, Google devoted the entire keynote to its latest slew of AI products and features, underscoring not only the current focus of the tech and media industries, but also the urgency that the Alphabet company must be feeling given the rising threats to its core search business.

AI Mode & The Agentic Web

The biggest announcement out of the opening keynote was that “AI Mode,” a Gemini-powered conversational tab that sits alongside Google search, will now be available to all U.S. users. Google has been quietly testing AI Mode with select users since early March this year, aiming to provide a more chatbot-like AI search experience in contrast to the AI Overviews summaries that had been widely rolled out over a year ago.

It’s clear that Google is going all in on replacing the traditional search experience, which has been the very foundation of its business, with chatbot-driven AI search. The imperative to do so clearly stems from shifting consumer behaviors and surging rivals offering AI-centric search experiences. A recent study by adMarketplace found that 55% of U.S. consumers have changed how they search for products and brands, shifting away from traditional search engines like Google and embracing platform-native search experiences, such as searching on Amazon or social platforms.

This accelerating shift to AI search is also raising alarms about the future of the ad-supported open web. When chatbots deliver summarized answers without requiring a user to click through to a source, the economic model that has sustained online publishers and businesses begins to erode. This could lead to a web that’s increasingly gated, paywalled, or locked into walled gardens, where discovery and monetization are controlled by a handful of dominant platforms. For Google, this creates a tricky dilemma: it must lead the shift toward AI-powered search to stay competitive, even though doing so may undercut the ad-supported web ecosystem that it helped build and profited from.

This paradigm shift is also highlighted by Microsoft’s announcement at its Build developer event on Monday, where the Windows company emphasized the idea of building for an “agentic web,” aka, a web designed to be navigated by AI agents rather than human users directly. The logic goes that, as people increasingly rely on AI agents to perform tasks, make decisions, and retrieve information on their behalf, the existing ad-supported model for the web will no longer be sustainable. Instead, creators and brands may need to optimize for AI intermediaries to ensure their content can reach the intended audiences.

Even in this scenario, however, advertising is not completely off the table. AI chatbots are costly to run, and scaling them will cost a pretty penny, so the money has to come from somewhere. Google wants power users to pay $250 a month for its new AI Ultra plan to access the latest Gemini models and features, but let’s face it, most people won’t be paying to use Google search, AI or not. Just as ad-supported tiers gradually crept into once ad-free streaming services to drive growth and reach profitable scale, AI search will adopt advertising at the chatbot layer before long.

To that end, Google has already announced plans to begin testing ads within AI Mode. With an initial focus on Search and Shopping ads, they will appear either beneath or integrated into AI-generated responses “where relevant.” Advertisers using Performance Max, Shopping, or Search campaigns with “broad match” will be eligible to appear in AI Mode.

The Universal Assistant and Personal Context

The uncertain future of search aside, the most impressive demo from this Google I/O keynote was, for my money, this pre-recorded video showcasing a multimodal Gemini in action:

Last year we unveiled Project Astra on the #GoogleIO stage. See how it’s evolved since then – and what might be possible in the future. pic.twitter.com/ObMi7gFrrl

This impressive demo grew out of Project Astra, which aims to turn Gemini into a multimodal “universal AI assistant” that can simply take care of things for you, both on demand and proactively. For instance, when a user pointed their camera at a bicycle component, Gemini recognized the part and offered guidance on repairs, even suggesting relevant tutorial videos. No release date or rollout plan has been announced, but the demo offered a promising glimpse into how AI agents with contextual understanding can provide proactive assistance.

Moreover, Gemini is gaining more agentic capabilities this summer, thanks to the upcoming integration of Project Mariner into AI Mode, enabling it to perform tasks such as purchasing event tickets, making restaurant reservations, and booking local appointments. For example, users can ask AI Mode to “find two affordable tickets for this Saturday’s Reds game in the lower level,” and the system will search across platforms like Ticketmaster and StubHub to provide options that meet the specified criteria. Similarly, it can handle restaurant bookings through services like Resy, as well as local appointments.

It’s worth noting that Google might just be using AI Mode as a testing ground for AI features before adding them to its main search product. “AI Mode is where we’ll first bring Gemini’s frontier capabilities, and it’s also a glimpse of what’s to come,” the company wrote in an event recap post. “As we get feedback, we’ll graduate many features and capabilities from AI Mode right into the core search experience in AI Overviews.” For example, Google plans to bring Deep Search, an offshoot of its Deep Research mode, to AI Mode later this year, to allow users to get a more comprehensive report on a complicated set of questions.

Another case in point: Search Live will debut in AI Mode this summer, allowing users to talk back and forth with Search about what they see in real time using their camera. “It’s like being on video chat with Google search” is how one Google executive phrased it. While Google Lens has long facilitated visual search, the combination of voice interactivity and visual context points toward a more immersive, multimodal search experience in the near future.

In addition, Google highlighted a new set of AI shopping features, including a Gemini-powered virtual try-on tool that can generate a realistic image of you wearing a specific item, using a photo you’ve uploaded. Google told Axios that it worked with Levi’s, Abercrombie, Staud, Pistola Denim, Quince and other brands for the try-on tech.

Google first launched its virtual try-on tool in 2022, but at the time, it relied on AI-generated models representing a range of body types. Shoppers would choose the model that most closely resembled their own body shape, rather than uploading a personal photo.

This level of personalization points to Google’s key differentiation strategy for standing out against rival AI assistants. The “personal context” that users accumulate across Google services, from Gmail to Google Calendar, will soon be used to personalize responses to reflect each person’s unique preferences and behaviors. For instance, Gemini can suggest tailored replies in Gmail by analyzing your previous emails, common word usage, and preferred greeting styles.

Obviously, the more Gemini knows about a user, the more useful and tailored it becomes. But that utility hinges on users giving Google deeper access to sensitive personal data, which heightens anxiety around surveillance, misuse, or unintended exposure. Even with Google’s assurances that this data won’t be used to train public models, skepticism remains. The promise of a “universal assistant” or “automated AI agent” that acts on your behalf raises the stakes for user trust even further.

AI-Led Google Glass Revival

The first generation of Google Glass debuted in 2013 and was quickly met with widespread criticism over its bulky design and privacy concerns, with its users pejoratively labeled “glassholes.” Has our culture collectively evolved enough for smart glasses to become a thing now? The early success of the Meta Ray-Ban glasses, which have sold over 2 million units since their launch in September 2023, has seemingly given Google the confidence to try again with mixed reality glasses.

This time, Google is re-entering the headset and smart glasses market through fashion-forward collaborations with the likes of Samsung, Warby Parker, and Gentle Monster. With Warby Parker and Gentle Monster, Google is creating Android XR smart glasses that combine style with functionality: lightweight, fashionable frames that incorporate open-ear speakers, in-lens displays, and cameras. The integration of Gemini allows for real-time assistance, including navigation, language translation, and contextual information overlays.

Meanwhile, the partnership with Samsung has led to Project Moohan, which Google unveiled as an XR headset running on the Android XR platform and powered by Gemini. This headset aims to provide immersive experiences with high-resolution displays and advanced spatial computing features, similar to Apple’s Vision Pro headset.

Underpinning both devices is Android XR, a dedicated operating system designed to support a range of mixed reality devices. Google said this platform is built to leverage Gemini’s growing capabilities to offer contextual and intelligent user interactions. These include real-time language translation, with on-the-fly subtitles appearing directly in the user’s field of view, or native AR navigation experiences that respond to the direction of a user’s head and gaze — both of which Google demoed on stage during the I/O Keynote.

For the new Google glasses, or any smart glasses, to gain mainstream traction, they must deliver clear, everyday value that goes beyond novelty. The original product failed largely due to a lack of compelling use cases and social acceptance. But the rise of AI, particularly Gemini’s contextual capabilities, creates new opportunities for hands-free assistance in navigation, translation, and visual search. Still, tech alone won’t make it work.

Despite being mocked at launch for sticking awkwardly out of people’s ears, Apple’s AirPods gradually overcame their form factor and gained cultural acceptance through a combination of functionality and subtle repositioning. What really changed the perception was behavioral normalization: as more people wore them in public, AirPods stopped looking strange and started looking expected.

Similarly, smart glasses, no matter how cool they look or how useful they are, won’t be accepted overnight. Like AirPods, they need to fade into daily life, not scream “I’m wearing a tech gadget!” But if they do prove to be the best device for interfacing with a multimodal AI agent, which is increasingly where digital interactions are being abstracted to, then there is no doubt that the cultural perception of smart glasses will eventually catch up.

AI-first wearable devices remain a frontier for AI companies to explore. News broke on Wednesday that OpenAI has acquired io, a hardware startup founded by former Apple design chief Jony Ive, for about $6.5 billion, signaling the ChatGPT maker’s strategic move into AI hardware. Naturally, all eyes are now on Apple to see how it responds to the escalating AI race, both in terms of model capabilities and potential integration with dedicated hardware devices.

Overall, Google’s I/O announcements offered a glimpse into a future where the path from inquiry to brand interaction is shorter, more conversational, and less reliant on traditional SEO or ad formats. To win in this environment, brands must invest in structured, trustworthy content, strengthen their off-site reputation, and be ready to adapt to a more proactive, AI-mediated discovery experience.
