On January 22, 2026, Google expanded its Personal Intelligence feature to AI Mode in Search, letting Google AI Pro and Ultra subscribers optionally connect Gmail and Google Photos for personalized answers. The opt‑in feature is rolling out in English to U.S. users via Search Labs and paid tiers.
This article aggregates reporting from four news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
Google's move to wire its Gemini 3 models directly into Gmail and Photos via Personal Intelligence is a major step in turning general-purpose LLMs into deeply embedded personal agents. Unlike generic chatbots, this setup gives the model structured access to a user's life (trips, purchases, relationships, habits) inside an interface people already use constantly: Search. That's strategically powerful, because it converts Google's data moats into day-to-day AI utility rather than just better ad targeting.

For the broader race to AGI, this is less about raw model capability and more about deployment architecture. Google is effectively prototyping what a "context-rich AI assistant" at scale looks like, including the privacy boundaries, opt-in flows, and error handling it will require. If it works and users accept the tradeoffs, competitors will be pressured to offer similarly deep integrations, accelerating a shift from stateless chat to persistent, personalized AI agents. It also gives Google a real differentiator versus OpenAI-plus-Microsoft: a single vendor that owns both the model and most of the personal data exhaust.

The flip side is that this kind of intimate integration raises the stakes on alignment and security mistakes. When your search AI can see your inbox and photo history, hallucinations or data leaks become far more consequential than a bad web answer.
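To make the "opt-in, context-rich" idea concrete, here is a minimal sketch of how a consent-gated personal-context layer could sit between a search query and the model. Everything in it is an assumption for illustration: the `ConsentRegistry`, the stubbed Gmail and Photos fetchers, and the prompt assembly are hypothetical and do not reflect Google's actual APIs or architecture.

```python
# Hypothetical sketch of a consent-gated personal-context layer.
# None of these classes correspond to a real Google API; they only
# illustrate the opt-in flow described in the article.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ConsentRegistry:
    """Tracks which personal data sources a user has explicitly opted into."""
    granted: set[str] = field(default_factory=set)

    def grant(self, source: str) -> None:
        self.granted.add(source)

    def revoke(self, source: str) -> None:
        self.granted.discard(source)

    def allows(self, source: str) -> bool:
        return source in self.granted


# Stubbed fetchers standing in for scoped connectors to Gmail / Photos.
def fetch_gmail_context(query: str) -> list[str]:
    return ["Flight confirmation: SFO -> JFK, Mar 14 (from inbox)"]


def fetch_photos_context(query: str) -> list[str]:
    return ["Album: 'NYC trip 2025' (12 photos)"]


FETCHERS: dict[str, Callable[[str], list[str]]] = {
    "gmail": fetch_gmail_context,
    "photos": fetch_photos_context,
}


def build_prompt(query: str, consent: ConsentRegistry) -> str:
    """Assemble the model prompt, including only sources the user consented to."""
    context: list[str] = []
    for source, fetch in FETCHERS.items():
        if consent.allows(source):          # hard gate: no consent, no fetch
            context.extend(fetch(query))
    context_block = "\n".join(f"- {item}" for item in context) or "- (no personal context)"
    return f"Personal context:\n{context_block}\n\nUser query: {query}"


if __name__ == "__main__":
    consent = ConsentRegistry()
    consent.grant("gmail")                  # user opted into Gmail only
    print(build_prompt("What time does my New York flight leave?", consent))
```

The design point the sketch tries to capture, under those assumptions, is per-source opt-in enforced at retrieval time rather than at model-call time: revoking a connector stops that data from ever entering the prompt, which is what makes consent more than a checkbox.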

