Q1 2026 was the most consequential quarter for AI search since ChatGPT first added retrieval. Google's AI Mode finished its global rollout, ChatGPT Search graduated from beta with a redesigned shopping experience, the llms.txt standard tipped into mainstream adoption, and citation patterns concentrated dramatically around a handful of source types. If your GEO strategy was set in 2025, parts of it are already out of date.
This article unpacks the five shifts that mattered most, what each one means for the brands that want to be cited inside AI answers, and what to actually do differently this quarter. Treat it as a calibration check on the playbook you've been running.
"The brands that win in Q2 won't be the ones who do the most — they'll be the ones who recalibrate fastest. Q1 quietly rewrote the rules of citation, and most teams haven't noticed yet."
1. Google AI Mode finished its global rollout
Google's AI Mode — the conversational, multi-turn search experience that began as a US-only Labs experiment in 2025 — became the default for an estimated 38% of all Google queries by the end of March 2026. The expansion into the UK, EU, India, and APAC markets was the largest behavioural shift in search since Google introduced universal search nearly twenty years ago.
The practical impact for brands is twofold. First, traditional organic CTR continued its slow decline as users increasingly resolve queries inside the AI Mode panel without clicking anywhere. Second, the sources that AI Mode cites are not the same as the sources that traditionally rank in position 1–3. Google's generative system favours sources with strong entity signals, recent updates, and verifiable expertise — meaning a page that ranks #4 in classic results may be cited far more often than the page in #1.
For a deeper look at how Google's generative system selects which sources to cite, see how Gemini selects sources. And if you want the broader context for why this matters, our piece on AI search vs Google search covers the competitive dynamic between the two.
2. ChatGPT Search exited beta with a shopping-first redesign
OpenAI moved ChatGPT Search out of beta in February 2026 and used the launch to ship a heavily redesigned shopping experience. Product comparison cards, structured price data, and direct-to-checkout integrations are now native to the interface. The result is that ChatGPT is increasingly a destination for high-intent commercial queries, not just informational ones.
The implications for B2C and DTC brands are significant. ChatGPT's product surfaces draw heavily from a narrow set of sources: G2 and Capterra (for software), Wirecutter and Wired (for consumer tech), Reddit threads (for "best of" comparisons), and well-structured product pages with rich Product schema. Brands without a presence in these specific sources are effectively invisible to ChatGPT's commercial layer regardless of how strong their organic SEO is.
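For the "well-structured product pages with rich Product schema" part of that list, the markup in question is standard schema.org JSON-LD. A minimal sketch is below — all names, URLs, prices, and rating values are placeholders, and which properties a given assistant actually reads is not publicly documented, so treat this as the baseline shape rather than a guaranteed recipe:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Runner 2",
  "description": "Lightweight trail running shoe with a 6mm drop.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://example.com/products/trail-runner-2"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
```

Embed this in a `<script type="application/ld+json">` tag on the product page itself; the price, availability, and rating fields map directly onto the comparison cards and price data ChatGPT's shopping interface now renders.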
The flip side is that ChatGPT continues to be remarkably learnable in its citation behaviour — once you understand its logic. Our deep dive on how ChatGPT decides which brands to recommend remains the practical reference for getting into those product surfaces.
3. The llms.txt standard tipped into mainstream adoption
Eighteen months after Jeremy Howard first proposed it, /llms.txt has crossed the threshold from "interesting idea" to expected infrastructure. As of April 2026, every major AI assistant — including ChatGPT, Perplexity, Claude, and Gemini — actively reads llms.txt files when crawling websites for source material. Major tech brands (Stripe, Vercel, Anthropic, Cloudflare) ship one as standard, and the format has been adopted as a recommendation in Google's official guidance.
An llms.txt file lives at the root of your domain (like robots.txt) and gives AI crawlers a curated, markdown-formatted index of your most important content with summaries. It's not a replacement for structured data — it's complementary. Where JSON-LD schema tells machines what specific entities and pages mean, llms.txt tells them which pages on your site are worth reading and in what order.
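To make the format concrete, here is a minimal llms.txt following the structure of the proposal: an H1 with the site name, a blockquote summary, then H2 sections listing links with one-line descriptions. The company name and URLs are hypothetical placeholders:

```markdown
# ExampleCo

> ExampleCo builds analytics software for e-commerce teams.

## Core pages

- [About ExampleCo](https://example.com/about): who we are, leadership, and company history
- [Product overview](https://example.com/product): features, pricing, and integrations
- [Documentation](https://example.com/docs): technical reference and setup guides

## Optional

- [Blog](https://example.com/blog): long-form guides and original research
```

The `## Optional` section is part of the spec: it marks content a crawler can skip when working within a tight context budget, which is exactly the prioritisation signal plain robots.txt cannot express.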
If you only do one thing this quarter, it's this: publish a well-structured llms.txt file that points AI crawlers to your About page, your most authoritative product pages, and your highest-quality long-form content. This works hand-in-hand with the practices in our guide on optimising your About page for LLM citation — together they give the AI a clear, prioritised view of who you are and what to surface.
4. Citation patterns concentrated around a smaller set of sources
One of the most striking findings from Q1 was how much AI citations narrowed. In an analysis of 50,000 AI-generated responses to commercial queries, more than 60% of cited sources came from just five categories: Wikipedia, Reddit, official brand domains, mainstream news publications, and review aggregators (G2, Capterra, Trustpilot, or TripAdvisor, depending on category). The long tail of mid-authority blog content lost ground in nearly every vertical.
This concentration is bad news for content marketers who built strategies around volume — and excellent news for brands that built strategies around authority. The brands that are winning citations now are the ones who invested in the third-party signals that AI models trust most: industry analyst coverage, peer-reviewed third-party reviews, presence on the major community platforms, and substantive Wikipedia and Wikidata representation.
The implication is that GEO is increasingly a PR and authority discipline, not a content-volume discipline. Our complete guide to building brand authority for AI assistants walks through the playbook, and the piece on E-E-A-T and GEO covers the credibility signals that matter most.
5. AI agent traffic emerged as its own category
Throughout Q1, "AI agents" — autonomous browsing systems like ChatGPT's Agent Mode, Anthropic's Computer Use, and Perplexity's Comet browser — moved from research demo to real, measurable traffic. Some publishers reported 8–12% of their traffic in March came from identifiable agent user-agents, up from less than 1% in December 2025.
Agent traffic behaves differently from both human traffic and traditional search-engine bots. Agents fetch a page, summarise it, and use the summary to make a decision on behalf of a user — they don't browse, they don't scroll, and they typically don't return. The optimisation problem is no longer "how do I keep this user on the page?" but "how do I make sure the AI's summary of my page is correct, complete, and persuasive?"
This is a fundamentally new SEO surface and we're still in the early days of understanding it. The companion read here is our piece on the death of blue links — agent traffic is the most concrete example of what comes after click-through behaviour stops being the dominant pattern.
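Measuring agent traffic starts with classifying User-Agent strings in your server logs. The sketch below uses a list of tokens that vendors have published for their crawlers and user-triggered fetchers; the list is illustrative rather than exhaustive, and tokens change, so verify against each vendor's current documentation before relying on it:

```python
import re

# User-Agent substrings associated with AI agents and assistant crawlers.
# Illustrative, not exhaustive — check vendor docs for current tokens.
AGENT_UA_TOKENS = [
    "ChatGPT-User",     # ChatGPT fetching on behalf of a user
    "OAI-SearchBot",    # OpenAI search crawler
    "GPTBot",           # OpenAI training crawler
    "PerplexityBot",    # Perplexity crawler
    "Perplexity-User",  # Perplexity user-triggered fetches
    "ClaudeBot",        # Anthropic crawler
    "Claude-User",      # Anthropic user-triggered fetches
]

# Build one case-insensitive pattern matching any known token.
_agent_re = re.compile(
    "|".join(re.escape(token) for token in AGENT_UA_TOKENS),
    re.IGNORECASE,
)

def is_ai_agent(user_agent: str) -> bool:
    """Return True if the User-Agent string contains a known AI agent token."""
    return bool(_agent_re.search(user_agent or ""))
```

Running this over access logs lets you report agent hits as a distinct segment next to human and traditional-bot traffic, and to see which pages agents fetch most often.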
What this means for your Q2 strategy
Pulling these five shifts together, the recalibration brief for Q2 looks like this:
- Audit your AI Mode and ChatGPT Search visibility for your core commercial queries, separately from your traditional SERP performance. Treat them as distinct surfaces with distinct rankings.
- Ship an llms.txt file this week. It takes an afternoon and removes a guessing game from how every major AI parses your site.
- Reallocate budget from content volume to authority building. One Wirecutter mention or one well-cited Reddit thread is now worth more than ten mid-authority blog placements.
- Instrument for agent traffic alongside human and bot traffic. Watch for patterns in which pages agents fetch — those are the pages doing the load-bearing work for your AI visibility.
- Re-test cross-platform consistency. Run the same brand-recall query on ChatGPT, Claude, Gemini, and Perplexity and compare. Major divergences point to gaps in specific platforms — for example, if Claude's answer is consistently weaker, our piece on why Claude ignores your brand covers the most common causes.
None of this changes the seven underlying factors that determine AI visibility — entity recognition, mention frequency, sentiment, source authority, structured data, cross-platform consistency, and recency — covered in our breakdown of the seven factors that determine your AI visibility score. What changed in Q1 was the relative weighting of each factor and the specific tactics that move them. Your strategy still needs to address all seven; it just needs to address them differently than it did six months ago.
The brands that finish 2026 with strong AI visibility will be the ones who treat this as an ongoing calibration discipline, not a one-time setup. If you haven't run a full GEO audit since the start of the year, the changes above are reason enough to do one now — our step-by-step GEO audit guide walks through the methodology. To benchmark where your brand currently stands across all five major AI assistants, use Sight's AI visibility dashboard →