Wednesday, June 25, 2025
How do we adapt to Query Fan-Out?

The old model we’ve worked to in SEO, “one query, one answer, one click”, is collapsing. In its place, a new kind of behavior is emerging: the Query Fan-Out.
Google’s AI Overviews and AI Mode are at the forefront of this shift.
Instead of returning a ranked list based on a single query, these systems generate dozens of related sub-queries in the background. Each sub-query explores a different facet of the user’s intent, pulls data from across the web, and blends the most relevant pieces into one synthesized answer.
We’re no longer optimizing for queries. We’re optimizing for intent networks.
And that means our SEO and content strategies need to evolve.
What is Query Fan-Out?
Query Fan-Out is the process by which a single search prompt, especially in an AI assistant or AI-powered search environment, is broken into a series of related sub-queries.
These aren’t just keyword variations. They represent distinct angles, user intents, and entity relationships tied to the original question.
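Nobody outside Google knows exactly how this expansion is implemented, but the concept is straightforward to prototype. Below is a minimal, hypothetical sketch; the function name, the prompt wording, and the use of the OpenAI SDK with gpt-4o-mini are my own stand-ins, not anything Google has published. It simply shows the idea of asking a language model to enumerate the facets hidden inside one query:

# Hypothetical sketch only: this is not Google's implementation. It just
# illustrates expanding one prompt into sub-queries covering distinct
# angles, intents, and entities. The OpenAI SDK and model name are
# stand-ins; any capable LLM would do.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_fan_out(prompt: str, n: int = 6) -> list[str]:
    """Ask an LLM for n sub-queries exploring different facets of the prompt."""
    instruction = (
        f"Break the search query '{prompt}' into {n} distinct sub-queries, "
        "each targeting a different angle, user intent, or entity. "
        "Return only a JSON array of strings."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": instruction}],
    )
    # Models sometimes wrap JSON in extra text; a real tool would validate this.
    return json.loads(response.choices[0].message.content)

print(generate_fan_out("family friendly all-inclusive hotels in mallorca"))

Even this rough version is useful for auditing whether your content covers the kinds of facets a model is likely to generate.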
Example Query Fan-Outs
To illustrate this, I’ve pulled some JSON from AI Mode showing the Query Fan-Outs generated for a few specific queries:
Core Prompt: family friendly all-inclusive hotels in mallorca
Fan-Out:
[
"top family friendly all-inclusive hotels Mallorca",
"best all-inclusive hotels Mallorca with kids water park",
"Mallorca family hotels with kids club and entertainment",
"all-inclusive hotels Mallorca with splash pool for children",
"all-inclusive hotels Mallorca with alcohol restrictions",
"best areas for family holidays Mallorca all-inclusive"
]
Core Prompt: affordable family ev
Fan-Out:
[
"most affordable family electric vehicles 2025",
"best value family EVs with good range 2025",
"top electric vehicles for families under $40,000",
"family-friendly EVs with spacious interior and cargo 2025",
"electric vehicles eligible for federal EV tax credit",
"affordable used EVs for families 2024"
]
Core Prompt: summery dress for casual work event
Fan-Out:
[
"summer dresses for casual work event",
"appropriate dresses for casual office summer party",
"best summer dress for work event 2025",
"midi dresses for casual work function",
"dress options for summer work event 2025"
]
Each sub-query pulls content from different sources. The AI assembles the most useful snippets, passages, tables, and charts into a single output, bypassing traditional ranked results.
This isn’t science fiction. It’s happening now in AI Overviews, and it’s likely to become the default search experience once monetization and UX friction are solved.
How consistent are these responses?
The answer is not very.
I've run several different pulls of AI Mode responses for similar queries, and the overlap between the responses varied significantly.
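If you want to put a rough number on that drift, one option is a simple set-overlap measure across pulls. The sketch below uses Jaccard similarity on the raw sub-query strings; the two example pulls are shortened placeholders, not real exports:

# Rough sketch: measure how similar two fan-out pulls are using Jaccard
# overlap on normalised sub-queries. Placeholder data for illustration only.
def jaccard(pull_a: list[str], pull_b: list[str]) -> float:
    """Share of sub-queries common to both pulls (1.0 = identical sets)."""
    a = {q.strip().lower() for q in pull_a}
    b = {q.strip().lower() for q in pull_b}
    return len(a & b) / len(a | b) if a | b else 1.0

# Example: two hypothetical pulls for the same prompt.
pull_1 = ["top family friendly all-inclusive hotels Mallorca",
          "Mallorca family hotels with kids club and entertainment"]
pull_2 = ["top family friendly all-inclusive hotels Mallorca",
          "best areas for family holidays Mallorca all-inclusive"]

print(f"Overlap: {jaccard(pull_1, pull_2):.0%}")  # 33% for these lists

Exact string matching undercounts near-duplicates (paraphrased sub-queries count as different), so treat the score as a lower bound on the real overlap.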

Why does this change how we think about SEO?
For years, SEO has been driven by simplification.
Keyword → Content → Ranking
But the Query Fan-Out radically changes that model. A single prompt becomes a swarm of related questions, and the AI decides which bits of content answer which parts.
This means:
- Ranking for a keyword matters less than being the best passage for a concept.
- Pages matter less than sections, and how well they answer micro-intents.
- The content that ranks may not even appear in a traditional search result — it may surface as part of an LLM-generated response.
If your content only answers one variation of a query, you’ll likely miss out.
Passage indexing
Passage indexing (also referred to as passage ranking) is a feature in Google Search where individual sections or passages of a page can be ranked independently of the rest of the page.
Traditionally, Google ranked entire pages based on how well the full document matched a query.
With passage indexing, Google can now find a highly relevant paragraph buried deep inside a long page and surface that specific passage in search results, even if the rest of the page isn’t very focused on the topic.
Why did Google introduce it?
Google announced passage indexing in late 2020 and rolled it out in early 2021 to improve results for very specific, niche, or long-tail queries. These are often:
- Questions buried within big articles
- Long or poorly structured content
- Pages that don’t target one topic
How it works
- Google still indexes the entire page.
- During ranking, its systems identify distinct passages within the content.
- Then it scores each passage for relevance to the query.
- If one passage is a strong match, even if the rest of the page isn’t, the page can rank because of that one part.
Think of it like this:
- Old model: "This page is relevant overall."
- New model: "This specific part of the page is relevant — let’s rank the page based on that."
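As a mental model only, here’s a toy sketch of that “score the parts, let the best part carry the page” idea. The word-overlap scoring is a deliberately crude stand-in for whatever relevance systems Google actually uses, and the example page text is invented:

# Toy illustration of passage-level scoring: split a page into passages,
# score each against the query, and let the best passage represent the page.
# Word overlap is a crude stand-in for real relevance models.
def split_passages(page_text: str) -> list[str]:
    """Treat blank-line-separated blocks as passages."""
    return [p.strip() for p in page_text.split("\n\n") if p.strip()]

def score(passage: str, query: str) -> float:
    """Fraction of query words that appear in the passage."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words)

def best_passage(page_text: str, query: str) -> tuple[float, str]:
    """Return the highest-scoring passage and its score."""
    return max((score(p, query), p) for p in split_passages(page_text))

page = """Our guide covers everything about Mallorca holidays.

The best all-inclusive family hotels in Mallorca include kids clubs,
splash pools and evening entertainment.

Getting there: flights run daily from most UK airports."""

top_score, passage = best_passage(page, "family all-inclusive hotels Mallorca kids club")
print(top_score, passage)  # the middle passage wins, so it carries the page

The takeaway from the sketch is the same as the model above: a page with one highly relevant section can surface for a query the rest of the page barely touches.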
This shouldn’t be confused with:
- Featured snippets: These are SCRBs (Special Content Result Blocks) that are displayed at the top of SERPs, often answering a question. While both rely on sections of content, featured snippets are display features, not ranking mechanics.
- Passage indexing ≠ indexing individual passages: Google still indexes whole pages. It doesn’t store passages separately; it just evaluates them individually when ranking.