SEO’s new battleground: Winning the consensus layer

You could be ranking in Position 1 and still be completely invisible.

I know that sounds counterintuitive. But here’s what’s actually happening:

A potential customer opens ChatGPT or Perplexity and asks, “What’s the best [tool/agency/platform] for [your category]?” Your competitor gets mentioned. You don’t. Your No. 1 ranking did absolutely nothing to help you.

This is the new SEO reality, and it’s catching many smart marketers off guard.

LLMs synthesize consensus across multiple sources, rather than relying on a single source. This means you need corroborating mentions distributed across the web. The game has shifted from ranking to consensus, and if you don’t understand that difference, you’re already losing ground.

Let me break down what’s actually happening and, more importantly, what you can do about it.

From rankings to consensus: What changed and why

Traditional SEO had a clear logic: rank high, get clicks, drive traffic. In this retrieval-based system, Google found pages and users chose which ones to visit.

AI-driven search doesn’t work that way. Systems like Google’s AI Overviews, ChatGPT, and Perplexity are now constructing answers. They pull from dozens of sources, identify which claims appear consistently across credible publishers, and synthesize a single response. 

The data backs up just how significant this shift is: organic CTRs for queries featuring AI Overviews have dropped 61% since mid-2024. Even on queries without AI Overviews, organic CTRs fell 41%. Users are simply clicking less, everywhere.

The technical engine behind this is retrieval-augmented generation (RAG). The AI retrieves content from across the web, gathers potentially dozens of sources, identifies the claims that repeat most consistently across credible publishers, and generates a response based on that consensus.
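To make that corroboration step concrete, here is a toy Python sketch (emphatically not any engine's real code) of a consensus filter that keeps only claims supported by several independent domains:

```python
# Toy consensus filter: a claim survives only if enough distinct,
# independent domains corroborate it. Illustrative only.
def consensus_claims(retrieved_docs: list[dict], min_domains: int = 3) -> list[str]:
    support: dict[str, set[str]] = {}
    for doc in retrieved_docs:
        for claim in doc["claims"]:
            # Count distinct domains, so ten mentions on one site still count once.
            support.setdefault(claim, set()).add(doc["domain"])
    return [claim for claim, domains in support.items() if len(domains) >= min_domains]

docs = [
    {"domain": "industry-blog.com", "claims": ["Acme is a leading CRM for SMBs"]},
    {"domain": "review-site.com", "claims": ["Acme is a leading CRM for SMBs"]},
    {"domain": "reddit.com", "claims": ["Acme is a leading CRM for SMBs"]},
    {"domain": "lone-site.com", "claims": ["Acme invented the CRM"]},  # uncorroborated outlier
]
print(consensus_claims(docs))  # ['Acme is a leading CRM for SMBs']
```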

Your goal isn’t just to publish a great page. It’s to be one of those sources. Repeatedly.

What the consensus layer actually is

Think of the consensus layer as the degree to which multiple AI systems produce consistent, repeatable outputs about your brand. It’s about pattern recognition at scale.

When AI systems encounter your brand described the same way across multiple credible sources, in the same category, with the same expertise, and with the same problems you solve, they build confidence. When they don’t see that pattern? You become a statistical outlier, and outliers get filtered out.

This happens because AI systems are engineered to prevent hallucinations. Their primary defense is corroboration: if multiple independent sources say the same thing, the AI assigns higher confidence to that claim. If only one source says it, the AI can become cautious or ignore it entirely.

This creates a rule most marketers haven’t fully internalized yet: isolated authority isn’t enough. You need distributed credibility.

I’ve seen this firsthand. A client ranking first for a competitive keyword, with solid traffic and strong domain authority, was invisible across ChatGPT. Why? Because that page existed in isolation. No corroboration, no distributed mentions, no external validation. 

As Will Scott wrote: “Brands aren’t losing visibility because they dropped from position three to seven. They’re losing it because they were never cited in the AI answer at all.”

Dig deeper: The infinite tail: When search demand moves beyond keywords

The signals that actually build consensus

So what signals do AI systems actually use? Here’s where to focus your energy.

Traditional authority is table stakes, not a finish line

Backlinks, domain authority, and topical depth remain foundational. But they’re no longer sufficient on their own. They get you in the game; consensus is what wins it.

Unlinked brand mentions matter more than most marketers realize

AI systems scan the web for brand references, even when those mentions aren’t linked. Unlinked mentions are growing in importance as signals for both traditional search and AI visibility. A mention in an industry publication with no link is still a consensus signal.

Nearly 9 out of 10 webpages cited by ChatGPT appear outside the top 20 organic results for the same queries, per a Semrush study. This tells you everything you need to know about how different this game is.

Publisher diversity signals broader credibility

Being mentioned repeatedly on the same domain doesn’t build consensus. Being mentioned across a range of credible, independent publishers does.

Diversity tells AI systems your authority isn’t contained to one corner of the web. It’s recognized broadly across your industry.

Community platforms are consensus gold

Reddit, Quora, and niche forums are becoming major consensus signals. AI systems increasingly pull from community discussions because they represent real user opinions and experiences. 

With Reddit dominating the SERPs, positive brand mentions in relevant subreddits contribute meaningfully to how AI systems perceive you. You can’t fake your way into genuine community trust; you have to earn it.

Entity clarity makes retrieval easier

Search engines use knowledge graphs to understand entities and how they relate to each other. If your brand is inconsistently described across platforms or your category is ambiguous, AI systems struggle to incorporate you into their answers. 

Structured data, schema markup, and JSON-LD are critical here. Google has explicitly stated that “structured data is critical for modern search engines.” The clearer your entity profile, the easier it is for AI to retrieve and cite you.
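To ground that, here is a minimal sketch of entity markup, built in Python and emitted as JSON-LD; every value is a hypothetical placeholder you would replace with your own brand facts:

```python
# Minimal schema.org Organization markup; all values are placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",  # use the exact same brand name everywhere
    "url": "https://www.example.com",
    "description": "B2B marketing analytics platform for mid-size teams.",
    "sameAs": [
        # Corroborating profiles help engines resolve the entity unambiguously.
        "https://www.linkedin.com/company/acme-analytics",
        "https://www.crunchbase.com/organization/acme-analytics",
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on your site.
print(json.dumps(organization, indent=2))
```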

How to actually build consensus

Alright, let’s get tactical. Before you start building, you need to know where you stand.

Start with an LLM audit

Open ChatGPT, Perplexity, Gemini, and Google AI Overviews, and start asking questions the way your customers would. 

  • “What’s the best [tool/service] for [problem you solve]?” 
  • “Who are the leading [your category] providers?” 
  • “What do people say about [your brand name]?”

Pay attention to three things: 

  • Is your brand mentioned at all? 
  • If it is, is the information accurate and up to date? 
  • How are you being described relative to competitors? 

You may find outdated information, missing context, or, worse, a competitor owning the narrative in your category entirely.

This audit becomes your baseline. It tells you what gaps to close, what misinformation to correct, and where your consensus footprint is weakest. Only once you know that should you start building.
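If you want to rerun that baseline on a schedule, a small script helps. Here is a minimal sketch using the OpenAI Python SDK; the model name, brand, and prompts are placeholder assumptions, and each engine you audit needs its own client:

```python
# Minimal LLM audit sketch: run customer-style prompts and log whether the
# brand is mentioned. Extend with Perplexity/Gemini clients as needed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
BRAND = "Acme Analytics"  # placeholder

audit_prompts = [
    "What's the best marketing analytics tool for mid-size B2B teams?",
    "Who are the leading marketing analytics providers?",
    f"What do people say about {BRAND}?",
]

for prompt in audit_prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    print(f"{prompt!r} -> brand mentioned: {BRAND.lower() in answer.lower()}")
```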

Establish your owned media foundation

Your site needs to be technically sound and semantically clear. Use structured data. Establish explicit entity definitions: who you are, what you do, and what problems you solve. Reinforce those same entities and relationships across multiple pages within your site.

Topic clusters (pillar pages supported by related subtopic content) create semantic reinforcement that signals depth and expertise. Without a strong foundation, nothing else sticks.

Treat earned media as consensus amplification

Press coverage, guest posts, podcast appearances, and expert citations distribute your authority across the web. More than links, digital PR is now about narrative control. 

One placement won’t move the needle. A sustained, coordinated presence across trusted publications will. Monitor your brand-to-links ratio; pursuing unlinked mentions alongside traditional link building is now the balanced strategy.

Publish original research

This is the highest-leverage consensus tactic most brands are underinvesting in. When you create genuinely novel data (an industry benchmark, a proprietary survey, original research), other publishers reference it naturally, journalists cite it, and AI systems incorporate it into answers. Establish yourself as the source for benchmark data in your niche, and you’ll earn citations for years.

Invest in expert-led content

AI systems are trained on vast amounts of text, including articles, research, and interviews. When your team members are consistently positioned as recognized experts, quoted in articles, cited in reports, and contributing bylined pieces, they become recognized entities that AI systems trust. Optimize author profiles with structured data, consistent bylines, and entity markup to reinforce this.

Participate genuinely in communities

This doesn’t mean dropping links in Reddit threads. It means answering questions, contributing knowledge, and building a reputation where your audience already hangs out. 

When users recommend your brand organically because they find it genuinely valuable, that’s your strongest consensus signal.

Dig deeper: Why surface-level SEO tactics won’t build lasting AI search visibility

Measuring what actually matters now

Traditional rankings tell you where you stand in search results. They don’t tell you whether AI systems are citing you. You need new metrics, and as more SEOs are recognizing, success metrics are shifting from clicks and traffic to visibility and share of voice.

Start by systematically testing high-value queries across Google AI Overviews, ChatGPT, Perplexity, and Gemini. Note when your brand appears, how it’s described, and which sources get cited alongside you. 

Track share of voice in AI responses: how often your brand gets mentioned relative to competitors in AI-generated answers. If competitors are consistently appearing and you’re not, you’re losing the consensus battle regardless of how your rankings look.

Also monitor cross-domain mention density (how many unique domains reference your brand) and entity co-occurrence (how often your brand appears alongside relevant topics, competitors, and concepts). These give you a real picture of your consensus footprint and where the gaps are.
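Here is a rough sketch of how those three metrics might be computed from a hand-collected log of AI answers and mention URLs; the data shapes and sample values are illustrative assumptions, not an industry standard:

```python
# Toy metrics over a manually collected log; all data is illustrative.
from urllib.parse import urlparse

ai_answers = [
    "Acme and CompetitorX are popular choices for lead tracking...",
    "Most small teams compare CompetitorX and CompetitorY...",
]
mention_urls = [
    "https://review-site.com/best-crm",
    "https://industry-blog.com/acme-case-study",
    "https://review-site.com/acme-review",
]
brands = ["Acme", "CompetitorX", "CompetitorY"]

# Share of voice: fraction of AI answers mentioning each brand.
share_of_voice = {
    brand: sum(brand.lower() in answer.lower() for answer in ai_answers) / len(ai_answers)
    for brand in brands
}

# Cross-domain mention density: unique domains referencing your brand.
unique_domains = {urlparse(url).netloc for url in mention_urls}

# Entity co-occurrence: answers mentioning your brand alongside a key topic.
topic = "lead tracking"
co_occurrence = sum(
    "acme" in answer.lower() and topic in answer.lower() for answer in ai_answers
)

print(share_of_voice, len(unique_domains), co_occurrence)
```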

The new SEO playbook

The brands winning in AI-driven search aren’t necessarily the ones with the best content or the highest domain authority. They’re the ones building distributed credibility, authority that appears consistently across owned media, earned media, and community platforms.

As Google’s Danny Sullivan said, “Good SEO is good GEO.” The fundamentals haven’t disappeared, but they’re now table stakes, not differentiators. The new formula is: authority + consensus + distribution.

Integrate SEO, digital PR, and community engagement into one cohesive strategy. Build a distributed network of authority, mentions, citations, and community validation. It takes time to construct and is nearly impossible for competitors to dismantle overnight.

That’s the visibility moat worth building, and the clock is ticking.

Dig deeper: Content alone isn’t enough: Why SEO now requires distribution

Walmart: ChatGPT checkout converted 3x worse than website

Walmart said conversion rates for purchases made directly inside ChatGPT were three times lower than when users clicked through to its website.

Why we care. This suggests agentic commerce isn’t ready to replace traditional shopping. Sending users to owned environments still drives higher conversion rates.

The details. Starting in November, Walmart offered about 200,000 products through OpenAI’s Instant Checkout. Users could complete purchases inside ChatGPT without visiting Walmart’s site.

  • Daniel Danker, Walmart’s EVP of product and design, said those in-chat purchases converted at one-third the rate of click-out transactions.
  • He called the experience “unsatisfying” and confirmed Walmart is moving away from it.

Goodbye, Instant Checkout. Instant Checkout was designed to let users complete purchases directly inside ChatGPT without visiting a retailer’s website. However, earlier this month, OpenAI confirmed it was phasing out Instant Checkout in favor of app-based checkout handled by merchants.

What’s changing. Walmart will embed its own chatbot, Sparky, inside ChatGPT. Users will log into Walmart, sync carts across platforms, and complete purchases within Walmart’s system.

  • A similar integration is coming to Google Gemini next month.

The WIRED report. Why Walmart and OpenAI Are Shaking Up Their Agentic Shopping Deal (subscription required)

What patents reveal about the foundations of AI search

Every time a new large language model (LLM) drops or Google tweaks an AI Overview, the SEO industry loses its mind. We develop this weird collective amnesia, scrambling to optimize for features that were actually mapped out in patent offices 10 years ago. We’re so obsessed with the now and the next that we’ve stopped looking at the blueprints.

If you want to survive 2026, stop trying to be a futurist. Instead, be an archaeologist.

To actually deliver for our clients, we need a research framework that isn’t just reactive. It has to be a balance: Look back at the foundational patents to understand the rules, and look ahead to see how AI is finally being given the muscle to enforce them.

The archaeology of SEO

There’s a massive misconception that to understand AI search, you need to be a prompt engineer or read every new research paper from OpenAI. You don’t.  The logic governing today’s magic is often math that was written a decade ago.

We can’t talk about patent research without honoring the late, great Bill Slawski. For 20 years, he was the SEO industry’s archaeologist. While everyone else was arguing about keyword density, he was reading dry, technical filings to predict exactly where we’re standing right now.

History proves his method worked.

The algorithm isn’t magic. It’s math. When a new feature drops today, the engineering blueprints were likely filed between 2007 and 2016. If you want to win, go read the old stuff.

Dig deeper: The origins of SEO and what they mean for GEO and AIO

Strategy vs. mechanics: From ‘strings’ to ‘verified things’

Don’t get buried in buzzwords. Categorize your learning into two buckets: “strategy” or “mechanic.”

For years, the industry talked about moving from strings to things (entities). But in 2026, that’s just the baseline. We’ve moved from strings to verifiable things. An entity is worthless if the AI can’t prove it’s real.

Think of it like building a house:

  • Semantic SEO is the architecture: It’s the vision. It’s making sure the meaning of your site actually matches what the user is looking for.
  • Entity SEO is the bricklaying: It’s using distinct nouns to build that vision so a machine can parse it.
  • Verification is the mortgage: This is the part most people miss. It’s turning those entities into findable, provable facts connected to a verified human. If you aren’t connecting your content to a provable human expert, you’re just adding to the noise.

AEO vs. GEO: Let’s stop using these interchangeably

The industry often uses AEO and GEO synonymously, but they require different content structures and serve different objectives.

Answer engine optimization (AEO)

AEO is for the “direct answer.” Think Siri, Alexa, or that single snippet at the top of the page. It’s binary. It’s rooted in the 2006 fact repository patents.

You need “confidence anchors.” These are unnuanced, structured facts. The engine isn’t “thinking,” it’s fetching. If your fact isn’t provable and anchored to a verified source, the engine won’t risk a hallucination by citing you.

Generative engine optimization (GEO)

GEO is for the “synthesis.” This is Gemini or ChatGPT search explaining how something works. It was formally defined by researchers at Princeton and Georgia Tech in 2023.

You need information gain. These engines don’t just want a fact; they want to see how Concept A affects Concept B. They’re looking for relationships and unique perspectives.

In short, AEO is about being the fact. GEO is about being the authority that the AI trusts to explain those facts.

Dig deeper: SEO, GEO, or ASO? What to call the new era of brand visibility in AI [Research]

The trap of forward-projecting: Why the ‘basics’ are still the ‘floor’

There’s a danger in becoming an SEO time traveler. If you spend all your time in the patent archives or stress-testing GEO relationships, you might forget that the AI still has to reach your content.

You can have the most verified, E-E-A-T-heavy content in the world, but if your site’s technical health is a mess, the confidence anchors will never weigh in.

The persistence of technical debt

Basic SEO requirements haven’t changed. The tolerance for ignoring them has simply disappeared.

  • Crawl budget and efficiency: If your site is bloated with zombie pages or redirect loops, you’re wasting the crawler’s time. LLMs aren’t just looking for content. They’re looking for the cleanest path to a fact.
  • Core Web Vitals (CWV): More than a ranking factor, it’s a user-utility requirement. If your site doesn’t load instantly, the AI won’t recommend it as a source in a GEO overview.

The headless promise (and reality)

Many of the frustrating technical SEO issues we’ve fought for years — like bloated JavaScript and poor Largest Contentful Paint (LCP) — are finally being solved by headless/composable architectures. By decoupling the front end from the back end, we can deliver the raw, lightning-fast data that answer engines crave while maintaining a high-end experience for humans.

But headless isn’t a “get out of SEO jail free” card.  It solves the speed problem, but it introduces new risks around dynamic rendering and metadata delivery.

Whether you’re on a 20-year-old CMS or a cutting-edge headless build, today’s requirements are non-negotiable:

  • Clean URL structures: If the AI can’t deduce the hierarchy from the URL, you’ve already lost the semantic battle.
  • Internal linking (the nervous system): This is how you prove relationships between entities. If your internal linking is broken, your synthesis logic doesn’t exist.
  • Indexability: If the bot is blocked by a poorly configured robots.txt or a noindex tag left over from staging, the most brilliant “verified human” insights in the world are invisible.
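A quick smoke test for those last two failure modes takes a few lines of standard-library Python; the URLs are placeholders, and the meta check is deliberately crude:

```python
# Indexability smoke test: robots.txt blocking and leftover noindex.
import urllib.request
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder
PAGE = f"{SITE}/services/technical-seo"  # placeholder

# 1. Is the page blocked by robots.txt for a generic crawler?
robots = RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()
print("robots.txt allows crawling:", robots.can_fetch("*", PAGE))

# 2. Is there a leftover noindex, in the headers or in the HTML?
with urllib.request.urlopen(PAGE) as response:
    html = response.read().decode("utf-8", errors="replace").lower()
    header = (response.headers.get("X-Robots-Tag") or "").lower()

print("noindex in X-Robots-Tag:", "noindex" in header)
# Crude string check; a real audit would parse the meta robots tag properly.
print("noindex in HTML:", "noindex" in html)
```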

You don’t get to play in the frontier of AEO and GEO until you’ve mastered the floor of technical SEO. Don’t let the shiny new objects make you forget the shovel work.

Dig deeper: Thriving in AI search starts with SEO fundamentals

The SEO time traveler checklist

Phase 1: The archive

  • The Slawski deep dive: Stop reading the latest “AI is changing everything” blog posts for five minutes. Go back to the SEO by the Sea archives. Search for Slawski’s analysis of the Knowledge Graph or user context. You’ll see the 2026 roadmap hidden in plain sight.
  • The E-E-A-T math audit: Check your assets against Patent 2015/0331866. Are you actually providing the contribution metrics (such as verifiable reviews) that the patent specifically asks for?

Phase 2: The laboratory

  • The verification pivot: Audit your entities. Are they just names on a page? Link them to a verified LinkedIn profile or a Knowledge Panel. If it’s not verified, it’s not an entity; it’s just a string of text.
  • Schema stress testing: Don’t just use a plugin and walk away. Experiment with nesting. Try nesting a Person inside a Service as the provider. It works — I’ve seen it trigger rich results when nothing else did.
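For illustration, here is that nesting experiment sketched as JSON-LD built in Python; the person, service, and URLs are hypothetical:

```python
# A Person nested inside a Service as its provider (schema.org allows
# Person or Organization for the provider property). Values are placeholders.
import json

service = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Technical SEO Audits",
    "serviceType": "SEO consulting",
    "provider": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "SEO Consultant",
        "sameAs": "https://www.linkedin.com/in/janedoe",  # the verification link
    },
}
print(json.dumps(service, indent=2))
```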

Phase 3: The frontier

  • The confidence anchor audit: Look at your top pages. Does every topic have a clear definition? [Entity] is [attribute]. If you’re being vague, you’re invisible to AEO.
  • The synthesis test: This is a quick one. Paste your article into an LLM and ask it to explain the relationship between your two main topics using only your text. If it has to go to the web to find the answer, you haven’t built the relationship well enough for GEO.

The synthesis: Becoming the architect

The SEO time traveler isn’t looking back because they’re nostalgic. They’re looking back because they want the blueprint. When you realize AEO is just the modern enforcement of a 20-year-old patent and GEO is just the evolution of semantic relationships, the chaos of AI updates disappears.

Stop optimizing for strings. Start optimizing for verified facts. Give the engine a fact it can’t doubt, connected to a person it trusts, and a relationship it can’t ignore.

The future of search wasn’t written this morning — it was written years ago. You just have to be the one to actually build it.

Dig deeper: The future of SEO: Why optimization still matters, whatever you call it

References and further reading 

On generative engine optimization (GEO foundations)

  • The GEO framework: Aggarwal, V., et al. (2023). GEO: Generative Engine Optimization. Princeton University, Georgia Institute of Technology, and the Allen Institute for AI. The definitive study on how LLMs cite and prioritize authoritative sources. 
  • The Slawski legacy: Slawski, B. (Various). SEO by the Sea Archives. For historical context on Agent Rank, phrase-based indexing, and entity metrics.

Why customer personas help you win earlier in AI search

Buyers ask a question. You answer it clearly. That’s the premise behind the “They Ask, You Answer” (TAYA) framework, and it holds up in AI-driven discovery.

In theory, it’s simple. In practice, teams struggle to anchor their approach and get started. The result is predictable: generic questions that produce generic content.

That’s a problem, especially as AI shifts search behavior from short queries to more detailed, contextual questions. The difference comes down to the questions you choose to answer. And that’s where a simple concept makes a big difference: buyer personas.

The problem with generic questions

The generic question trap happens when marketing teams brainstorm content ideas and start with topics like:

  • What is CRM software?
  • What is marketing automation?
  • What is warehouse management?

These are reasonable questions, and odds are you and many of your competitors have already answered them somewhere. But they’re also questions no real buyer actually asks.

Real buyers ask questions that reflect their situation and their problem. Something more like this:

  • “What CRM should a 10-person sales team use?”
  • “Why are leads slipping through the cracks in our marketing?”
  • “Why is our warehouse picking speed so slow?”

The difference is subtle but important. The second set of questions includes a person and a problem. That context completely changes the quality of the content.

Why this matters more in AI-driven discovery

Instead of typing short keywords, buyers ask detailed, contextual questions:

  • “I run a 15-person marketing team, and we’re struggling to track leads properly. What should we do?”

The AI explains the problem, outlines solutions, and suggests vendors. In other words, the buyer is having a consultation with an AI.

If your content explains why a specific persona experiences a specific problem, you have a much better chance of shaping how that problem is understood in the first place.

This puts you into the conversation and consideration set earlier, making it more likely you’ll stay in as the user refines their thinking.

Consider this scenario. I’ll use myself as an example.

  • Marcus.
  • 50 years old.
  • Meeting some old friends in Birmingham, UK.
  • Looking for ideas of things to do for the day.

I start by asking a somewhat broad opening question:

  • “I’m looking for some ideas of things to do with friends in Birmingham on the weekend. I’m 50, and I have several male friends coming down to get together for a day. There will be some beers, no doubt, but we need some activities as well.”

Answers then include a bunch of top-level suggestions — bars, food, and activity-type bars. One of these suggestions is for an F1 gaming arcade. I like games, but not so much cars, so this leads my follow-up to dig in a bit more:

  • “Ah, we all like games. What about gaming arcades? What gaming arcades could you recommend?”

I get a bunch of recommendations, one of which is for a pinball arcade in Digbeth (a sub-area of Birmingham).

  • “Pinball Factory in Digbeth sounds fun. What else is there to do around there, food- and drinks-wise?”

I then get a set of responses that helps me narrow the list and formulate a perfect day and evening out for a group of old friends.

Being in the early part of the conversation lets you shape the dialogue and increases your chances of being part of the eventual solution.

Personas make TAYA far more precise

Personas are the tools that let you think like your customers and figure out the kinds of questions they ask long before they get to what you have to offer.

When you can identify a customer segment, you can dig into that persona, understand their problems and goals, and think like your target customer to generate content ideas that help them decide earlier.

Now, instead of writing content for a generic avatar, write for specific people. For example, instead of “Things to do in Birmingham?” you might write, “The best day out in Birmingham for a group of 50-year-old gamers.”

You’re still addressing the same underlying topic. But now the content speaks directly to a real person experiencing a real problem.

That shift usually leads to much more useful content. This helps you work your way into those conversations, rather than relying on the brutal battleground of commercial queries.

A simple way to uncover better questions

You don’t need a complicated persona framework to make this work. In most cases, a simple three-question exercise will uncover the kinds of problems your buyers are actually trying to solve. 

For each persona you serve, ask:

  • What are they responsible for? For example:
    • Hitting sales targets.
    • Generating marketing leads.
    • Running warehouse operations.
  • What problems make that responsibility difficult? Examples might include:
    • Missed sales targets.
    • Inefficient warehouse processes.
    • Poor lead tracking.
    • Slow picking speeds.
  • What would they ask Google or an AI assistant when that problem occurs?
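If it helps, the exercise can be run as data rather than as a document. A tiny sketch, with illustrative placeholder personas and problems:

```python
# Persona -> responsibility -> problems -> candidate questions.
personas = {
    "sales manager": {
        "responsible_for": "hitting sales targets",
        "problems": ["missed sales targets", "poor lead tracking"],
    },
    "warehouse manager": {
        "responsible_for": "running warehouse operations",
        "problems": ["slow picking speeds", "inefficient warehouse processes"],
    },
}

for role, persona in personas.items():
    for problem in persona["problems"]:
        # The kind of question a real buyer would put to Google or an AI assistant.
        print(f"As a {role}, how do I fix {problem}?")
```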

Now the questions start to look very different. Instead of broad category topics like: “What is CRM software?”

You start to see questions like:

  • “Why are leads slipping through the cracks in our CRM?”
  • “What CRM should a small sales team use?”
  • “Why is our warehouse picking speed so slow?”

Those questions reflect real situations experienced by real people — exactly where the best content opportunities exist.

‘They Ask, You Answer’ works better with personas

Now we revisit the big five topic areas from TAYA: cost, problems, comparisons, reviews, and best-of. These topics already give us a powerful structure for content.

But when they’re approached generically, they often lead to content that looks exactly like everyone else’s.

So you can go from the typical, generic kinds of questions:

  • “How much does CRM software cost?”
  • “What problems do warehouse systems have?”
  • “HubSpot vs. Salesforce”
  • “Best CRM systems”
  • “Salesforce review”

To questions that are more connected to the needs of our target audience:

  • “What does CRM cost for a 10-person sales team?”
  • “Why do my warehouse managers struggle with picking accuracy?”
  • “HubSpot vs. Salesforce for a small B2B marketing team”
  • “Best CRM for growing sales teams”
  • “Is Salesforce worth it for a mid-size sales organization?”

The topic hasn’t changed, but the question now reflects the buyer’s reality. This shift produces more useful content and aligns with how people interact with AI assistants.

Those questions include their role, company size, or situation:

  • “We’re a small marketing team struggling to track leads properly. What CRM should we use?”

If your content already answers these persona-driven questions, you increase the chances that your explanation becomes part of that conversation.

In other words, personas don’t replace They Ask, You Answer. They make it more precise, moving you from answering generic topics to answering the exact questions buyers ask when solving a real problem.

Persona-driven questions improve TAYA content for three simple reasons.

  • They mirror how buyers actually think: People rarely search for textbook definitions. They search for solutions to problems. Personas keep the content anchored in those problems.
  • They produce more useful content: When you know who the content is for, it naturally includes better examples, more practical advice, and clearer explanations. In other words, content that genuinely helps someone move forward.
  • They align with how AI explains problems: AI assistants increasingly start by explaining the problem before recommending a solution. Content that clearly describes why a specific persona experiences a specific challenge fits neatly into this pattern. This increases the chances that your explanation influences the AI’s response.

Start with the problem, not the product

One of the most common mistakes companies make with content marketing is starting with their product.

But buyers rarely start their journey there. They start with a problem.

Personas help keep your content anchored in the buyer’s world rather than your own product — remember, it’s about the customer, not you.

And that simple shift often makes the difference between content that merely exists and content that actually influences decisions.

Where you enter the conversation matters

“They Ask, You Answer” remains one of the most powerful frameworks available to marketers. But the effectiveness of the framework depends entirely on the quality of the questions you answer.

Personas help you turn vague topics into real problems and ask better questions. When your content speaks directly to those problems, buyers and AI systems are far more likely to trust your answers.

5 competitive gates hidden inside ‘rank and display’

If you’re a content strategist, you might feel this isn’t your territory. Keep reading, because it is. Everything you build feeds these five gates, and the decisions the algorithms make here determine whether the system recruits your content, trusts it enough to display it, and recommends it to the person who just asked for exactly what you sell.

The DSCRI infrastructure phase covers the first five gates: discovery through indexing. DSCRI is a sequence of absolute tests where the system either has your content or it doesn’t, and every failure degrades the content the competitive phase inherits.

The competitive phase, ARGDW (annotation through won), is a sequence of relative tests. Your content doesn’t just need to pass. It needs to beat the alternatives. A page that is perfectly indexed but poorly annotated can lose to a competitor whose content the system understands more confidently. 

A brand that is annotated but never recruited into the system’s knowledge structures can lose to one that appears in all three graphs. The infrastructure phase is absolute: pass, stall, or degrade. The competitive phase is Darwinian “survival of the fittest.”

The DSCRI infrastructure phase determines whether your content even gets this far. The ARGDW competitive phase determines whether assistive engines use it.

Until now, the industry has generally compressed these five distinct processes into two words: “rank and display.” That compression blurred several separate competitive mechanisms into one. Understanding and optimizing for all five will make all the difference in the world.

The competitive turn: Where absolute tests become relative ones

The transition from DSCRI to ARGDW is the most significant moment in the pipeline. I call it the competitive turn.

In the infrastructure phase, every gate is zero-sum: does the system have this content or not? Your competitors face the same test, and you both pass or fail. But the quality of what survives rendering and conversion fidelity creates differences that carry forward. 

The differentiation through the DSCRI infrastructure gates is raw material quality, pure and simple, and you have an advantage in the ARGDW phase when better raw material enters that competition.

At the competitive turn, the questions change. The system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?” 

Every gate from annotation forward is a comparison. Your confidence score matters only relative to the confidence scores of every other piece of content the system has collected on the same topic, for the same query, serving the same intent.

You’ve done everything within your power to get your content fully intact. From here, the engine puts you toe to toe with your competitors.

Multi-graph presence as structural advantage in ARGD(W)

The algorithmic trinity — search engines, knowledge graphs, and LLMs — operates across four of the five competitive gates: annotation, recruitment, grounding, and display. Won is the outcome produced by those four gates. Presence in all three graphs creates a compounding advantage across ARGD, and that vastly increases your chances of being the brand that wins.

The systems cross-reference across graphs constantly. An entity that exists in the entity graph with confirmed attributes, has supporting content in the document graph, and appears in the concept graph’s association patterns receives higher confidence at every downstream gate than an entity present in only one.

This is competitive math. If your competitor has document graph presence (they rank in search), but no entity graph presence (no knowledge panel, no structured entity data), and you have both, the system treats your content with higher confidence at grounding because it can verify your claims against structured facts. The competitor’s content can only be verified against other documents, which is a higher-fuzz verification path — more interpretation, more ambiguity, lower confidence.

For me, this is where the three-dimensional approach comes into its own, and single-graph thinking becomes a structural liability. “SEO” optimizes for the document graph. Entity optimization (structured data, knowledge panel, and entity home) optimizes for the entity graph. 

Consistent, well-structured copywriting across authoritative platforms optimizes for the concept graph. Most brands invest heavily in one (perhaps two) and ignore the others. The brands that win at the competitive gates are stronger than their competitors in all three at every gate in ARGD(W).

Annotation: The gate that decides what your content means across 24+ dimensions

Annotation is something I haven’t heard anyone else (other than Microsoft’s Fabrice Canel) talking about. And yet it’s very clearly the hinge of the entire pipeline. It sits at the boundary between the two phases: the last gate that applies absolute classification, and the first gate that feeds competitive selection. Everything upstream (in DSCRI) prepared the raw material. Everything downstream in ARGDW depends on how accurately the system can classify it.

At the indexing gate, the system stores your content in its proprietary format. Annotation is where the system reads what it stored and decides what it means. The classification operates across at least five categories comprising at least 24 dimensions.

Canel confirmed the principle and confirmed there are (a lot) more dimensions than the ones I’ve mapped. What follows is my reconstruction of the categories I can identify from observed behavior and educated guesses.

Canel confirmed the Annotation gate back in 2020 on my podcast as part of the Bing Series, in the episode “Bingbot: Discovering, Crawling, Extracting and Indexing.”

  • “We understand the internet, we provide the richness on top of HTML to a lot, lot, lot of features that are extracted, and we provide annotation in order that other teams are able to retrieve and display and make use of this data.”
  • “My job stops at writing to this database: writing useful, richly annotated information, and handing it off for the ranking team to do their job.”

So we know that annotation is a “thing,” and that all the other algorithms retrieve the chunks using those annotations.

Annotation classification runs across five types of specialist models operating simultaneously per niche: 

  • One for entity and identity resolution (core identity).
  • One for relationship extraction and intent routing (selection filters).
  • One for claim verification (confidence multipliers).
  • One for structural and dependency scoring (extraction quality).
  • One for temporal, geographic, and language filtering (gatekeepers). 

This five-model architecture is my reconstruction based on observed annotation patterns and confirmed principles. The annotation system is a panel of specialists, and the combined output becomes the scorecard every downstream gate uses to compare your content against your competitors.
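To make the panel-of-specialists idea tangible, here is a schematic sketch of that reconstruction; the specialist names, outputs, and numbers are illustrative stand-ins, not a real system:

```python
# Schematic five-specialist annotation panel producing a combined scorecard.
def annotate(chunk: str) -> dict:
    specialists = {
        "identity": lambda c: {"entity": "Acme Analytics", "salience": 0.9},
        "selection": lambda c: {"intent": "informational", "expertise": "practitioner"},
        "confidence": lambda c: {"corroboration_count": 4, "verifiability": 0.8},
        "extraction": lambda c: {"standalone_score": 0.7, "sufficient": True},
        "gatekeeper": lambda c: {"language": "en", "temporal_scope": "current"},
    }
    # Each specialist contributes its slice; the combined dict is the
    # scorecard every downstream gate compares against competitors.
    return {name: model(chunk) for name, model in specialists.items()}

print(annotate("Acme Analytics is a B2B marketing analytics platform."))
```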

Gatekeepers 

They determine whether the content enters specific competitive pools at all:

  • Temporal scope (is this current?).
  • Geographic scope (where does this apply?).
  • Language.
  • Entity resolution (which entity does this content belong to?). 

Fail a gatekeeper, and the content is excluded from entire query classes regardless of quality.

Core identity

This classifies the content’s substance: entities present, attributes, relationships between entities, and sentiment. 

For example, a page about “Jason Barnard” that the system classifies as being about a different Jason Barnard has perfect infrastructure and broken annotation. The content was there, and the system read it, but filed it in the wrong drawer.

Selection filters 

They add query routing: intent category, expertise level, claim structure, and actionability. 

For example, content classified as informational never surfaces for transactional queries, regardless of how well it performs on every other dimension.

Extraction quality

Think:

  • Sufficiency (does this chunk contain enough to be useful?)
  • Dependency (does it rely on other chunks to make sense?)
  • Standalone score (can it be extracted and still work?)
  • Entity salience (how central is the focus entity?)
  • Entity role (is the entity the subject, the object, or a peripheral mention?)

Weak chunks get discarded before competition begins.

Confidence multipliers 

These determine how much the system trusts its own classification: verifiability, provenance, corroboration count, specificity, evidence type, controversy level, consensus alignment, and more.

Two pieces of content can be classified identically on every other dimension and still receive wildly different confidence scores based on how verifiable and corroborated their claims are.

An important aside on confidence

Confidence is a multiplier that determines whether systems have the “courage” to use a piece of content for anything.

Once upon a time, content was king. Then, a few years ago, context took over in many people’s minds.

Confidence is the single most important factor in SEO and AAO, and always has been — we just didn’t see it.

To retain their users, search and assistive engines must provide the most helpful results possible. Give them a piece of content that, from a content and context perspective, appears to be super relevant and helpful, but they have absolutely no confidence in it for one reason or another, and they likely will not use it for fear of providing a terrible user experience.

What happens when annotation fails you (silently)

Annotation failures are the most dangerous failures in the pipeline because they are invisible. The content is indexed. But if the system misclassifies it, every competitive decision downstream inherits that misclassification.

I’ve watched this pattern repeatedly in our database: a page is indexed, it appears in search results, and yet the entity still gets misrepresented in AI responses.

Imagine this: A passage/chunk from your website is in the index, but confidence has degraded through the DSCRI part of the pipeline, and the annotation stage has received a degraded version. 

The structural issues at the rendering and indexing gates didn’t prevent indexing, but they produced a degraded version of the original content. That degradation makes the annotation less accurate, less complete, and less confident. That annotative weakness will propagate through every competitive gate that follows in ARGDW.

When your content is included in grounding or display but is suboptimally annotated, it underperforms. You can always improve annotation.

Measuring annotation quality in ARGDW

Annotation is the most important gate in the AI engine pipeline, but unfortunately, you can’t measure annotation quality directly. Every metric available to you is an indirect downstream effect.

The KPIs I suggest below are signals that clearly show where your content cleared indexing and failed annotation: the engine found the page, rendered it, indexed it, and then drew the wrong conclusions from it.

That distinction matters: beware of “we need more content” when the real problem is “the engine misread the content we have.”

Your brand SERP tells you exactly what the algorithm understood

These signals reveal how accurately the AI has understood who you are, what you do, and who you serve. The brand SERP (and AI résumé) is a readout of the algorithm’s model of your brand, and because it is updated continuously, it makes a great KPI.

  • Brand SERP shows incorrect entity associations: wrong competitors, wrong category, wrong geography.
  • AI résumé is noncommittal, hedged, or incomplete.
  • AI outputs underestimate your NEEATT credentials.
  • Knowledge panel displays incorrect information.
  • AI describes your brand using a competitor’s framing or category language.
  • Entity type is misclassified (person treated as organization, product treated as service).
  • AI can’t answer basic factual questions about your brand and offers without hedging.

If the algorithm can’t place you in a competitive set, it won’t recommend you

These signals reveal which entities the system considers comparable — a direct readout of how annotation classified them. Annotation places entities into competitive pools, and if your brand doesn’t appear in comparison sets where it belongs, the engine classified it outside that pool. Better content won’t fix that. Improving the algorithm’s ability to accurately, verbosely, and confidently annotate your content will.

  • Absent from “best [product] for [use case]” results where you qualify.
  • Absent from “alternatives to [competitor]” results.
  • Absent from “[brand A] vs. [brand B]” comparisons for your category.
  • Named in comparisons but with incorrect differentiators or misattributed features.
  • Consistently ranked below competitors with weaker real-world authority signals.

For me, that last one is the most telling. Weaker brand, higher placement.

Once again, what you’re saying isn’t the problem; how you’re saying it and how you “package” it for the bots and algorithms is.

If the algorithm can’t surface you unprompted, you’re invisible at the moment of intent

These signals reveal whether the AI can place your brand at the point of discovery, before the user knows you exist. Clearing indexing means the engine has the content. Failing here means annotation didn’t connect that content to the broad topic signals that drive assistive recommendations. 

The difference between a brand that appears in “how do I solve [problem]” answers and one that doesn’t is whether annotation connected the content to the intent.

  • Absent from “how do I solve [problem your product solves]” answers, even as a passing mention.
  • Not surfaced when the AI explains a concept you coined or own.
  • Absent from AI-generated roundups, guides, and “where to start” responses for your core topic.
  • Named as a generic example rather than a recommended solution.
  • The AI discusses your subject area at length and doesn’t name you as a practitioner or source.
  • Entity present in the knowledge graph but invisible in discovery queries on AI platforms.

The three taxes you’re paying with suboptimal annotation

Three revenue consequences follow from annotation failure, one at each layer of the funnel. 

  • The doubt tax is what you pay at BoFu when a buyer reaches your brand in the engine and the AI presents a confused, incomplete, or misframed version of what you offer. 
  • The ghost tax is what you pay at MoFu when you belong in the consideration set and the algorithm doesn’t prominently include you. 
  • The invisibility tax is what you pay at ToFu when the audience doesn’t know to look for you and the algorithm doesn’t introduce you. 

Each tax is a direct read of how well annotation worked — or didn’t.

As an SEO/AAO expert, you can diagnose where to reduce these three taxes for your client or company:

  • BoFu failures point to entity-level misunderstanding. 
  • MoFu failures point to competitive cohort misclassification.
  • ToFu failures point to topic-authority disconnection.

Annotation should be your focus. My bet is that for the vast majority of brands, the gate in the pipeline with the biggest payback will be annotation. 99% of the time, my advice to you is going to be “get started on fixing that before you touch anything else.”

For the full classification model in academic depth, see: 

Recruitment: The universal checkpoint where competition becomes explicit

Recruitment is where the system uses your content for the first time. Every piece of content the system has annotated now competes for inclusion in the system’s active knowledge structures, and this is where head-to-head competition begins.

Every entry mode in the pipeline — whether content arrived by crawl, by push, by structured feed, by MCP, or by ambient accumulation — must pass through recruitment. No content reaches a person without being recruited first. We could call recruitment “the universal checkpoint.”

The critical structural fact: it recruits into three distinct graphs, each with different selection criteria, different confidence thresholds, and different refresh cycles. The three-graph model is my reconstruction. 

The underlying principle (multiple knowledge structures with different characteristics) is confirmed by observing behavior across the algorithmic trinity through the data we collect (25 billion datapoints covering Google’s Knowledge Graph, brand search results, and LLM outputs).

The entity graph stores structured facts with low fuzz — who is this entity, what are its attributes, how does it relate to other entities, binary edges — and knowledge graph presence is entity graph recruitment, with entity salience, structural clarity, source authority, and factual consistency as the selection criteria.

The document graph handles content with medium fuzz — passages and pages and chunks the system has annotated and assessed as worth retaining — where search engine ranking is the visible output, and relevance to anticipated queries, content quality signals, freshness, and diversity requirements drive selection.

The concept graph operates at a different level entirely, storing inferred relationships with high fuzz — topical associations, expertise patterns, semantic connections that emerge from cross-referencing multiple sources — with LLM training data selection as the mechanism and corroboration patterns as the primary selection criterion.

The same content may be recruited by one, two, or all three graphs. Each graph has its own speed of ingestion and its own speed of output. I call these the three speeds, a pattern I formulated explicitly this year but have been observing empirically across 10 years of brand SERP experiments: 

  • Search results are daily to weekly.
  • Knowledge graph updates are monthly. 
  • LLM updates are currently several months (when they choose to manually refresh the training data).

Grounding: Where the system checks its own work in real time

Recruitment stored your content in the system’s three knowledge structures. Grounding is where the system checks whether it should trust your content, right now, for this specific query.

Search engines retrieve from their own index. Knowledge graphs serve stored structured facts. Neither needs grounding. Only LLMs have the (huge) gap between stale training data and fresh reality that makes grounding necessary. 

The need for grounding will gradually disappear as the three technologies of the algorithmic trinity converge and work together natively in real time.

In an assistive engine, the LLM is the lead actor. When the user asks a question or seeks a solution to a problem, the LLM assesses its confidence in its own answer.

If confidence is sufficient, it responds from embedded knowledge. If confidence is low, it sends cascading queries to the search index, retrieves results, dispatches bots to scrape selected pages, and synthesizes an answer from the fresh evidence (Perplexity is the easiest example to see this in action — an LLM that summarizes search results).
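In sketch form, that simple lifecycle looks something like the following; every function is a hypothetical stub standing in for the engine’s internals:

```python
# Confidence-gated grounding, schematically. All functions are stubs.
CONFIDENCE_THRESHOLD = 0.75

def llm_generate(query: str) -> tuple[str, float]:
    # A real engine returns a draft answer plus self-assessed confidence.
    return "draft answer from embedded knowledge", 0.4

def search_index(query: str) -> list[str]:
    return ["https://example.com/fresh-source"]  # stub retrieval

def scrape(url: str) -> str:
    return "page text"  # stub bot fetch

def llm_generate_grounded(query: str, evidence: list[str]) -> str:
    return "answer synthesized from fresh evidence"

def answer(query: str) -> str:
    draft, confidence = llm_generate(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return draft  # high confidence: respond from training data alone
    pages = [scrape(url) for url in search_index(query)]  # low confidence: ground it
    return llm_generate_grounded(query, pages)

print(answer("best CRM for a 10-person sales team?"))
```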

But that’s too simplistic. The three grounding sources model that follows is my reconstruction of how this lifecycle operates across the algorithmic trinity.

The search engine grounding the industry currently focuses on is this: the LLM queries the web index, retrieves documents, and extracts the answer. That’s high fuzz.

Now add this: Knowledge graph allows a simple, quick, and cheap lookup: low fuzz, binary edges, no interpretation required, and our data shows that Google does this already for entity-level queries.

My bet is that specialist SLM grounding is emerging as a third source. We know that once enough consistent data about a topic crosses a cost threshold, the system builds a small language model specialized for that niche, and that model becomes a domain-expert verifier. It would be foolish not to use that as a third grounding base.

The competitive implication is huge. A brand with entity graph presence gives the system a low-fuzz grounding path. A brand without it forces the system onto the high-fuzz path (document retrieval), which means more interpretation, more ambiguity, and lower confidence in the result. The competitor with structured entity data gets verified faster and more accurately.

In short, focus on entity optimization because knowledge graphs are the cheapest, fastest, and most reliable grounding for all the engines.

Display: Where machine confidence meets the person

Your content has been annotated, recruited into its knowledge structures, and verified through grounding. Display is where the AI assistive engine decides what to show the person (and, looking to a future that is already arriving, where the AI assistive agent decides what to act upon).

Display is three simultaneous decisions: format (how to present), placement (where in the response), and prominence (how much emphasis). A brand can be annotated, recruited, and grounded with high confidence and still lose at display because the system chose a different format, placed the competitor more prominently, or decided the query deserved a different type of answer entirely.

This is essentially the same thing as Bing’s Whole Page Algorithm. Gary Illyes jokingly called Google’s whole page algorithm “the magic mixer.” Nathan Chalmers, PM for the whole page algorithm at Bing, explained how that works on my podcast in 2020. Don’t make the mistake of thinking this is out of date — it isn’t. The principles are even more relevant than ever.

UCD activates at display

You may have heard or read me talking obsessively about understandability, credibility, and deliverability. UCD is absolutely fundamental because it is the internal structure of display: the vertical dimension that makes this gate three-dimensional.

The same content, grounded with the same confidence, presents differently depending on who is asking and why.

A person arriving with high trust — they searched your brand name, they already know you — experiences display at the understandability layer, where the engine acts as a trusted partner confirming what they already believe, which is BoFu.

A person evaluating options — they asked “best [category] for [use case]” — experiences display at the credibility layer, where the engine presents evidence for and against as a recommender, which is MoFu.

A person encountering your brand for the first time — a broad topical question in which your name appears — experiences it at the deliverability layer, where the system introduces you, which is ToFu.

The user interaction reveals the funnel position. The funnel position determines which UCD layer fires.

This is why optimizing only for “ranking” misses reality: Display is a context-sensitive presentation, not a list, and the same piece of content can introduce, validate, or confirm depending on who asked.

The framing gap at display

The system presents what it understood, verified, and deemed relevant. The gap between that and your intended positioning is the framing gap, and it operates differently at each funnel stage.

  • At ToFu, the gap is cognitive: the system may know you exist, but doesn’t associate you with the right topics. 
  • At MoFu, the gap is imaginative: the system needs a frame to differentiate your proof from the competitor’s, and most brands supply claims without frames. 
  • At BoFu, the gap is about relevance: the system cross-references your claims against structured evidence, and either confirms or hedges.

After annotation, framing is the single most important part of the SEO/AAO puzzle, so I’ll talk a lot about both in the coming articles.

Won: The zero-sum moment where one brand wins and every competitor loses

Everything I’ve explained so far in this series collapses into a zero-sum point at the “won” gate. Here, the outcome is binary. The person (or agent) acts, or they don’t. One brand converts, and every competitor loses. 

The system may have mentioned others at display, but at the moment of commitment, there can only be one winner for the transaction.

Three won resolutions in the competitive context

Won always resolves through three distinct mechanisms, each with different competitive dynamics.

Resolution 1: Imperfect click

  • The AI influences the person’s thinking at grounding and display, but the person decides independently: they choose one of several options offered by the engine, they walk into the store, or they book by phone. 
  • This is what Google called the “zero moment of truth”: the competitive battle happens at display, where the engine has influenced the human, but the active choice is still very much the person’s own.

Resolution 2: Perfect click

  • The AI recommends one brand and the person takes it. This is the natural next step, what I call the zero-sum moment. 
  • This fires inside the AI interface, where the engine filtered for intent, context, and readiness, presented one answer, and the person converted.

Resolution 3: Agential click

  • The AI agent acts autonomously on the person’s behalf. No person sits at the decision point; it’s an API settlement between the buyer’s agent and the brand’s action endpoint. 
  • The competitive battle happened entirely within the engine: whichever brand had the highest accumulated confidence, the strongest grounding evidence, and a functional transaction endpoint is the winner. The person doesn’t choose. The system chooses for them.

The trajectory runs from oldest to newest: Resolution 1 was dominant up to late 2025, Resolution 2 is taking over, and Resolution 3 gained a lot of traction in early 2026. Stripe and Cloudflare are laying the transaction and identity rails. Visa and Mastercard are building the financial authorization infrastructure. 

Anthropic’s MCP is providing the coordination layer. Google’s UCP and A2A are defining how agents communicate across the full consumer commerce journey. Apple has the closed-loop infrastructure to make it seamless on a billion devices the moment they choose to. 

Microsoft is locking in the enterprise and government layer through Copilot in a way that will be extremely difficult to displace. No single company turns Resolution 3 on — but all of them together make it inevitable.

Competitive escalation across the five ARGDW gates

The competitive intensity increases at every gate — a progressive narrowing, a Darwinian funnel where the field shrinks at each stage. The narrowing pattern is my model based on observed outcomes across our database. The underlying principle (competitive selection intensifies downstream) is structural to any sequential gating system.

Competitive narrowing
  • The field is large at annotation, where the algorithms create scorecards and your classification versus competitors’ determines downstream positioning.
  • Recruitment sets the qualifying round: multiple brands enter the system’s knowledge structures, but not all, and the selection criteria already favor multi-graph presence.
  • Grounding narrows the shortlist as confidence requirements tighten — the system verifies the candidates worth checking, not everyone.
  • Display reduces to finalists, often one primary recommendation with supporting alternatives.
  • Won is the binary outcome: the zero-sum moment you’re either welcoming with open arms or fearful of.

ARGDW: Relative tests. The scoreboard is on.

Five gates. Five relative tests. Competitive failures in ARGDW are significantly harder to diagnose than infrastructure failures in DSCRI because the fix is competitive positioning rather than technical.

  • Annotation failures mean the system misclassified what your content is or who it belongs to — write for entity clarity, structure claims with explicit evidence, and use schema markup to declare rather than expect the system to guess (see the sketch after this list).
  • Recruitment failures increasingly mean you’re present in one graph while competitors are in two or three — build entity graph presence (structured data, knowledge panel, entity home), document graph presence (content quality, topical coverage), and concept graph presence (consistent publishing across authoritative platforms) as a coordinated program.
  • Grounding failures mean the system is verifying you on the high-fuzz path — provide structured entity data for low-fuzz verification, and MCP endpoints if you need real-time grounding without the search step.
  • Display failures mean the framing gap is costing you at the three layers of the visible gate — assuming you’ve fixed the upstream issues, closing that framing gap at every UCD layer is your pathway to visibility in AI engines.
  • Won failures mean the resolution mechanism doesn’t exist — Resolution 1 requires that you rank (good enough up to 2024), Resolution 2 requires that you dominate your market (good enough in 2026), and Resolution 3 requires a mandate framework and an action endpoint (needed for 2027 onward).
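
To make the annotation fix concrete, here is a minimal sketch of entity-declaring schema markup generated in TypeScript. The @type and property names come from schema.org; the brand details are placeholders you would replace with your own entity data.

```typescript
// Minimal JSON-LD Organization markup: declare the entity rather than
// expect the system to guess. All brand values below are placeholders.
const organizationSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Brand", // the same entity name used everywhere else
  url: "https://www.example.com", // the entity home
  description:
    "What the brand does, stated in the same terms used across the web.",
  sameAs: [
    // corroborating profiles that support multi-graph presence
    "https://www.linkedin.com/company/example-brand",
    "https://en.wikipedia.org/wiki/Example_Brand",
  ],
};

// The <script> tag a page template would embed in its <head>.
const jsonLdTag = `<script type="application/ld+json">${JSON.stringify(
  organizationSchema
)}</script>`;

console.log(jsonLdTag);
```

The same declared-not-guessed principle carries into the grounding bullet above: structured entity data like this is what makes low-fuzz verification possible.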

After establishing the 10-gate AI engine pipeline, what’s next?

The aim of this series of articles is to give you the playbook for the DSCRI infrastructure phase and the strategy for the ARGDW competitive phase. The 10-gate AI engine pipeline breaks optimization for assistive engines and agents into manageable chunks.

Each gate is manageable on its own, and the relative importance of each gate is now clear to you (I hope). In the remainder of this series, I’ll provide solutions to the major issues at each gate so you can manage them individually (and as part of the collective whole).

Aside: The feedback I have had from Microsoft on this series so far (thank you, Navah Hopkins) reminded me of something Chalmers said to me about Darwinism in Search back in 2020.

My explanations are often more absolute and mechanical than the reality. That’s a very fair point. But then reality is unmanageably nuanced, and nuance leads to a lack of clarity and often paralyzes people to the extent that they struggle to identify actionable next steps. I want to be useful.

I suggest we take this evolution from SEO to AAO step by step. Over the last 10+ years, I’ve always done my very best to avoid saying “it depends.”

People often say it takes 10,000 hours to become an expert. The framework presented here comes from tens of thousands of hours analyzing data, experimenting, working with the engineers who build these systems, and developing algorithms, infrastructure, and KPIs.

The aim is simple: reduce the number of frustrating “it depends” answers and provide a clear outline for identifying actionable next steps.

This is the fifth piece in my AI authority series. 

Why social search visibility is the next evolution of discoverability

While everyone focuses on AI search, the real opportunity may be social search

Search strategy once meant ranking on Google. We optimized websites and invested heavily in organic visibility. Entire marketing strategies were built around capturing demand from Google search results.

But search behavior doesn’t live on a single platform. Today, people search on TikTok for recommendations, YouTube for tutorials, Reddit for honest opinions, and Amazon for product validation.

Search behavior now spans a much wider set of platforms, creating one of the most overlooked opportunities in digital marketing.

Search behavior is diversifying

Recent research from SparkToro and Datos analyzed search behavior across 41 major platforms, including traditional search engines, ecommerce platforms, social networks, AI tools, and reference sites.

The findings reinforce something many marketers are beginning to notice. Search is no longer confined to traditional search engines.

While Google still dominates search activity, a growing share of discovery now happens across a wider collection of platforms — a search universe, if you will.

The research suggests search activity is roughly distributed as follows:

  • Traditional search engines: ~80% of searches, with Google alone at ~73.7%
  • Commerce platforms (Amazon, Walmart, eBay): ~10%
  • Social networks: ~5.5%
  • AI tools (ChatGPT, Claude, etc.): ~3.2%

Consumers search directly on platforms where they expect to find the most useful answers, in the formats they prefer, rather than relying on Google to send them elsewhere.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

The industry is focused on AI and missing the bigger mainstream shift

Much of the search industry conversation today is focused on AI. Questions like:

  • How do I rank in ChatGPT?
  • How do I optimize for AI search?
  • Will AI replace Google?

They’re constantly being posed, debated, and answered by SEO professionals on platforms like Search Engine Land.

I want to be clear: these are important questions. But the data within this study tells a more grounded story, especially when thinking about strategy over the next 12 months.

AI search tools currently account for roughly 3.2% of search activity, per SparkToro research. That’s meaningful. It will almost certainly reshape how people search and discover information in the future.

But today, AI search is still smaller than many established discovery platforms with far broader adoption. For example:

  • Amazon receives more searches than ChatGPT.
  • YouTube receives more searches than ChatGPT.
  • Even Bing receives more search activity.

Yet many brands are pouring disproportionate attention into AI visibility while overlooking platforms where millions of searches are already happening every day.

Social platforms are now search engines

For many users, social platforms are now core search destinations. People look to:

  • TikTok for recommendations, restaurants, travel ideas, and products.
  • YouTube for tutorials, reviews, and problem-solving.
  • Reddit for honest discussions and community opinions.
  • Pinterest for inspiration and visual discovery.

Each platform plays a different role in the discovery journey.

Platform         | What people search for
TikTok/Instagram | Discovery and recommendations
YouTube          | Learning, tutorials, and reviews
Reddit           | Real opinions and community discussions
Pinterest        | Inspiration and planning

These platforms are more than entertainment destinations. Users head to them with real intent to solve a problem, meet a need, or satisfy a desire.

Social content is now appearing directly in Google results

As users adopt social platforms for search, Google has begun aggregating and organizing information right within its SERPs. So yes, social and creator content appears directly inside Google search results.

Over the past year, Google has significantly expanded how it surfaces social content within SERPs. Search results now frequently include TikTok videos, YouTube Shorts, Reddit threads, Instagram posts, and forum discussions.

Google even partnered with platforms like Reddit to ensure that community discussions appear more prominently in search results. This means social content can now influence discovery in multiple ways:

  • Direct searches on social platforms.
  • Visibility within Google search results.
  • Influence within AI-generated answers.

Dig deeper: Social and UGC: The trust engines powering search everywhere

Social content is also powering AI search

Social platforms are also important sources for AI-generated answers. AI systems rely on content that reflects real-world experiences, discussions, and opinions.

That’s why platforms such as Reddit, YouTube, Quora, forums, and creator-led content (e.g., Instagram, TikTok, and YouTube Shorts) are frequently cited in AI-generated responses.

Google’s AI Overviews often reference Reddit threads and YouTube videos.

Other AI tools regularly draw insights from community discussions, reviews, and creator content when generating answers.

This means content created for social discovery can influence visibility across multiple layers of search, including social platforms, Google search results, and AI-generated responses.

A single piece of content can now travel much further across this search universe, consistently putting signals in front of audiences and building preference for one brand over another.

The compounding discoverability effect

When brands invest in social search visibility, they unlock a powerful compounding effect. For example, a useful YouTube tutorial could:

  • Rank in YouTube search.
  • Appear in Google search results.
  • Be referenced in AI-generated answers.
  • Be shared across social platforms.
  • Spread through private messaging and dark social channels.

Unlike traditional website content, social content can move across platforms, dramatically expanding its reach. This creates an entirely new layer of discoverability.

And at a time when marketing budgets are under increasing scrutiny, the ability for content to generate visibility across multiple platforms makes the ROI of content strategies far more compelling.

Dig deeper: The social-to-search halo effect: Why social content drives branded search

Most brands still follow the old search playbook

Despite these shifts, most search strategies still revolve around Google SEO, paid search, website content, and AI/LLM interfaces.

Few brands have structured strategies for TikTok search optimization, YouTube search visibility, Reddit community engagement, or creator-led discovery.

While Google SEO is incredibly competitive, social search remains relatively under-optimized. Brands that move early can capture visibility (presence) in spaces where demand already exists, thereby developing preference for their brand.

When brands invest in social search visibility, they aren’t just unlocking the 5.5% of searches happening directly on social platforms. They’re also influencing traditional search results, AI-generated answers, and wider discovery across the web.

Search everywhere: A new model for discoverability

Search is more than a channel. It’s a behavior that happens across a constantly evolving search universe.

Your audience searches wherever they believe they’ll find the best answer in the most useful format — whether that’s Google, TikTok, YouTube, Reddit, Amazon, Pinterest, or increasingly, AI interfaces.

Winning search today means being discoverable wherever those searches happen. The brands that win won’t be the ones that rank in just one place, even as traditional SEO remains an important part of the mix. They’ll be the ones that are discoverable wherever their audience searches.

That is the future of search. That is “search everywhere.”

Dig deeper: ‘Search everywhere’ doesn’t mean ‘be everywhere’
