Google’s Preferred Sources feature now supports all languages, not just English. “Preferred Sources is now rolling out globally in all supported languages,” Google wrote on its blog this morning.
“This feature gives you more control over the news you see on Search by letting you choose the outlets and sites you want to appear more often in Top Stories,” Google added.
In December, Google rolled out Preferred Sources globally, but it only supported English. Now it supports all languages globally as well.
Stats. Google shared some interesting data, including:
“Readers are twice as likely to click through to a site after marking it as a Preferred Source”
“People have already selected over 200,000 unique sites — from niche local blogs to global news desks”
Preferred Sources. Preferred Sources let searchers star publications in the Top Stories section of Google Search, and Google uses that signal to show more stories from those starred outlets. The feature entered beta in June, rolled out in the U.S. and India in August, and is now expanding globally.
How it works. You click the star icon to the right of the Top Stories header in search results. After that, you can choose your preferred sources – assuming the site is publishing fresh content.
Google will then start to show you more of the latest updates from your selected sites in Top Stories “when they have new articles or posts that are relevant to your search,” Google added.
Why we care. Traffic from Google Search is hard to earn, and if you can get your loyal readers to make your site a preferred source, that can help. Google said those users are twice as likely to click, which can drive more traffic.
So add the preferred source icon to your site and encourage users to sign up. You can make Search Engine Land a preferred source by clicking here.
The difference between a 2% margin and a 20% margin increasingly comes down to whether you’re renting attention or owning the answer.
For years, search rewarded the ability to buy visibility. That model is weakening.
As AI systems increasingly resolve queries without a click, the value shifts from traffic acquisition to answer formation.
When you move from buying clicks to engineering answers (i.e., structuring content so it can be surfaced, cited, and trusted by AI systems), you change what you own. Instead of renting placement, you build answer equity: durable inclusion in the outputs that shape decisions.
The goal isn’t to turn off paid search. It’s to stop relying on it as your primary source of demand. Over time, this can lower acquisition costs and reduce volatility, because you’re not competing for every impression.
An atomic sandwich
To operationalize this shift, you need a content structure that maximizes what AI systems can extract. Think of it as an “atomic sandwich.”
An atomic sandwich content structure shifts the focus from chasing traffic to maximizing intent density. Here’s how:
The atomic fact (top bun)
Most organizations treat their search budget like a high-interest payday loan.
You keep pouring cash into the paid bucket for that immediate hit of traffic, and it feels like you’re winning.
But the moment you stop feeding the meter, your brand disappears.
The forensic proof (the meat)
For many organizations, this isn’t just marketing inefficiency — it’s an organizational risk.
In the emerging Answer Economy, your rented audience is evaporating. Data from Seer Interactive (Sept 2025) shows paid CTR on informational queries has dropped 68% when Google’s AI Overviews are present.
You’re not just paying for clicks. In many cases, your paid traffic contributes to awareness that AI systems can later satisfy without requiring a click.
The structural directive (bottom bun)
The “box” has changed.
Here’s the structural leak in your balance sheet: to survive 2026, you must stop buying a crowd and start engineering the answer.
If your brand isn’t among the trusted sources behind the machine’s answer, your visibility — and influence — shrinks significantly.
The new “box”: From librarian to forensic auditor
We’ve moved from a search engine that directs users to a generative engine that validates information. Every dollar you spend on ads to cover a lack of E-E-A-T is money you’re burning.
The data is clear: appearing in search results is no longer a viable model on its own.
The organic collapse: A SISTRIX (March 2026) analysis found that when an AI Overview is present, position 1 CTR drops from 27% to 11% — a 59% decline.
The global impact: Ahrefs (Dec 2025) found AI Overviews correlate with a 58% lower average CTR for the top-ranking page.
The goal is no longer just to rank in search, but to be consistently included among the sources AI systems rely on.
Without trust, you’re paying for ghost impressions.
In the old box, you could survive by being loud. In the new box, you survive by being certain.
The search addiction cycle (why your org can’t quit)
Most companies are in organizational denial.
You see the cost of rented clicks rising and quality falling, but you’re too afraid to stop because you’ve neglected your information architecture and have no foundation. That’s a balance sheet liability.
Stage 1 — the vanity hit: Early paid search wins made you feel like a genius. You mistook traffic volume for business health.
Stage 2 — tolerance building: As the Answer Economy evolved, keywords got more expensive. Instead of fixing structural integrity, you upped the dose.
Stage 3 — the context-debt overdose: You’re paying for zombie facts — content an AI can summarize in seconds. Zero-click searches have surged to 69%. Your expensive awareness is consumed for free by AI.
Stage 4 — total dependency: Your marketing manager becomes a budget operator rather than a builder of durable demand. They aren’t building answer equity; they’re managing cash transfer to Google.
The forensic intervention: The 7-point organizational health check
Use this checklist in your next review to find where your Answer Equity is leaking.
The Information Gain test: Ask Gemini to summarize your page. If the summary adds nothing beyond what common results already say, your page fails the standard described in Google’s information gain patent. You have a zombie fact with zero value.
The entity audit: Does your brand have a verified Google Knowledge Graph ID? Without it, you’re not an asset — you’re just text.
Source of ground truth: Are you cited in AI Overviews? BrightEdge (Sept 2025) shows that without a citation, your visibility is effectively zero.
The faucet test: If you cut PPC spend by 20%, does lead volume drop 20%? If so, you have no foundation — you’re renting revenue.
Schema and provenance: Are you using Schema.org/Person to link experts to your brand? Unverified content is untrusted noise to a retriever.
The “meat” ratio: Review your top 10 posts. Do they include primary research? If not, they’re fodder for the AI’s top bun with no reason to click.
Machine-readable graph adoption: Is your team moving toward W3C RDF-star (RDF 1.2) or ISO/IEC GQL standards? These are the 2026 blueprints for verifying Answer Equity.
The recovery plan: From rented clicks to owned authority
1. Purge the zombie facts (the information gain protocol)
Stop rewarding word count. Every piece of content must deliver a “meat” layer — information gain a retriever can’t synthesize from the rest of the web. That’s how you reclaim your margins.
2. Build your “E-E-A-T engine” (the trust infrastructure)
Stop treating schema as a technical extra. It’s your trust score on the digital exchange. Ensure your authors have strong provenance so AI retrievers can instantly crawl and confirm your expertise.
3. Measure “intent density” (the scoreboard shift)
If your traffic drops but lead quality holds, you’re winning. Focus on users who bypass the summary because they need the deep, forensic expertise only you provide.
The shift from renting an audience to owning the answer is the most significant strategic pivot your organization will make this decade. It moves you from a marketing expense to a balance sheet asset.
The paid trap offers a temporary high but leads to a fiscal dead end. Every dollar spent there is consumable — used once and gone when the auction ends.
When you move that capital into your information infrastructure, you stop paying for the privilege of being ignored. You start building a digital entity that owns its facts, earns trust, and controls its future in the Answer Economy.
Your first step: don’t boil the ocean.
Take your top-performing paid landing page and run the seven-point health check. If it’s a “zombie fact” environment, engineer information gain back into the page.
Stop asking for a ranking report; start asking for an entity audit.
The 2026 organization isn’t defined by how much it spends to rent an audience, but by how much it proves it owns the answer.
You have the blueprints. You have the data. Now stop funding the payday loan and start building answer equity.
Across 90 prompts we tested in ChatGPT, commercial prompts triggered web searches 78.3% of the time. Informational prompts did so just 3.1%.
That gap changes what you should write if you want to appear in a ChatGPT answer.
ChatGPT doesn’t pull every response from the same place. Some answers come from training data; others use live web search — a behavior called query fan-out. The model expands your prompt into multiple background searches, then retrieves and synthesizes across those subtopics. If your page isn’t on those branches, it won’t be pulled in.
So the question is no longer just how to rank. It’s which pages open the fan-out door in the first place.
In our sample, informational pages didn’t. Read on to discover where the system went instead.
We tested 90 prompts across three industries: beauty, legaltech/regtech, and IT. We analyzed prompt intent, downstream query expansion, and the intent those expansions reflected.
Here’s the breakdown and the core finding: the downstream queries aligned overwhelmingly with commercial intent, even though most of the prompts themselves were informational.
Why this question matters now and how query fan-outs come into play
Query fan-outs change the content game because the system isn’t limited to the literal prompt.
It expands the request into multiple background searches, then retrieves and synthesizes across those subtopics.
Fan-outs trigger parallel web searches tied to the initial prompt, creating opportunities for retrieval, mention, and link citation.
Multi-query expansion is a core design pattern in modern generative search systems. Google describes AI Mode this way: it breaks a question into subtopics, searches them in parallel across multiple sources, then combines the results into a single response.
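To make the pattern concrete, here is a minimal Python sketch of query fan-out. The `expand_prompt` and `web_search` helpers are hypothetical stand-ins for the model’s internal expansion and retrieval calls, not any real API:

```python
# Minimal sketch of the query fan-out pattern described above.
# expand_prompt() and web_search() are hypothetical stand-ins for the
# model's internal expansion and retrieval calls, not a real API.
from concurrent.futures import ThreadPoolExecutor

def expand_prompt(prompt: str) -> list[str]:
    """Hypothetical: rewrite one prompt into evaluative subqueries."""
    return [
        f"{prompt} feature comparison",
        f"{prompt} pricing",
        f"top-rated {prompt} reviews",
    ]

def web_search(query: str) -> list[dict]:
    """Hypothetical: run one background web search, return passages."""
    return [{"query": query, "url": "...", "passage": "..."}]

def fan_out(prompt: str) -> list[dict]:
    """Expand the prompt, search the branches in parallel, merge results."""
    subqueries = expand_prompt(prompt)
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(web_search, subqueries)
    # The model then synthesizes one answer across all retrieved passages.
    return [hit for results in result_sets for hit in results]
```

If your page isn’t retrievable for one of those expanded branches, it never enters the synthesis step, no matter how well it ranks for the literal prompt.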
That raises a strategic SEO question: should you invest more in top-of-funnel educational content, or in lower-funnel comparison, shortlist, and recommendation content?
This experiment framed that problem.
The objective was to test, across selected industries, where fan-out appears by intent category: informational, commercial, transactional, or branded.
The initial hypothesis was direct: informational prompts wouldn’t trigger fan-out, while commercial prompts would, and those fan-outs would stay at the same funnel level or move lower.
We found that ChatGPT-generated fan-outs are overwhelmingly associated with commercial intent.
Disclaimer: This experiment measures observed prompt expansion behavior in ChatGPT. Google AI Mode is cited only as context to show multi-query expansion as a broader pattern in generative search, not as proof of ChatGPT’s internal architecture.
The setup: what we tested
The core sample includes 90 numbered prompts, heavily weighted toward informational intent.
| Prompt intent | Prompts | Share of sample | Prompts with fan-out | Fan-out rate |
| --- | --- | --- | --- | --- |
| Informational | 65 | 72.2% | 2 | 3.1% |
| Commercial | 23 | 25.6% | 18 | 78.3% |
| Branded | 1 | 1.1% | 0 | 0.0% |
| Transactional | 1 | 1.1% | 0 | 0.0% |
The sample skews heavily toward informational prompts, with some commercial ones and minimal branded and transactional queries.
We structured the experiment around the sectors in the brief: beauty/personal care, legaltech/regtech, and IT/tech.
The result: commercial prompts triggered almost everything
The main finding is clear.
Out of 90 prompts, 20 triggered fan-out. Of those, 18 were commercial and 2 informational.
Informational prompts made up about 10% of fan-out triggers (2 of 20). When they did trigger expansion, they were rewritten into more evaluative, solution-seeking subqueries.
In other words, 90% of fan-out-triggering prompts in the core sample came from commercial intent.
The contrast is stronger than the raw totals suggest. Commercial prompts triggered fan-out 78.3% of the time; informational prompts did so just 3.1%.
This supports the working hypothesis: in this sample, fan-out was overwhelmingly a commercial phenomenon.
Those 20 prompts produced 42 fan-out queries — an average of 2.1 per triggered prompt.
Of those 42 fan-out queries:
39 were commercial.
2 were branded.
1 was informational.
Even when a prompt triggered expansion, the system usually shifted toward comparison, product evaluation, feature filtering, shortlist creation, or brand-specific exploration — not broad educational discovery.
Methodology: how we performed the analysis
The experiment used 90 prompts across three industries, mostly informational, with a smaller set of commercial prompts and minimal branded and transactional queries.
In the analysis, we have:
Selected a representative battery of prompts.
Identified the fan-outs.
Classified each fan-out by intent.
Observed distribution by prompt metadata.
The analysis then followed three steps:
Each prompt was classified according to prompt-intent labels.
We counted the prompts triggering fan-out (at least one).
We inspected the observed expansion queries and their assigned fan-out intent labels.
That produced two distinct but complementary views:
A prompt-level view, asking whether a given prompt triggered fan-out at all.
A fan-out-query view, asking what kind of intent the downstream expansion actually took.
That distinction matters: the first shows which prompts open the fan-out path, while the second shows where the system goes once it opens.
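For anyone replicating the analysis, the two views reduce to two simple aggregations. This pandas sketch assumes a hypothetical record layout; the intent labels mirror the dataset described above:

```python
# Sketch of the two complementary views over hypothetical records
# shaped like the dataset described above.
import pandas as pd

prompts = pd.DataFrame([
    # one row per prompt: its intent label and whether it triggered fan-out
    {"prompt_id": 1, "intent": "commercial", "triggered_fanout": True},
    {"prompt_id": 2, "intent": "informational", "triggered_fanout": False},
    # ... 90 rows in the real sample
])

fanout_queries = pd.DataFrame([
    # one row per observed expansion query, with its assigned intent
    {"prompt_id": 1, "fanout_intent": "commercial"},
    {"prompt_id": 1, "fanout_intent": "branded"},
    # ... 42 rows in the real sample
])

# Prompt-level view: fan-out rate by prompt intent (e.g., 78.3% commercial).
print(prompts.groupby("intent")["triggered_fanout"].mean())

# Fan-out-query view: where the expansions actually went (39/2/1 split).
print(fanout_queries["fanout_intent"].value_counts())
```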
Interpreting the results: fan-out tends to move down-funnel
The cleanest interpretation is that, in this sample, fan-outs behave less like open-ended topic expansion and more like assisted decision support.
Commercial prompts almost always opened the door.
Once they did, fan-outs usually stayed commercial.
The system expanded into comparisons, feature-based filtering, product lists, pricing-adjacent queries, and brand-specific evaluations.
A few examples make that concrete.
“Suggest the best accounting software for small business and explain why” expanded into a commercial comparison query around features.
“What are the top AI document management systems for lawyers?” expanded into multiple product-oriented legaltech queries.
“What are the best products for skin care?” expanded into a shortlist-style query around product categories and reviews.
The two informational exceptions are even more revealing than the rule.
“I need an open-source document management system. What can you suggest?” was labeled informational at prompt level, but the resulting fan-out moved into solution recommendation.
“AI tools for legal research and document automation” also moved into a clearly commercial/evaluative downstream query.
So, even when the prompt starts broad, fan-out often translates that breadth into a lower-funnel retrieval path.
What this means for content strategy
The takeaway isn’t to stop writing informational content.
It’s this: informational content alone is unlikely to align consistently with fan-out expansion, at least in this dataset.
If your goal is visibility in AI answers tied to product selection, vendor discovery, or option narrowing, you need stronger coverage of pages and passages that match those downstream commercial branches.
In practical terms, your content model shouldn’t be just ToFU or BoFU, but ToFU with commercial bridges.
A broad article can still help, but it should include passages the system can easily reformulate into decision-support subqueries.
A purely educational piece that explains a category without naming products, tradeoffs, features, use cases, pricing logic, or selection criteria is much less likely to align with the fan-out paths seen here.
Put simply: Don’t just answer the obvious question — anticipate the next evaluative step the system is likely to generate in the background.
Limitations
This result is directional, not universal.
90 prompts reveal a pattern, but not a stable law of AI retrieval behavior.
The prompt mix is uneven. Informational prompts dominate the sample, while branded and transactional prompts are barely represented. That means those findings aren’t proof of absence.
The dataset spans industries but isn’t normalized by brand, wording style, or use case. Some sectors may be easier to express in product-discovery language.
This is an observational analysis of recorded fan-outs, not a controlled platform-level test. It shows what happened in this prompt set, not how ChatGPT always behaves.
Google’s description of fan-out provides context, but this isn’t a Google AI Mode test. It’s a ChatGPT-focused prompt and fan-out dataset. The takeaway is strategic, not architectural.
What to test next
The next version of this experiment should isolate the question more aggressively and expand the dataset.
A follow-up should map triggered fan-outs back to specific content formats.
The goal isn’t just to confirm that commercial intent wins. It’s to identify which page templates and passage structures best cover the fan-out branches AI systems prefer.
I keep hearing people say AI understands their brand. It doesn’t. Let’s get that out of the way first.
What it does is pattern-match at scale. It compresses your positioning, product, proof, and tone into a bundle of signals it can retrieve and remix at speed.
Those patterns come from two places:
Training: What the model absorbed historically.
Retrieval: What it can fetch at answer time from the live web and other sources.
So “AI SEO” isn’t a new channel. It’s a new representation problem: which version of your brand gets encoded, retrieved, and repeated.
Most brands are already in the game. They’re just not playing with purpose.
The internet is no longer a library
Classic SEO was a library problem. You published a URL. Google indexed it. A human searched and found it.
AI search is a conversation that stretches out the demand curve. Head terms still drive the majority of visibility, but, ever so slowly, more volume is moving into context-heavy prompts.
“With these constraints”
“Like this competitor but cheaper”
“Which tool fits a team like mine with these requirements?”
“Given what you know about me, recommend…”
Your job is to be the most relevant match inside a model’s memory and retrieval pipeline.
Not by being ranked. But by being represented.
AI doesn’t run on opinions. It runs on associations.
From keywords to entities to embeddings
Classic SEO competed for keywords. Then it shifted to entities. AI systems go one layer deeper. They turn entities into vectors.
Your brand becomes a coordinate in a high-dimensional space. Close to some concepts. Distant from others. Pulled by whatever your content and mentions repeatedly associate you with.
If your brand is consistently associated with “enterprise analytics”, “real-time dashboards” and “data governance”, your vector lives near those clusters.
If your messaging sprawls into adjacent territory because someone got bored of writing about the same things, the vector spreads. Precision drops. The model still has a position for you. It’s just fuzzier, less confident, and easier to swap for a competitor with cleaner signals.
Three layers of AI brand visibility
Before you “fix AI SEO,” identify which layer your brand is failing on. The same tactics don’t work everywhere.
Training layer
Your historical footprint. Press, blogs, documentation, reviews, every old thread on a forum you forgot existed.
You can’t fully control it.
But you can reduce fragmentation by finding and editing all possible past mentions (social profiles, directory listings, wikis, etc.) to create a consistent identity across the internet.
Understand the training layer by asking an AI chatbot to describe your brand with web search turned off.
Retrieval layer
Your live surface area. Indexed pages, product feeds, APIs. This is where the traditional technical SEO work of crawling, indexing, and rendering matters most. It defines what the AI system can access for citations.
Understand the retrieval layer by running branded and market-category prompts daily in an LLM tracker and reviewing which sources are consistently cited.
Generation layer
That is the output seen in AI Overviews, AI Mode, ChatGPT, or wherever else your brand gets reassembled in front of an actual customer. Your brand will be written into the answer only if the model finds it necessary.
So ask yourself, what unique, quotable, additive content forces the LLM to mention you?
Understand the generation layer by using the same LLM tracker data, but reviewing brand mentions within responses and their semantic associations.
Four mechanics that decide what AI says
Think of these as the forces quietly shaping your representation across the layers.
1. Consolidation (identity resolution)
AI systems merge different references to the same brand if it’s obvious they belong together.
Most brands don’t have one clear identity. They often have:
A brand name (spaced or cased inconsistently).
A legal name.
A domain name.
An abbreviation.
A legacy name.
Humans merge that automatically. Models don’t. They consolidate by pattern, not intent. Every inconsistent self-reference is a vote for fragmentation.
Allow your brand to be written five different ways, and you split your visibility signals five ways.
2. Co-occurrence (association formation)
Models learn what appears together:
Brand + category
Brand + use case
Brand + audience
Brand + competitor
Repeat the right pairings, and the association strengthens. Be inconsistent, and it weakens. It’s genuinely that simple.
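A rough way to audit your own pairings is to count how often the brand name appears near each target phrase across your content and mentions. This is a toy heuristic, not how models actually learn associations; the brand name and pairing terms below are illustrative:

```python
# Toy co-occurrence check: how often does the brand appear near each
# target phrase? BRAND and PAIRINGS are hypothetical examples.
import re

BRAND = "acme analytics"
PAIRINGS = ["enterprise analytics", "real-time dashboards", "data governance"]

def cooccurrence_counts(documents: list[str], window: int = 200) -> dict[str, int]:
    """Count brand mentions with a pairing term within `window` characters."""
    counts = {term: 0 for term in PAIRINGS}
    for doc in documents:
        text = doc.lower()
        for match in re.finditer(re.escape(BRAND), text):
            nearby = text[max(0, match.start() - window): match.end() + window]
            for term in PAIRINGS:
                if term in nearby:
                    counts[term] += 1
    return counts

docs = ["Acme Analytics ships real-time dashboards for data teams."]
print(cooccurrence_counts(docs))
# {'enterprise analytics': 0, 'real-time dashboards': 1, 'data governance': 0}
```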
3. Attribution (who says it, where)
Models track who is being described, by whom, in what context.
Your own site is one layer. Third-party mentions are another. High-trust sources carry more weight.
Not because of “authority” in the classic SEO sense, but because they appear frequently inside reliable contexts in the training data and retrieval corpora. Similar outcome. Different mechanisms.
4. Retrieval weighting (what gets used in AI answers)
When generating answers, AI systems decide which information to use. That decision depends on clarity, relevance, uniqueness, and ease of extraction.
If key facts are buried in narrative copy, implied through metaphor, or scattered across sections, the model will simply pull from somewhere else.
On the other hand, if you repeat them, structure them, and make them explicit, you are more likely to be chosen by the model.
You’re not writing poetry, you’re building a graph
In your content, on-page and off-page, make the core entities unmissable. Your brand. Your products. Your categories. Your audience. Your differentiators.
Craft clear, consistent positioning that the machine can’t misread by creating a canonical brand bio:
[Brand] is a [market category] for [audience] who need [use case], differentiated by [proof].
Then, honestly ask yourself if your answer could also describe your competition. Or better, ask AI that question. If the answer is yes, rewrite it until it’s unmistakably you.
Then roll out that positioning everywhere: on-page in “retrieval-ready” chunks, in structured data and “sameAs” references, and across industry publications, partner sites, user reviews, community discussions, and social posts.
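As one concrete (and hypothetical) example of what that looks like in structured data, here is a minimal Organization JSON-LD payload, built as a Python dict, that locks the canonical name, bio, and `sameAs` references together:

```python
# Minimal Organization JSON-LD sketch consolidating one canonical identity.
# Brand name, URLs, and the bio text are placeholders, not a real brand.
import json

brand_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",          # one canonical spacing and casing
    "url": "https://www.example.com",
    "description": "Acme Analytics is an enterprise analytics platform "
                   "for data teams who need real-time dashboards, "
                   "differentiated by ...",  # the canonical brand bio
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Example",   # placeholder references
        "https://www.crunchbase.com/organization/example",
    ],
}

print(json.dumps(brand_schema, indent=2))
```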
Repeat key associations deliberately across pages until it feels excessive. Reduce unnecessary variation in terminology. Then the associations strengthen. Are reinforced. Compound.
Beware brand drift, where inconsistencies allow misrepresentation and gaps in information let hallucination creep in. Police all the edges. Consolidate or kill the pages that introduce conflicting descriptions of your brand.
This is not about gaming AI. It is about reducing entropy.
If that sounds boring, good. The brands that win the AI era are not going to win it with cleverness. They are going to win it with discipline.
Because if answers are inconsistent across sources, your brand won’t be cleanly encoded. And the version of you that AI systems are quietly passing along to customers won’t be the one you intended.
First 5 steps to AI brand visibility
Write your canonical brand bio: Lock in spacing, casing, and abbreviation rules for the brand name, plus clear positioning.
Implement graph-based schema: Define relationships between your brand (consolidated by sameAs) and other key entities.
Make proof easy to quote: Ensure awards, benchmarks, customer numbers, policies, and all other notable brand information are explicit and extractable.
Fix historical identity fragmentation: Clean up past mentions and enforce canonical positioning everywhere possible.
Repeat key associations with intention: Brand + category, use case, audience, vs. competitor. Do this not only on your own site, but also by building coverage on high-trust third parties.
It’s not about you
If AI systems can’t confidently represent your brand, they will default to a safer option. Usually, it’s a competitor with cleaner signals. Not because that competitor is “better”. Because that competitor is easier for the machine to use.
AI doesn’t need to understand your brand perfectly. It needs to approximate it well enough to recommend you. Your job is to control that approximation through consistency, structure, and distribution.
Not by publishing more. By making your brand impossible to misunderstand.
Google is doubling down on AI-driven ads just as search behavior shifts toward conversational queries, giving advertisers more automation while trying to preserve control.
What’s new.
AI Max expands beyond Search: Now rolling out to Shopping campaigns and travel-specific formats, broadening reach across more advertiser types.
AI Brief (powered by Gemini): A new interface that lets advertisers steer AI using natural language inputs.
Text disclaimers + URL automation: Compliance-friendly updates to pair with automated landing page selection.
Why we care. Google is making AI Max a core layer across Search, Shopping and Travel, meaning automation will increasingly determine how ads are matched to user intent. This update expands reach into more conversational, high-intent queries that traditional keyword strategies miss, helping brands capture demand earlier in the journey.
At the same time, tools like AI Brief and new compliance features give advertisers more control over messaging and targeting, reducing the risk of fully automated campaigns feeling like a “black box.”
Shopping gets smarter. For retailers, AI Max for Shopping uses Merchant Center data to generate more adaptive ads that can respond to long-tail and exploratory queries, helping brands appear earlier in the discovery phase rather than only at the point of purchase. The rollout is positioned as a simple upgrade for existing Shopping campaigns, suggesting Google wants rapid adoption.
Travel gets consolidated. Travel advertisers get a consolidation play. Search Campaigns for Travel bring previously fragmented formats into a single interface with unified reporting and integrated AI Max capabilities. The move reduces operational complexity while reinforcing Google’s push toward centralized, AI-driven campaign management.
More control with AI Brief. The most notable addition is AI Brief, which attempts to solve a long-standing advertiser concern: lack of compliance control in automated systems. Advertisers can define messaging rules, specify which queries to prioritize or avoid, and shape how different audiences are addressed. The system then generates previews, allowing feedback before campaigns go live.
Automation meets compliance. Google is refining how traffic is directed to websites. Final URL expansion uses AI to select the most relevant landing page for each query, and the new text disclaimer feature ensures required legal messaging remains intact even when automation is active. This signals a push to make AI usable in more regulated industries without sacrificing compliance.
AI may not see your brand the way you think it does, according to Scott Stouffer, co-founder and CTO at Market Brew.
Brands still publish content, optimize pages, build authority, and follow SEO best practices. But that may not be enough anymore.
Search has moved away from a simple battle over keywords, links, and page-level signals. It’s now shaped by meaning, intent, embeddings, and retrieval, Stouffer said during his SEO Week presentation.
In legacy SEO, a page could rank lower and still exist in the search results. In AI-driven systems, the first question isn’t whether you rank. It’s whether you’re ever retrieved.
“If you’re not retrieved, you do not exist to AI,” Stouffer said.
Your brand already exists inside AI systems as a mathematical object. You may call yourself one thing. Your homepage may say another. Your brand guidelines may promise a clear position. But AI systems build their own view of your brand from the content you have published.
That computed version of your brand may be different from the one you intended to build.
Retrieval now matters before ranking
AI visibility begins before ranking, Stouffer said.
In traditional SEO, marketers focus on positions — first, third, or tenth. But AI systems apply a filter earlier. Before anything is ranked, the system determines which content is eligible for consideration.
That is retrieval.
When a user asks a question, the system pulls a limited set of passages or chunks that best match the query. Those passages define the answer space.
If your content isn’t included, you get no impressions, no clicks, and no visibility at all, Stouffer said.
The real shift is moving from exclusion to inclusion.
“You don’t lose. You just never entered the game,” Stouffer said.
AI does not see pages the way SEOs do
AI systems don’t treat a webpage as one clean unit, Stouffer said. They don’t evaluate pages as whole objects or prioritize layout, structure, or formatting.
Content is broken apart. A page becomes chunks: passages, sections, and individual ideas.
Each chunk is evaluated independently. A paragraph deep in a guide can compete on its own. A single sentence can be selected if it aligns closely with the query.
This shifts competition from page versus page to passage versus passage.
Most of a page may never be considered. Only the most aligned chunks are evaluated.
Meaning becomes math
Each chunk is converted into a vector, Stouffer explained.
This vector represents meaning as a position in a high-dimensional space. It captures context and intent rather than exact wording.
Two pieces of content can use different words but sit close together if they express the same idea. Others can share keywords, but sit far apart if they represent different meanings.
“It’s comparing meaning, not wording, measuring distance, not keyword overlap,” Stouffer said.
Relevance is determined by proximity. The closer a chunk is to a query in this space, the more likely it is to be retrieved.
Your content forms clusters
As chunks are mapped into this space, they group together.
Content with similar meaning forms clusters, even across different pages. These clusters reflect how AI systems understand topics.
This understanding comes from how content naturally groups by meaning, not by site structure or labels, Stouffer said.
If content is consistent, clusters become dense and clear. If content is scattered, clusters become fragmented.
What matters is not what a brand intends to say, but what its content actually communicates.
The centroid is your brand to AI
Within these clusters, there is a center point — the centroid, Stouffer said.
The centroid represents the average position of all related content. It reflects the site’s core meaning.
Every page and paragraph influences that position. Consistent content creates a clear, stable centroid. Inconsistent content dilutes it.
That centroid is how AI understands your brand.
Not your homepage. Not your messaging. Not your brand guidelines.
Your centroid is the combined signal of everything you have published, Stouffer said.
“Your centroid doesn’t care about intent. It reflects the math of everything you’ve ever published,” Stouffer said.
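The math behind the metaphor is simple: the centroid is the normalized average of your chunk embeddings. A minimal sketch, with a hypothetical `embed()` standing in for any sentence-embedding model:

```python
# Sketch of the centroid idea: embed every published chunk, average the
# vectors, and measure how far a new page drifts from that center.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a sentence-embedding model (fake vectors)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=384)  # 384 dims, typical of small embedding models
    return vec / np.linalg.norm(vec)

chunks = ["passage one ...", "passage two ...", "passage three ..."]
vectors = np.stack([embed(c) for c in chunks])

centroid = vectors.mean(axis=0)          # the site's average meaning
centroid /= np.linalg.norm(centroid)

draft = embed("draft of a new page ...")
drift = 1.0 - float(draft @ centroid)    # cosine distance from the centroid
print(f"cosine distance from centroid: {drift:.3f}")
```

Every new chunk shifts the average; consistent publishing keeps the centroid tight, scattered publishing spreads it.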
Alignment beats isolated optimization
This changes how content should be evaluated.
The key question isn’t whether a page is optimized in isolation. It’s whether it aligns with the rest of the site.
Each page either strengthens the centroid or pulls it in a different direction.
“Optimization without alignment creates drift, and drift is what breaks consistency,” Stouffer said.
As drift increases, the site becomes harder for AI systems to interpret and retrieve.
“You don’t write pages, you project meaning,” Stouffer said.
Retrieval starts with proximity
When a query is entered, the system converts it into a vector, Stouffer said.
It then searches for the closest matches in meaning space.
This includes both individual chunks and the centroids that represent broader content clusters.
If your content is close enough, it enters the candidate set. If it is too far away, it is excluded.
Only after this stage do traditional ranking signals apply.
Content quality, links, and structure matter — but only if the content is first retrieved.
If not, those signals are never evaluated, he said.
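Under the assumption that similarity is plain cosine over embedding vectors, the proximity gate reduces to a threshold check before any ranking happens. A minimal sketch (chunk vectors would come from an embedding model like the hypothetical `embed()` above):

```python
# Sketch of proximity-gated retrieval: a chunk enters the candidate set
# only if its cosine similarity to the query vector clears a threshold.
# Ranking signals apply only to what survives this gate.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, chunks: dict[str, np.ndarray],
             threshold: float = 0.75, k: int = 10) -> list[tuple[float, str]]:
    """Return up to k (similarity, chunk_id) pairs close enough in meaning."""
    scored = [(cosine(query_vec, vec), cid) for cid, vec in chunks.items()]
    candidates = [pair for pair in scored if pair[0] >= threshold]  # the gate
    candidates.sort(reverse=True)  # ordering happens inside the candidate set
    return candidates[:k]
```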
Most brands look too similar to AI
Many brands follow similar strategies, use the same sources, and produce similar content.
As a result, their centroids converge in the same region, Stouffer said.
He described this as cluster collision.
When multiple brands occupy the same space, AI systems don’t select all of them. They choose a few and ignore the rest.
“They’re not failing best practices. They’re colliding with everyone else using them,” Stouffer said.
Distinct meaning is the new advantage
Producing more content or improving existing content isn’t enough. If content remains similar in meaning, it remains in the same space.
“You need a distinct centroid,” Stouffer said.
A clear, separate position in meaning space reduces competition and increases the likelihood of retrieval.
SEO becomes a control loop
This is not a one-time adjustment.
Every piece of content shifts the centroid.
That requires an ongoing process of measurement and adjustment, Stouffer said.
Teams need to monitor alignment continuously and correct drift as it occurs.
Over time, this creates a more stable system where new content reinforces the existing structure.
The visibility problem is really an observability problem
Most teams can’t see how their content exists in this system.
They can’t see clusters, centroids, or distances — or why content is excluded.
So they rely on trial and error, Stouffer said.
They publish, optimize, and wait for results. When nothing changes, they try something else.
Without visibility into the system, they react to outcomes rather than understanding causes.
Is AI seeing the brand you think you’ve built?
Your brand already exists as a mathematical object inside AI systems, Stouffer said.
You do not get to choose that.
You only choose whether to measure and control it or let it drift.
AI does not see your brand the way you describe it. It sees the aggregate meaning of your content.
“If you control your centroid, you control your visibility,” Stouffer said.
For more than two decades (nearly as long as I’ve been in SEO), backlinks have been core to SEO. Google’s PageRank changed search by using backlinks as a proxy for trust.
A link wasn’t just a pathway; it was a vote. The more votes you had and the more authoritative the voters were, the higher you ranked.
But as Google and AI systems matured, entity-based understanding emerged. AI models became better at understanding content, context, and credibility without always needing a hyperlink as a crutch.
Today, visibility isn’t driven solely by links. It’s strengthened by the broader signals your brand has earned: how often it’s mentioned, cited, and trusted across authoritative sources.
Search engines and AI platforms now prioritize these signals.
AI’s role in reducing reliance on links alone
Modern AI systems can evaluate trust and expertise in ways that were impossible a decade ago, assessing authority through signals that were once approximated mainly by backlinks.
AI can:
Identify entities and map their relationships across the web.
Interpret sentiment and contextual relevance.
Detect manufactured link patterns with near-perfect accuracy.
Understand brand prominence without a single hyperlink.
Evaluate reputation signals from reviews, mentions, and citations.
Cross-reference information across multimodal sources.
A brand mention in a reputable publication—even without a link—reinforces entity authority. Consistent expert citations validate expertise. These signals are much harder to fake than manufactured links.
The result is a new era where links still matter, but they’re no longer the only star. Authority is now a network of signals.
The rise of entity‑first SEO
As Google relies less on raw link signals, something else has grown in importance: entities — the people, brands, organizations, and concepts behind the content. Google increasingly showcases brands based on who they are and how they’re discussed across the web, alongside their backlink profile.
At its core, entity-first SEO means Google and LLMs are mapping relationships: identifying brands, understanding what they’re known for, and evaluating how they’re referenced in trusted sources.
For example, an outdoor gear company with a modest backlink profile began appearing in AI Overviews for “best hiking backpacks” after repeated mentions in Reddit threads, YouTube reviews, and a few expert roundups. Only some mentions included links, but the brand appeared consistently in trusted, topic-relevant conversations. Google interpreted those unlinked mentions as proof of real-world relevance.
If your brand consistently appears in a positive light in topic-related conversations, AI sees that as proof you’re relevant and trusted. The brands that win now have the strongest entity presence.
PR‑style links + editorial = off-page powerhouse
PR-style links and editorial coverage are earned mentions in reputable publications — the kind that signal real-world authority, not algorithmic manipulation.
Why editorially earned links outperform volume-based link building
Old-school, volume-based link building is less effective as AI improves at detecting manufactured patterns. But high-quality, relevance-driven link building—especially when paired with PR signals—is more valuable than ever.
Editorial PR links from journalists, analysts, and industry voices who choose to reference a brand because it’s newsworthy or authoritative reflect genuine credibility. They’re the digital equivalent of a trusted expert saying, “This brand matters.”
| Authority-Based Link Building | Volume-Based Link Building |
| --- | --- |
| Strong editorial context | Thin or generic content |
| High topical relevance | Limited relevance |
| Natural language anchors | Over-optimized anchors |
| Trusted authors and publications | Sites with weak editorial oversight |
| Clear entity associations | Obvious link-selling footprints |
AI doesn’t just look at the presence of a link; it evaluates the context around it. Models are trained to reward authenticity. Search aims to reward the most authoritative entities.
Creating multi‑signal authority
The real power comes from a combination of signals. As search has evolved, quality has become more powerful than quantity.
Now AI is driving another shift. You can grow traditional, relevance-focused links alongside new brand signals, such as secondary coverage as other sites pick up the story.
This is multi-signal authority — holistic credibility that AI systems are designed to reward. It tells Google and LLMs: you’re known, trusted, and relevant. You need to be part of the conversation.
As powerful as PR signals are, they’re only one part of a larger authority ecosystem. AI evaluates brands through a multi-signal trust profile that determines visibility.
Breaking down the new authority stack
Authority is now defined by the breadth and consistency of signals that validate who your brand is across the web. It’s evaluated much the way humans evaluate each other: by reputation, recognition, expertise, and prominence.
Authority is no longer a single metric tied to links. It’s a network of signals, including:
Brand strength: Rising branded search volume, navigational queries, and direct traffic patterns that signal real-world recognition.
Entity validation: Consistent NAP (name, address, phone) details, schema markup, and unified profiles help confirm your brand and connect references back to the same entity.
Topical authority: Depth of content, subject-matter experts, and external collaboration to show your brand is genuinely knowledgeable about the topics you discuss.
Reputation signals: Reviews, citations, third-party mentions, and sentiment patterns that reflect trustworthiness.
PR signals: News coverage, interviews, podcast appearances, and industry mentions that reinforce your brand’s relevance.
Together, these signals create a holistic authority profile that AI can interpret. The brands that win have the strongest multi-signal authority footprint.
Brand strength is the silent factor
Brand strength quietly outweighs other signals. The data shows it: brands in the top 25% for web mentions average 169 AI Overview citations, while the next quartile averages just 14.
That’s not a small gap.
This aligns with Ahrefs’ analysis of ~75,000 brands. The strongest correlations with appearing in AI Overviews were branded web mentions, branded anchors, and branded search volume—all signals of real-world brand presence.
Consider two competing fitness apps. One has thousands of backlinks from generic listicles. The other is frequently mentioned in Reddit threads, YouTube reviews, and TikTok “day in the life” videos. The second app appears consistently in AI Overviews because AI sees it as part of the real-world fitness conversation, not just the link graph.
The brands dominating AI Overviews have the strongest brand presence, supported by consistent links, mentions, citations, and contextual relevance.
Predictions for 2027 and beyond
By 2027, link building will undergo radical change. The shift from a numbers game to a confidence game will become the norm, and “Share of Authority” or “Share of Voice” will be the new metric.
Here are my top three predictions for what’s next.
Prediction 1: Visibility will be measured by a “Share of Model” metric. AI rewards signal density, not link density.
Link building will expand to include “seeding” information in AI training hubs. Instead of mass outreach to low-tier blogs, strategies will target user-preferred sources like Reddit, LinkedIn, Substack, and GitHub, which LLMs use for high-quality, human-led data.
Brands that appear most often in training data, trusted sources, and high-authority conversations will earn visibility. This is the next step in a world where signals determine authority.
| Traditional Metric | Predicted Metric | Why the Change |
| --- | --- | --- |
| Backlink Count | Entity Citation Frequency | AI values brand mentions as much as links |
| Domain Authority (DA) | Source Reliability Score | Focus on the trustworthiness of the source |
| Anchor Text | Semantic Context | AI reads the intent around the link, not just the text |
| PageRank | Share of Model (SoM) | Success is being the AI’s preferred answer |
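Nobody has standardized how SoM would be computed, but a naive version is easy to sketch: the fraction of tracked AI answers that mention your brand. The answer-log format here is hypothetical:

```python
# Sketch of a naive "Share of Model" (SoM) calculation over tracked AI
# answers. The log format is hypothetical; any LLM-tracking export works.
answers = [
    {"prompt": "best crm for smb", "brands_mentioned": ["BrandA", "BrandB"]},
    {"prompt": "top crm tools", "brands_mentioned": ["BrandB"]},
    {"prompt": "crm with free tier", "brands_mentioned": []},
]

def share_of_model(answers: list[dict], brand: str) -> float:
    """Fraction of tracked AI answers that mention the brand at all."""
    mentions = sum(brand in a["brands_mentioned"] for a in answers)
    return mentions / len(answers) if answers else 0.0

print(f"BrandB SoM: {share_of_model(answers, 'BrandB'):.0%}")  # 67%
```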
Prediction 2: Brands will act as primary newsrooms as proprietary data generates the strongest authority signals.
As AI systems rely more on multi-signal authority, proprietary data becomes one of the most powerful assets a brand can produce. Data isn’t just content — it’s a signal engine. It naturally earns the signals AI trusts most:
PR coverage.
Citations.
Mentions.
Social discussion.
Co‑occurrence with authoritative entities.
Long‑tail references in future content.
Traditional link building still provides foundational authority, but data-driven assets are the accelerant. They create high-trust, high-context signals that AI models weigh heavily.
In a search landscape where visibility depends on how often your brand appears in authoritative contexts, proprietary data is the most scalable way to increase your Share of Authority.
Prediction 3: Unlinked brand mentions will become one of the most valuable authority signals
Traditional contextual links will continue to build the foundation. But beyond that, search engines will track every time your brand appears alongside specific topics. Links will need “semantic context.”
Every mention of your brand in news, podcasts, reviews, forums, social posts, and roundups becomes a signal that strengthens your entity.
AI isn’t replacing link building — it’s expanding it
The future of off-page SEO isn’t a battle between traditional link building and AI-driven signals. It’s the realization that links were always just one signal. Now search engines can understand dozens more.
Traditional link building still matters. It provides the foundational authority, crawl paths, and topical relevance every site needs.
AI has widened the field. It can read context, interpret sentiment, understand entities, and evaluate brand presence.
These signals don’t replace links — they amplify them.
Ask any paid search manager who has tried to get an AI agent to do something genuinely useful with a Google Ads account and you will hear a version of the same story. They exported performance data, pasted it into a chat window, got a solid answer, and then did the exact same thing the next day.
Exporting, pasting, repeating — that isn’t automation. That’s the same manual work you were doing before, performed in a different window.
The AI tools are not the problem. Any of the major ones can do solid analysis when the right data is in front of them.
The problem is getting that data to them live, current, and without a human in the middle copying it across. It’s the reason most PPC accounts in 2026 still run almost exactly the way they did before anyone started talking about agents. Call it the data wall.
The problem hiding behind “we just need better prompts”
Every ad platform is a silo by default. Google Ads records a conversion. Your CRM records whether that lead is qualified. Your inventory system records whether the product behind that click is still on the shelf. None of them talk to each other without deliberate plumbing.
PPC managers have bridged that gap manually for years: weekly exports, cross-referenced spreadsheets, dashboards that were stale by Monday morning.
That was workable when a human was doing the bridging on a set schedule. It becomes a structural problem the moment you hand execution over to an agent that must act in real time.
Take a keyword showing healthy volume, an acceptable CPA, and a CVR in range — all according to Google Ads. In HubSpot, those same conversions are tagged as disqualified leads: wrong territory, no budget, wrong company size entirely. The agent has no way to know. It keeps bidding. The budget keeps spending. And the problem doesn’t surface until someone runs the monthly review.
That is a data access problem, not a prompting problem. Better prompts don’t fix it. But a better pipeline does.
MCP gives your AI agent access to data and skills
The Model Context Protocol (MCP) is an open standard that lets AI clients connect to external tools and data sources without a custom integration for each one. Before MCP, getting an agent to read from Google Ads, your CRM, and an inventory system meant building and maintaining three separate connectors, with the burden compounding every time you added a source.
MCP standardizes the handshake. A platform publishes an MCP server once, and any compatible AI client — Claude, ChatGPT’s agent mode, your team’s custom agent — can connect to it.
Google has already open-sourced its Ads API MCP server on GitHub, which allows agents to run Google Ads Query Language (GAQL) queries directly against live account data. The infrastructure problem that has blocked most real-world agentic PPC work is finally being addressed at the platform level.
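For illustration, here is the kind of GAQL an agent might construct and run through that server: a spend-and-conversions pull by keyword. The query language is real; the authenticated connection is what the MCP server brokers.

```python
# Example GAQL an agent might run via the Ads API MCP server: top keywords
# by spend with conversions, over the last 30 days.
GAQL = """
    SELECT
      campaign.name,
      ad_group_criterion.keyword.text,
      metrics.cost_micros,
      metrics.conversions
    FROM keyword_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.cost_micros DESC
    LIMIT 50
"""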
What opens up when data finally flows
The CRM gap closes first. An agent connected to both Google Ads and HubSpot can pull last month’s conversions, cross-reference them against CRM disposition, identify the keywords producing disqualified leads, and lower bids on those sources — on a schedule, without a human compiling the report. A loop that used to swallow half a day runs automatically.
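Stripped of the plumbing, that loop is a join between two record sets. This sketch uses hypothetical record shapes keyed on the Google Click ID (gclid); in practice, the agent would pull each side through the respective MCP server:

```python
# Sketch of the Ads-to-CRM loop described above. Record shapes are
# hypothetical; agents would fetch each side via its MCP server.
ads_conversions = [
    {"gclid": "abc", "keyword": "enterprise crm demo"},
    {"gclid": "def", "keyword": "free crm"},
]
crm_leads = [
    {"gclid": "abc", "disposition": "qualified"},
    {"gclid": "def", "disposition": "disqualified"},  # wrong territory, etc.
]

def keywords_to_downbid(ads: list[dict], crm: list[dict]) -> list[str]:
    """Keywords whose conversions the CRM later marked disqualified."""
    bad_gclids = {l["gclid"] for l in crm if l["disposition"] == "disqualified"}
    return sorted({c["keyword"] for c in ads if c["gclid"] in bad_gclids})

print(keywords_to_downbid(ads_conversions, crm_leads))  # ['free crm']
```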
Inventory creates the same kind of blind spot. An agent connected to Shopify can check stock levels before weekend campaigns go live. When an SKU drops below the threshold, the corresponding product group is paused before traffic hits a page that no longer converts.
Even the data-pipeline work itself gets faster.
On a recent “PPC Town Hall” episode, Lars Maat — a PPC expert and agency founder in Rotterdam — described building a Python pipeline, with no prior Python experience, that connected the Google Maps API, Google’s Things To Do feature, and Ahrefs to identify nearby attractions, check search volumes, and feed the content to a generator producing optimized landing pages for a parking client.
The whole thing was live in two weeks. The only constraint was getting the right data in front of the AI, not what the AI could do.
Access without guardrails is its own problem
Here’s where things get interesting, and where most of the MCP hype is skating past a real issue.
Write access to a live Google Ads account, in the hands of a probabilistic language model, without institutional constraints, is a new category of risk. An agent that can pause a campaign needs defined parameters: what threshold triggers the action, who gets notified before it fires, which campaign types require human sign-off. Those parameters don’t exist inside the AI tool. They have to be built around it.
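Reduced to a sketch, a guardrail layer might look like the following. The action names, thresholds, and `notify()` hook are all assumptions you would replace with your own policies:

```python
# Sketch of a guardrail layer around agent-proposed mutations. The action
# names, thresholds, and notify() hook are placeholder policies.
REQUIRES_HUMAN_SIGNOFF = {"pause_campaign", "raise_budget"}
MAX_BID_CHANGE_PCT = 20  # reject anything larger outright

def notify(message: str) -> None:
    print(f"[alert] {message}")  # stand-in for an email/Slack hook

def gate(action: str, params: dict) -> str:
    """Decide whether an agent-proposed change runs, waits, or is rejected."""
    if action == "adjust_bid" and abs(params.get("change_pct", 0)) > MAX_BID_CHANGE_PCT:
        return "rejected: bid change exceeds policy limit"
    if action in REQUIRES_HUMAN_SIGNOFF:
        notify(f"approval needed: {action} {params}")
        return "queued for human approval"
    notify(f"auto-applied: {action} {params}")
    return "applied"

print(gate("adjust_bid", {"campaign_id": 123, "change_pct": -15}))
print(gate("pause_campaign", {"campaign_id": 123}))
```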
Advertisers can grant granular permissions to the Optmyzr MCP to stay in control of what the connector is allowed to do on its own, what it can never do, and what it can do with human approval.
On another “PPC Town Hall“ episode, Ann Stanley — founder of Anicca Digital and one of the UK’s most experienced paid media practitioners — described effective AI deployment as a sandwich: humans at the front who understand the goal and can give precise instructions, humans at the back who review the output and decide what ships, and AI handling execution in the middle. The quality of what comes out depends on the quality of what goes in and on whether the middle layer has any constraints at all.
This is where raw API access stops being enough.
Google’s open-source MCP server is a good piece of infrastructure. But it is not a safety net. It will happily run any GAQL query and any mutation the agent constructs, and if the agent hallucinates a campaign ID or picks the wrong lookback window, the ad account absorbs the consequences.
LLMs are probabilistic. Ad platform APIs are not. So, something has to sit in between.
Why Optmyzr built its own MCP
We have spent over a decade encoding how Google Ads actually behaves — not just what the API exposes, but the interdependencies between settings, the edge cases around campaign types, the nuances of what makes a “duplicate keyword” a true duplicate versus a false positive. That work lives inside Optmyzr as a business intelligence layer. Our MCP connector is how we let your AI agent borrow it.
When Claude, ChatGPT, or your team’s custom agent connects to the Optmyzr MCP, it gains access to the same Sidekick capabilities your team uses inside Optmyzr: pulling PPC performance reports with rich filtering and segmentation, surfacing configured and triggered alerts, creating and editing alerts, retrieving merchant feed details, summarizing portfolio health across every active account, and — this is the one most people miss — generating and executing a full Rule Engine strategy from a plain-English description of what you’re trying to accomplish.
That matters for three reasons most DIY setups miss:
Strategy from a sentence, executed inside Optmyzr. The MCP’s Rule Engine function takes a natural-language instruction (“find campaigns where CPA has drifted 20% above target over the last 14 days and draft a bid-adjustment strategy”), generates the corresponding Rule Engine strategy, runs it against your account, analyzes the results, and returns recommendations. The LLM writes the intent. Optmyzr’s deterministic Rule Engine does the work. That is the execution and control layer that raw ad-platform MCPs don’t have.
Cross-account, portfolio-scale analysis. Sidekick, inside the Optmyzr UI, is brilliant at single-account, single-page context. The MCP is where you go when the question is “which of my 80 accounts has negative-keyword waste trending upward this month?” An AI client connected to the Optmyzr MCP can fan out across every account on your profile in a single prompt. This is the single biggest reason agencies plug their agents into the Optmyzr MCP rather than a raw Ads API connection.
Guardrails inherited from Sidekick. Every action taken through the Optmyzr MCP runs under the same permissions and workflow logic as using Sidekick directly. The agent analyzes, strategizes, alerts, and composes proposed changes; humans or existing Optmyzr approval flows ship the changes. That is the “safety sandwich” Stanley described, baked into the product rather than bolted on.
The end result is an AI agent that operates across your portfolio with the reach of an API, the judgment of a platform that has been in this space since before AI agents were a category, and a safety posture that doesn’t require you to build your own circuit breakers.
A practical starting point
If you want to experiment with read-only access across raw ad platforms, Windsor.ai and Zapier’s MCP integration are the fastest on-ramps. If you’re comfortable managing your own guardrails, Google’s open-source Ads API MCP server on GitHub gives you precise GAQL control at the cost of building the safety layer yourself.
If you run client accounts where a misfire is unaffordable — or you just want your AI agent to think across your whole portfolio with the judgment of a senior PPC strategist — the Optmyzr MCP is the fastest path to an agent that is actually safe to give the keys to. It works with Claude Desktop (via custom Connectors or manual config), Claude Code, ChatGPT (via Developer Mode apps), and any MCP-compatible client. And, you can set it up in minutes: generate an API key from the MCP Integration panel in your Optmyzr settings, paste the server URL into your AI client, and your agent is operating across every active account on your Optmyzr profile.
In November 2024, with SE Ranking’s research team, we began a 16-month experiment to test how AI-generated content performs in organic search. We launched 20 websites across different niches and tracked their performance over time.
But we didn’t stop there.
We wanted to look beyond rankings and understand how AI systems discover, interpret, and cite information. So we expanded the project into a more ambitious set of experiments on AI search and LLM visibility.
For the next phase, we created a new fictional brand in a real niche with real competition to see how quickly AI systems would pick it up and whether it could be cited alongside or above trusted industry leaders and government sources.
After the first month, several patterns became clear.
Methodology behind the experiment
We created a fictional brand and published content about it across:
A brand-new website representing the brand, registered specifically for the experiment.
11 additional domains, all over a year old, with prior history and existing rankings.
Across these sites, we tested seven content formats:
Deep guides.
“Alternatives” listicles.
“Best of” listicles.
Review articles.
Comparison (“vs”) pages.
How-to/tutorial content.
Clickbait-style articles.
We started publishing in March 2026 and tracked how five AI systems responded: ChatGPT, Google’s AI Overviews, Google’s AI Mode, Perplexity, and Gemini.
In total, we tracked 825 prompts across different query types and scenarios, which generated 15,835 AI answers during the first month.
For each prompt, we looked at three things:
Whether our brand (or one of our sites) appeared in the AI answer
Whether it was cited as a source
How often it appeared as the main cited source (position 1)
This experiment is still ongoing, and the first month was designed to see how AI systems respond to newly created, fully available information tied to a fictional brand.
Key experiment insights
96% of all AI visibility for our fake brand came from branded searches. Even in a real niche with relatively low competition, a completely new domain had little chance of competing with established brands for broader, non-branded topics.
On queries that only our fake brand could realistically answer, we outperformed established competitors (DT 40+) by as much as 32x and achieved near-exclusive visibility in less than 30 days.
Even without strong authority, the pages that clearly explained who we were, what we offered, and how we were different (e.g., “[Brand Name] Complete Guide” and “About Us”) became the most cited sources from the main domain. This shows that brand positioning can be shaped early in AI search.
Perplexity was the fastest engine to surface new content. Newly published pages usually reached position #1 within 1–3 days of indexation. However, Perplexity often cited additional domains instead of the main brand site.
Google’s AI Mode was the most stable for branded queries tied to unique claims (showing our brand at #1 for an average of 90% of prompts).
Gemini, by contrast, often misidentified the brand. And even for uniquely branded queries, this AI platform provided 60% of AI answers with no citations to our brand.
Deep guides, review articles, and comparison pages generated the highest number of AI citations, while more generic formats like how-to articles and listicles showed minimal impact.
A topical silo made up of one hub page and 10 supporting articles generated no AI citations. Meanwhile, a set of 30 short, repetitive pages (500–750 words each) generated more than 1,800 citations. So, in this test, high-volume content publishing mattered more than internal linking.
Insight 1: New domains may not beat market leaders right away, but they can define their brand narrative in AI search
One of the clearest takeaways from the first month is that a brand-new site has limited chances of competing for broader, non-branded topics, even in a niche with relatively low competition.
AI systems did pick up our fictional brand quickly, but most of that visibility came when the query was already connected to the brand itself, whether through:
the brand name
product-specific claims
or other brand-related angles
Specifically, out of all AI answers, 96% (15,553 out of 15,835) came from branded searches.
Non-branded informational queries produced just 4% of AI answers in total, and even those mostly came through our supporting test domains.
The pattern was even stronger on the main fictional brand site itself. There, we recorded:
10,253 AI answers for branded queries
and just 6 for non-branded ones
That is a 1,700x difference.
This feels familiar because it mirrors classic SEO. New brands still need time to earn trust, build recognition, and compete for broader topics. When AI systems answer general industry questions, they tend to rely on established and authoritative sources.
This is why the strongest results in our experiment came from prompts tied to information only our brand could answer, such as how the product works, how often it updates, and so on.
These queries alone generated 11,430 AI answers with citations to our brand, accounting for 72% of all visibility in the experiment.
The reason is simple: there is no competition.
If a query is something like “Was [Brand Name] originally built as an internal tool?”, only one source can realistically answer it. AI systems don’t need to compare sources, evaluate authority, or resolve conflicts.
That gave our fictional brand a major advantage. Even with no domain authority, it outperformed established competitors (DT 40+) by up to 32x on these queries.
What all this means for marketers and business owners is that when users ask about your brand, AI systems are likely to rely on your website as one of the main sources of information. So, the content they cite should be fully aligned with how you want your brand to be positioned.
Our experiment supports this. The “Complete Guide” page on the main site appeared in 1,799 AI answers (the highest result in the dataset) largely because it consolidated key brand information in one place. The “About Us” page followed with 1,500 AI answers. Together, these were the most cited URLs from our main domain, with LLMs relying on them 3–5 times more often than the additional domains.
In practice, AI systems may learn about your brand quickly, but what they learn depends on what you publish. Your core pages should clearly answer all the questions that are important for your brand: who you are, what you offer, and how you’re different.
This way, you can start shaping your narrative in LLMs even as a new or small brand, before you have the authority to compete for broader industry topics.
Insight 2: AI engines behave very differently
Another strong pattern in the experiment is that the five AI systems do not behave alike. They vary not just in how often they mention the fictional brand, but in how quickly they pick it up, how consistently they cite it, and which domains they prefer as sources.
Google’s AI Mode: The most stable for branded visibility
Google AI Mode was the most reliable engine in the dataset.
Throughout the experiment, it placed our domain in position 1 for branded queries in about 90% of cases. Unlike other engines, it did not show major fluctuations or dependency on other test domains.
If there was one place where direct brand visibility was predictable, this was it.
Google’s AI Overviews: High visibility, lower consistency
Google’s AI Overviews also surfaced our tested domain for branded queries, but the pattern was less consistent.
We saw our brand appear in position 1 for 14 days for some prompts, followed by a drop mid-month that didn’t recover. More broadly, mentions and links for branded queries fluctuated heavily, appearing and disappearing multiple times each week.
Yet when links were included, it accurately described the brand. When no links were shown, it often claimed there was no public information available.
The takeaway here is not that AI Overviews failed to recognize the brand; it did. Rather, that visibility was harder to sustain over time.
Perplexity: The fastest to pick up new content, but not always brand-first
Perplexity was the breakout engine for fresh content.
It picked up newly indexed pages within 1–3 days, which clearly made it the primary driver of early visibility within our experiment.
But this speed comes with a tradeoff.
Instead of consistently citing pages from our main domain, Perplexity often used our supporting test domains as sources.
In early March, our main brand held position 1. But as we published more content on supporting domains, those domains gradually replaced it in AI citations.
By the end of the month, six different domains were being cited: our main brand site and five supporting test domains where we had published additional content about the fake brand.
So while Perplexity increases overall visibility, it doesn’t always send that visibility directly to the main brand site.
ChatGPT: Slower to react, stronger over time
ChatGPT showed the most noticeable progression over time.
At the beginning of March, there were no links or mentions of our brand at all. But as the month progressed, visibility steadily increased.
This growth was especially clear across specific content types:
Unique claims drove the strongest performance, accounting for the majority of visibility, with around 70% of citations appearing in position 1.
Review articles started with zero presence but quickly gained traction, reaching consistent position 1 rankings by March 17.
Comparison (“vs”) articles achieved the highest consistency overall, with mentions on 29 out of 31 days by the end of the month.
Overall, ChatGPT didn’t recognize the brand immediately, but once it did, it began surfacing the brand frequently, especially for branded prompts.
Gemini: Weakest performance and most inconsistent behavior
Gemini was the weakest engine in the dataset and the least consistent.
Initially, it struggled to identify our niche correctly. However, the results improved when we changed how we asked the questions. When prompts were framed as comparisons (“X vs Y”) or reviews, Gemini was much more likely to recognize the brand correctly.
Even then, the results were still limited. In the best-performing scenario (queries based on unique claims about the brand), Gemini failed to include any citations to our brand in about 60% of responses.
Insight 3: Content format matters, but so does volume
For this experiment, we tested seven different content types across both our main site and supporting test sites.
And what we found is that comprehensive, in-depth content earns far more AI citations than shorter articles.
The strongest-performing formats were:
Deep guides (5,000–6,000 words): ~900 AI answers per page
Review articles: ~257 AI answers per page
Comparison (“vs”) articles: ~145 AI answers per page
This does not mean there is one ideal content length or that longer pages automatically perform better. The stronger results likely came from the depth, structure, and completeness of the information these formats provided.
This finding also aligns with our broader research, where we’ve seen that detailed, well-structured content performs better across platforms like AI Mode and ChatGPT.
Pages with narrower or less comprehensive coverage generated fewer citations overall. For example:
How-to articles/tutorials: 22 AI answers per page
Clickbait/skeptical articles: 19
“Best of” listicles: 11
“Alternatives” listicles: 4
As part of the experiment, we also tested a “spam” approach: publishing 30 thin pages (500–750 words each) on one of our test domains.
Individually, these pages were weak (averaging just 63 AI answers per page).
But together, they generated 1,897 total AI answers, which makes it the highest-performing content setup at the domain level.
However, thin content is not inherently “better” because of this result. It just shows that volume can sometimes compensate for quality by increasing the likelihood of retrieval and citation (especially in AI engines like Perplexity that prioritize freshness).
In simple terms, a few strong pages win on quality, but a large number of weaker pages can still win on overall exposure.
Insight 4: Topical clustering alone doesn’t produce AI visibility
One of the most useful negative findings came from the content structure test.
For this part of the experiment, we created a hub page on one of our test domains and linked it to 10 supporting articles. In theory, this setup should have built strong topical depth and semantic reinforcement. All 11 pages were indexed, properly structured, and internally linked.
Yet, they generated zero AI citations.
This is significant because it challenges a common assumption carried over from traditional SEO: that topical clustering automatically improves authority or increases the likelihood of being retrieved.
At least in this experiment, it did not.
That does not mean topic clusters are useless. It means they are not sufficient alone. Internal linking and semantic breadth may help a search engine understand a site, but AI systems still need a reason to retrieve and cite a specific page for a specific answer.
So, do AI engines reward entity coherence more than truth verification?
Even within just one month, the results point to a clear conclusion:
AI systems appear to respond more strongly to consistency, repetition, and availability than to strict verification.
That should not be overstated. It is not that LLMs “believe anything.” But if a claim is:
Structured clearly
Repeated across relevant pages
Phrased like a fact
Available in retrievable source environments
Then AI systems may surface it surprisingly easily.
We also saw this in manual checks of LLM responses in AI Results Tracker. For prompts such as “is [brand] worth it,” some systems responded positively and recommended using our completely unknown fictional brand.
It may not be because LLMs automatically favor every new brand. In some cases, when little or no negative information exists, a system may fill the gap with a neutral or positive-sounding response based on the limited signals available.
But the result is the same: if a completely fictional brand can generate consistent citations and favorable recommendations under certain conditions, then brand narratives in AI search may be more flexible than they seem.
Final thoughts
The most important outcome of this experiment isn’t that a fictional brand achieved visibility.
It’s that visibility followed a repeatable pattern once specific inputs were introduced: branded context, unique claims, diverse content formats, and sufficient presence across different sources.
That leads to two important conclusions.
AI search is not random. It follows identifiable signals, and those signals can be studied, tested, and influenced.
AI is still highly sensitive to manipulation. AIs don’t have their own sense of truth, verification processes, or critical thinking. The same factors that help legitimate brands become visible can also be used to simulate credibility.
If there’s one lesson here, it’s that you can’t assume AI systems will accurately represent your company, product, or category by default.
You have to actively shape the information environment they rely on.
And this is only the first month of results. We’re continuing to collect data, expand the experiment, and monitor how these patterns change over time.
Automation doesn’t fail on its own — it does exactly what it’s trained to do. The problem is that when Google Ads is fed incomplete, misaligned, or overly broad signals, it can optimize toward the wrong outcome faster than most advertisers realize.
In our second installment of SMX Now, our new monthly series, Ameet Khabra of Hop Skip Media will break down a real account where a 417% jump in conversions turned out to be the wrong kind of success. She’ll use that case study to explain the four key ways automation drift enters an account: signal drift, query drift, inventory drift, and creative drift.
You’ll leave with a practical framework for diagnosing drift early, understanding where human oversight matters most, and managing automation more deliberately so it works toward real business goals — not just platform-reported wins.
SEO sits at an interesting crossroads. One camp insists on optimizing for large language models (LLMs) and AI engines, and the other insists on doing SEO the same way we’ve always done it.
But there’s another way to approach it: combining the fundamentals of SEO with an understanding of how LLMs operate and why.
With this approach, you can keep what’s always worked — like on-page SEO and backlinks from reputable sources. Yet you can also look ahead to new tactics, such as optimizing for query fan-out and emerging prompt intents.
Since 2023, and the rise of tools like ChatGPT, Gemini, Claude, and Perplexity, I’ve been researching how AI engines display search results and where SEO is headed.
Here’s what I’ve found, and how you can use it to rethink your approach to a future where AI SEO considers human behavior at its core.
How the Red Queen theory applies to AI search
The Red Queen evolutionary model says that to stay in the same place over time, you must keep changing. But as you adapt to the shifting environment, so does the competition.
As a result, you and your competitors remain the same distance apart. As you evolve into a better predator, your prey adapts in equal measure, leaving the status quo firmly in place.
Essentially, if you don’t adapt, you’ll get eaten.
How to apply the Red Queen principle to your AI SEO strategy
Along the same lines, AI search is a natural progression of what has existed for at least a decade. A hybrid search model has been in place since 2015, with the introduction of RankBrain.
That’s why many of the same SEO tactics still work now. Instead of a fundamental change, a series of big and small shifts has taken place over time.
For example:
LLMs still use retrieval-based search engines.
Content quality and freshness still matter.
Site speed remains crucial for performance.
Intent matching across the major categories is still relevant.
“Optimize for search engines (so retrieval-based AI can cite you) + earn third-party coverage (so the model already knows you before the prompt is typed).”
So, what makes a worthy source for LLMs? What are people using AI assistants to accomplish? Is it to find information, analyze an issue, or create a list of recommendations?
Research from Moz shows that only 12% of AI Mode citations mirror the URLs in organic results. This means AI engines only somewhat follow the traditional rules of SEO. And over time, these changes will likely become more extensive.
As a result, your short- and long-term strategies must work together to remain innovative yet grounded.
Focusing on human behavior and traditional search while working to understand LLMs is how you keep pace with the Red Queen.
Why RAG is essential to understanding AI search
The most effective approach is focusing on where LLMs fall short: their static, limited training data. These systems rely on retrieval-augmented generation (RAG) to fill those gaps without requiring constant retraining.
AI assistants like Google AI Mode and Gemini need RAG to prevent hallucinations and to continue surfacing relevant answers for consumers.
Here, I gave Google AI Mode and ChatGPT the same prompt:
“I am looking for a skincare routine that prioritizes anti-aging. What routines and products should I use?”
Both returned relevant results, but the specifics differed. Google AI Mode returned anti-aging tips and routines, while ChatGPT sourced anti-aging products.
They also used different sources for their information. Where ChatGPT preferred a fresh Today.com source, Google referenced dermatology websites and even Google Shopping listings.
In both instances, the AI assistants needed external sources.
How to optimize for AI search vs. traditional search
For SEO, you need to understand how your content aligns with the limitations of AI engines. They do the searching for themselves and then generate a response for the user, only showing external sources some of the time.
It’s a subtle shift in thinking. Optimizing for search is less about crafting SEO content and more about becoming a trusted supplier for these LLMs — so when people enter a prompt, your brand shows up in the answer.
In that way, the Red Queen evolution involves studying AI answers, learning their quirks, comparing their preferences, and evaluating their most common intents.
Then, you can feed the database. Make sure Google, whose index is the largest database feeding any LLM experience, has sufficient data to keep you in the pool of trusted sources.
Without people, AI assistants have no power. That’s why you have to put people first.
Where are people using AI assistants to create, achieve, build, search, and prompt? And where does it make sense for your brand to be?
Now that the AI search landscape is more competitive, you have to think like a social media professional or a traditional marketer.
A short-term SEO strategy can work now, in the overlap between traditional and AI search. It uses topical authority to deliver results immediately, shortening clients’ time to success. Here’s the short-term plan.
Strengthen your internal linking
“Today, internal links aren’t just distributing authority. They’re defining the semantic structure of your site.”
Internal links help search engines understand your site’s overall structure. AI Mode, for example, is built with vector search models, and entities are crucial to their operation.
Vector search maps your website’s information into a high-dimensional embedding space, allowing algorithms to go beyond keywords and determine the intent behind someone’s search. Internal links help strengthen these signals.
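To make “vector” concrete, here is a toy sketch of the similarity math underneath vector search. The four-dimensional embeddings are invented for illustration; production systems use embedding models with hundreds or thousands of dimensions.

```js
// Toy illustration of vector search scoring: cosine similarity
// between a query embedding and a page embedding.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const queryEmbedding = [0.2, 0.8, 0.1, 0.4]; // made-up numbers
const pageEmbedding = [0.3, 0.7, 0.2, 0.5];  // made-up numbers
console.log(cosineSimilarity(queryEmbedding, pageEmbedding)); // ≈ 0.98, a close match
```

Pages about the same entities land close together in that space, which is why links that reinforce entity connections strengthen the signal.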
“We should link internally and externally to content that reinforces entity connections, because this helps LLMs map embeddings to a wider network of connected entities, hence increasing our authority in the knowledge graph.”
Links have long mattered for search, and they still do. As you develop your long-term SEO strategy, they become increasingly important for surfacing your content in LLMs and AI assistants.
Think in terms of topical coverage versus keyword research
Plan your topical authority through these four lenses:
Topical coverage: Develop pages that cover the overall topic and its subtopics in a relevant, useful way.
Query fan-out: Study the query fan-out behavior for your most valuable search terms to identify gaps in your website content.
Intent: Be ruthless in determining intent by breaking down the categories in your niche that do or don’t have AI visibility potential.
Content quality: Make sure your content demonstrates strong experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) and is optimized for AI SEO.
These are all based on traditional SEO tactics. However, they consider a hybrid or LLM-based approach versus focusing solely on organic search.
Optimize and maintain your site’s technical health
Technical health is rooted in what works for search now: site speed, schema markup, and optimized titles and descriptions.
After all, LLMs are expensive to maintain and run. It’s in their best interest to use resources that are fast and easy to extract information from.
Consider recent site speed findings from Mike King, who notes, “Slow responses can trigger 499 errors, where the AI stops waiting.”
These three short-term goals — topical coverage, internal links, and technical health — are all important for visibility in LLMs and AI engines.
But search has evolved because human behavior has changed. So, the long-term play involves adapting to human behavior.
The long-term future of SEO relies on human behavior
Long-term SEO strategies should focus on the intent and actions of human behavior surrounding AI.
Identify search intent
The four traditional search intents (informational, navigational, commercial, and transactional) are still relevant. But AI search has added a few more.
According to MIT, examples include zero-shot, instructional, and contextual prompts. Grammarly considers other intents, including educational, opinion-based, and problem-solving.
I tend to break down intent into multiple categories of SEO opportunity based on the clients I’m working with. Some common examples include directional, recommendation, local, booking, and shopping.
Consider query fan-out
Once you identify the most relevant search intents, you can hypothesize what people want the generative engine to do. From there, you can do one of two things:
Rule a subset of topics out of your strategy. For example, if you don’t have a local business but the results have local intent, you don’t need to focus on those topics.
Create web pages optimized for LLMs. For example, you can break down a topical category, study its query fan-out results, and reverse engineer what answer engines find valuable based on their behavior.
Say your target customers are U.S. home buyers. They want to know: “Is now a good time to buy a house?”
Plug the prompt into an AI engine and study the AI-generated answer. In AI Mode, for example, you can infer that Google fans out across multiple topics, including market conditions and pros and cons.
ChatGPT, in contrast, looks at trends, forecasts, and seasonality.
Based on the data, develop a content strategy that supports query fan-out behavior.
“By ‘fanning out’ the original query, the system can explore various facets and subtopics simultaneously based on semantic understanding, user behavior patterns, and logical information architecture around the topic, leading to a more complete and contextually rich understanding of the user’s need.”
For example, you can break down the complexities of buyer’s markets, buyer and seller perspectives, or the changes in rising inventories. You could even build a useful tool around mortgage rates or national home price trends.
I use a variety of tools to help analyze query fan-out. The most popular options include Semrush, Ahrefs, and Profound.
Prompting may not even be a concern in the future if AI assistants become more sophisticated at solving problems rather than responding to prompts.
Instead, AI engines may be able to anticipate searchers’ needs and intentions, according to Harvard Business Review. That means it may be increasingly helpful to focus less on prompts and more on problems.
In the absence of keyword research, it will be more important than ever to analyze human behavior, evaluating and pivoting based on how people use AI assistants.
It’s helpful to consider how social media professionals and brand experts think creatively about where their audiences are and how to attract attention while building brand power and recognition.
For example, Rare Beauty and Rhode have both grown their brands with creativity and consumer listening, especially in the last six years.
They’ve put considerable effort into brand campaigns, public relations (PR) campaigns, TikTok content, and in real-life (IRL) experiences that have gone viral globally.
Looking at ChatGPT, the first product recommended for “best makeup gifts for Gen Z” is Rare Beauty.
Google makes similar recommendations, with Rare Beauty and Rhode leading the list. The results are influenced by PR coverage and social media virality.
SEO’s role in the future of search
SEO will have a future as long as there are search engines with AI experiences. While it might look like SEO has become the prey, it’s evolved just as much as the predator has.
Internal linking is one of the most controllable levers in technical SEO. But when tracking parameters are embedded in internal URLs, they introduce inefficiencies across crawling and indexing, analytics, site speed, and even AI retrieval.
At scale, this isn’t just a “best practice” issue. It becomes a systemic problem affecting crawl budget, data integrity, and performance.
Here’s how to build a case study for your stakeholders that shows the side effects of tracking parameters in internal links, and how to propose a win-win fix for all digital teams.
How tracking parameters waste crawl budget
Crawl budget is often misunderstood. What matters isn’t the volume of crawl requests, but how efficiently Google discovers and prioritizes valuable pages.
(Image: crawl budget, oversimplified.)
As Jes Scholz pointed out back in 2022, crawl efficacy indicates how quickly Googlebot reaches new or updated content. Inefficient signals, such as low-value or parameterized URLs, can dilute crawl demand and delay the discovery of important pages.
Tracking parameters like utm_, gclid, fbclid, or custom query strings work well for campaign tracking. But when applied to internal links, they force search engines to process additional URL variations, increasing crawl overhead.
Crawlers treat every parameterized URL as a unique address. This means:
Multiple versions of the same page are discovered.
Crawl paths become longer and more complex.
Resources are wasted processing duplicate content variants.
Search engines must still crawl first, then decide what to index.
How crawl budget feeds into the crawling and indexing pipeline
Tracking parameters can quickly escalate a single URL into many variations by combining different values, creating a large number of duplicate URLs. This leads to:
Redundant crawling of identical content.
Longer crawl paths (more “hops” before reaching key pages).
Reduced discovery efficiency for important URLs.
URLs with tracking parameters getting lost in the invisible long tail of the website.
On large websites, this becomes a critical issue. Googlebot has a limited number of crawl requests per website. Any time spent crawling parameterized URLs reduces the opportunity to crawl the most important pages, even the so-called “money pages.”
(Screenshot: crawl entries for URLs with tracking parameters, via server logs.)
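If you want to reproduce that kind of check yourself, a quick pass over your access logs will surface the pattern. The snippet below assumes a combined-format log at `access.log` with the request path in the seventh field; adjust to your setup.

```bash
# Count Googlebot hits on parameterized URLs, most-crawled first
grep "Googlebot" access.log \
  | grep -E "utm_|gclid|fbclid" \
  | awk '{print $7}' \
  | sort | uniq -c | sort -rn | head
```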
Granted, crawl budget is typically a concern for larger websites, but that doesn’t mean it should be ignored on sites with 10,000+ pages. Optimizing for it often reveals more room for efficiency gains in how search engines discover your content.
Ironically, tracking parameters in internal links can corrupt the data they are meant to measure.
When a user lands on your site via organic search and then clicks an internal link with a tracking parameter, the session may break down and be reattributed.
Anecdotally, Google Analytics 4 resets a session based on campaign parameters, whereas Adobe Analytics does not.
This creates several downstream issues. Attribution becomes fragmented, especially under last-click models, where credit may shift away from organic entry points to internal interactions.
(Screenshot: attribution fragmented across the same pair of URLs.)
As performance is split across URL variants, page-level SEO reporting becomes unreliable and creates a disconnect between organic SERP behavior and what actually happens when a prospect lands on your pages.
One of the most overlooked risks is backlink fragmentation. If internal links include tracking parameters, users may share those exact URLs. As a result, external backlinks may point to parameterized versions of your pages rather than the canonical ones.
This means authority is split across URL variants, some signals may be lost or diluted, and search engines may treat these links as lower value. Over time and in large proportions, this is set to weaken your backlink profile.
(Screenshot: backlink dilution on target URLs by allegedly authoritative domains.)
Worse, backlink fragmentation piggybacks on the tracking problems above: those external backlinks carry internal UTM parameters into external environments, permanently fracturing session attribution and wasting crawling resources.
Why URL bloat slows pages and weakens AI access
Using UTM parameters in your internal links is more than just a crawl overhead. It also strains your caching system.
Each URL with parameters is essentially a different page with its own cache entry. That means the same content may be fetched and processed multiple times, increasing load on both servers and CDNs.
As the web is increasingly consumed by aggressive AI bots, having internal links with tracking parameters leaves traditional web crawlers and RAG-based systems wasting bandwidth on duplicate cache entries for pages that serve the same purpose.
At the same time, many of these systems rely heavily on cached versions and avoid rendering JavaScript due to architectural and cost constraints at scale.
This makes URL hygiene a foundational requirement, not just a technical preference.
On the cache front, Barry Pollard recently suggested a smart workaround that Google has been testing for a while.
Provided that removing those parameters results in identical content, helping the browser reuse a single cached response can dramatically improve Time to First Byte (TTFB), a metric that directly affects your Core Web Vitals.
Some CDNs already strip UTM parameters from their cache key, improving edge caching. However, browsers still see each parameterized URL as a separate asset and will request them one by one.
The No-Vary-Search response header closes this gap by aligning browser caching behavior with CDN logic. Implementing it allows browsers to treat URLs with specific query parameters as the same resource. Once set, the browser excludes the specified parameters during cache lookups, avoiding unnecessary network requests.
In practice, the header signals which parameters to ignore when determining cache identity. The only caveat is that it’s currently supported in Google Chrome 141+, with support coming in version 144 on Android. If most of your organic traffic comes from Chromium-based browsers and you run paid campaigns, this is worth adding now.
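As an illustration, a response header telling the browser to ignore common tracking parameters during cache lookups might look like this (the parameter list is an example; include whichever parameters your campaigns actually append):

```http
No-Vary-Search: params=("utm_source" "utm_medium" "utm_campaign" "fbclid")
```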
The structural fix: Move tracking out of URLs and into the DOM
While canonicalization to the clean URL version isn’t a long-term solution, it remains the standard requirement. If you’re stuck in such a position, it’s likely a symptom of deeper architectural challenges at the intersection of SEO, IT, and tracking.
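If that’s where you are today, the baseline is at least a self-referencing canonical on the clean URL (the domain and path below are placeholders):

```html
<link rel="canonical" href="https://www.example.com/pricing">
```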
Either way, the preferred solution is to move measurement from the URL layer into the DOM layer.
This can be achieved successfully using a good old HTML workaround: data attributes.
This configuration allows tracking tools (e.g., tag managers) to capture click events and user interactions without altering the URL. Plus, it ensures internal links point to the canonical version without introducing duplicate cache entries.
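Here is a minimal sketch of the pattern. The attribute names and the `sendToAnalytics` helper are placeholders; in practice, a tag manager’s click trigger would read the same attributes.

```html
<!-- Clean internal link: campaign context lives in data attributes, not the URL -->
<a href="/pricing" data-track-source="homepage-hero" data-track-campaign="spring-sale">
  See pricing
</a>

<script>
  // Placeholder for your real analytics call (e.g., a dataLayer push)
  function sendToAnalytics(eventName, payload) {
    console.log(eventName, payload); // swap for your tracking tool
  }

  // Capture clicks on tracked links and report them without touching the URL
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a[data-track-source]');
    if (!link) return;
    sendToAnalytics('internal_link_click', {
      source: link.dataset.trackSource,     // "homepage-hero"
      campaign: link.dataset.trackCampaign, // "spring-sale"
      destination: link.href,               // clean canonical URL
    });
  });
</script>
```

The link itself stays canonical for crawlers, browsers, and CDNs, while the measurement happens entirely in the click handler.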
Tracking parameters in internal links are a legacy workaround, often rooted in siloed teams and flawed site architecture.
However, they create downstream issues across the entire organization: wasted crawl budget, fragmented analytics, diluted backlink equity, and degraded web performance. They also interfere with how both search engines and AI systems access and interpret your content.
The solution isn’t to optimize these parameters, but to remove them entirely from internal linking and adopt a cleaner, more robust tracking approach.
A good old HTML trick sounds like just the right fix to win over traditional search engines, AI agents, and especially your stakeholders.
Note: The URL paths disclosed in the screenshots have been disguised for client confidentiality.
Ranking and visibility are no longer the same thing. For 20 years, SEO teams optimized for SERP position. Higher rankings meant more visibility, more clicks, and more traffic. That relationship is breaking down.
Earlier this year, Ahrefs found that only 38% of pages cited in Google AI Overviews also ranked in the traditional top 10. Eight months earlier, that number was 76%.
The implication is straightforward: being highly ranked no longer guarantees being seen.
In AI-generated answers, visibility is determined by inclusion — and by how your brand is represented when it appears. That representation is determined by a different set of signals.
How visibility works in AI search: 4 signals that matter
Four distinct patterns determine how brands appear inside AI-generated responses:
Mention order.
Depth of explanation.
Authority signals.
Comparative positioning.
1. Mention order
When an AI model lists three CRM options, the order matters. Up to 74% of users choose the AI’s top recommendation, according to a Growth Memo and Citation Labs AI Mode study.
This reinforces how heavily people rely on the first option presented.
About 26% of users overrode the AI’s order entirely when they recognized a brand they already knew. Still, this marks a shift from how users behave in traditional search, where 56% of users built their own shortlist from multiple sources. In AI Mode, 88% took the AI’s shortlist without checking further.
The AI’s curated answers carry that much weight. But mention order isn’t stable. SE Ranking’s August 2025 analysis found that when you run the same query three times, AI Mode only overlaps with itself 9.2% of the time.
The sources change. The order changes, sometimes dramatically.
The lesson: Mention order creates an advantage, but it isn’t deterministic. Brand recognition can trump position.
2. Depth of explanation
Not all mentions are created equal. Some brands get a single sentence. Others get a full paragraph explaining their strengths, use cases, and differentiators.
The difference comes down to how much citation-worthy information AI systems found about you.
When Semrush announced its AI Visibility Awards in December 2025, it analyzed more than 2,500 prompts run through ChatGPT and Google AI Mode. Category leaders like Samsung in consumer electronics didn’t just appear more often. They got more detailed descriptions when they did appear.
Challenger brands like Logitech in gaming accessories showed up, too, but typically with shorter mentions focused on a single differentiator.
The top 4.8% of URLs, those cited 10+ times by ChatGPT, share a common trait. They’re comprehensive pages that answer “what is it,” “who uses it,” “how to choose,” and “pricing” in a single URL.
Length seems to matter, too. Pages above 20,000 characters average 10.18 citations each. Pages under 500 characters average just 2.39.
The lesson: If AI systems have thin data about your brand, you get thin mentions.
3. Authority signals
AI systems don’t just cite sources. They characterize them by tone, which reveals how much confidence the AI has in your authority.
HubSpot’s AEO Grader, launched in early 2026, classifies brands into competitive roles: leader, challenger, or niche player. These are positioning labels that determine how persuasively AI presents you.
Semrush’s awards data showed that category leaders have less than 20% monthly volatility in AI share of voice. Once AI systems establish you as a leader, that perception tends to stick.
The language reflects this correlation.
Leaders get described with confident phrasing, such as “the industry standard” and “widely recognized.”
Challengers get “growing alternative” and “gaining traction.”
Most brand mentions in AI answers are neutral or positive. But neutral isn’t the same as enthusiastic.
The difference between “also offers project management features” and “considered one of the top three project management platforms” is authority signaling.
The lesson: AI doesn’t just say your name. It frames your reputation.
4. Comparative positioning
Comparative positioning is the closest thing to traditional rankings in AI answers: how you’re positioned when multiple brands appear together. But instead of Position 1 vs. Position 2, it’s “better for X” vs. “better for Y.”
In banking, Bank of America leads with 32.2% visibility, SoFi follows at 25.7%, and LightStream captures 20.2%.
In healthcare, Mayo Clinic dominates at 14.1%.
Kevin Indig’s Growth Memo research revealed a critical nuance. When AI positioned a brand as “best for startups” versus “best for enterprises,” users self-selected based on that framing, even if both brands technically served both segments.
The lesson: You’re not competing for position 1 anymore. You’re competing to own a specific positioning niche in AI’s mental model of your category.
How traditional rank correlates with AI visibility (barely)
We already covered the 38% overlap stat. The interesting question is why it dropped so fast. The answer: query fan-out.
When an AI Overview triggers, Google doesn’t just evaluate the top-ranking pages for the user’s actual query. It breaks the question into multiple sub-queries, retrieves relevant passages from across its index, and synthesizes them into a single response.
Your page might rank No. 1 for “best project management software” and still get skipped because the AI pulled from pages ranking for “project management for remote teams” or “integrations with Slack” instead. One query from the user. A dozen queries behind the scenes.
SE Ranking’s February 2026 research found that Google’s upgrade to Gemini 3 replaced approximately 42% of previously cited domains and generates 32% more sources per response than its predecessor. Traditional ranking positions became even less predictive overnight.
Where AI traffic actually goes
Semrush’s analysis of 17 months of clickstream data reveals an unexpected pattern: Over 20% of ChatGPT referral traffic goes to Google. That share rose from roughly 14% at the start of the study to more than 21% by early 2026.
The biggest beneficiary of ChatGPT’s growth is Google.
Users go to ChatGPT to get an answer, then head to Google to confirm findings or research brands they just discovered. For users, they’re complementary steps in a single journey.
Most ChatGPT prompts don’t match traditional search language. Between 65% and 85% of prompts couldn’t be matched to any traditional search keyword in Semrush’s database of 27 billion keywords.
A traditional Google search: “best project management software.”
The ChatGPT equivalent: “I manage a 12-person remote engineering team, and we’re constantly missing sprint deadlines. What should I change about our weekly standups?”
That level of specificity doesn’t exist in keyword databases — and it’s becoming more common.
Measuring visibility in AI answers
If position doesn’t matter the way it used to, what does?
Citation frequency replaces rankings as the primary metric. How often does your brand appear when AI systems answer questions in your category?
Brand mention rate measures penetration. If AI generates 100 answers about your category, what percentage mention your brand? Scores above 70% indicate strong AI search performance. Below 30% signals significant visibility gaps.
Recommendation rate matters more than mention rate for B2B SaaS and high-consideration purchases. Being recommended carries more weight than being mentioned in a general list.
Sentiment and context determine whether mentions drive action. Track how AI describes you: premium vs. cheap, advanced vs. beginner, reliable vs. experimental.
Citation position within answers creates measurable advantage. Unlike traditional rankings, you can be first-cited without being first-ranked organically.
The measurement infrastructure you actually need
Traditional rank trackers can’t measure these signals.
The 2026 measurement model requires parallel tracking. Traditional SEO metrics still matter for the portion of search that remains blue links. AI visibility requires tracking how often your brand appears and how it’s represented in AI-generated answers.
A new category of tools has emerged to support this shift.
For citation tracking, platforms like Profound, Gauge, Peec AI, and Scrunch monitor which URLs get cited across ChatGPT, Perplexity, Claude, and Google AI Overviews.
For brand analysis, tools like Semrush’s AI Visibility Toolkit and AthenaHQ measure how often your brand is mentioned, how it’s described, and whether it’s recommended.
For competitive positioning, Bluefish and HubSpot’s AEO Grader evaluate how AI systems categorize your brand relative to competitors.
None of these tools replace traditional SEO infrastructure. They supplement it.
The ranking obsession isn’t going away entirely. Traditional search still drives traffic. But measuring success solely through rankings misses the larger shift.
AI answer engines now act as gatekeepers, surfacing only the brands they consider citation-worthy.
Visibility depends on how often you’re included, how you’re described, and how you’re positioned relative to competitors.
Traditional rank trackers can’t capture that. It requires a different measurement model. That’s what determines visibility now.
The March 2026 core update is, in Google’s words, designed “to better surface relevant, satisfying content for searchers from all types of sites.” This confirms the simplest truth in search: people use Google to get answers.
Whether it’s solving a problem, learning something new, or making a decision, searchers want content that is genuinely helpful in their busy, on-the-go lives. If your content does that, it succeeds. If it doesn’t, no amount of SEO tricks, hacks, or magic bullets will get your content to show up on page one, let alone in AI Overviews.
How modern search systems surface helpful content
AI Overviews went from appearing for just 6.49% of queries in January 2025 to 15.69% in November 2025, according to a Semrush study. Depending on the source, AI Overviews today appear for 25–50% of queries.
It’s clear that search engines and LLMs are working together more efficiently today than just a year ago. Fast forward another year, and we can only imagine.
For any SEO focused on creating helpful content and understanding user intent, it’s a truly exciting time to be in the industry. Your genuinely useful content can be surfaced in AI Overviews using retrieval-augmented generation (RAG) and query fan-out.
RAG: Instead of just relying on what it “knows,” AI looks for relevant information across multiple sources before answering a query
Query fan-out: One search query can be broken down into multiple related queries behind the scenes, helping AI and search engines build a more complete, useful response
Entire papers have been written on these two concepts alone. The TL;DR is that SEO today is about more than just keywords or counting backlinks. Modern search is designed to connect searchers with content that actually answers their questions and satisfies user intent.
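To make the mechanics concrete, here is a conceptual sketch of how the two ideas compose. It is pseudocode-level JavaScript: `generateSubQueries`, `retrieve`, and `generateAnswer` are hypothetical stand-ins, not any vendor’s actual pipeline.

```js
// Conceptual sketch: answer a query with query fan-out plus RAG
async function answerQuery(userQuery, llm, searchIndex) {
  // Query fan-out: decompose the query into related sub-queries
  const subQueries = await llm.generateSubQueries(userQuery);

  // RAG: retrieve relevant passages for each sub-query from the index
  const passages = [];
  for (const q of [userQuery, ...subQueries]) {
    passages.push(...(await searchIndex.retrieve(q, { topK: 5 })));
  }

  // Synthesize one grounded answer from the retrieved sources,
  // citing the pages the passages came from
  return llm.generateAnswer(userQuery, passages);
}
```

The practical takeaway: your page doesn’t have to match the user’s literal query to be cited. It has to match one of the sub-queries the system generates behind the scenes.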
Why this raises the bar for SEO in 2026 and beyond
These systems, and those still being implemented (see Google’s blog on TurboQuant), are getting better at recognizing and dismissing thin, duplicate, or superficial content. Pieces that simply restate what someone else has already said online, lack originality, and fail to demonstrate legitimate real-life experience will continue to struggle to rank.
Depth, clarity, and expertise have always mattered, but SEOs who want to continue to succeed in 2026 and beyond are going to have to double down on these factors:
Depth: This doesn’t mean write as much as you can on the topic. Gone are the days of fluffy, keyword-stuffed articles. Depth in 2026 means SEOs and content creators should address the searcher’s main question and related follow-ups.
Clarity: Searchers are busy. They want quick answers. Make your content easy to scan and understand.
Expertise: Demonstrate real-world knowledge and experience your audience can trust.
For many SEOs, this is a welcome shift. It’s not about just checking off boxes anymore.
Sure, we still have to do those things. But the bar for what constitutes good SEO is being raised far beyond the basics. When search engines evaluate content today, they’re looking for signals that SEOs and content creators are providing real value to searchers.
Why visibility matters more than clicks for local SEO
Small, local, or service-based businesses that rely on SEO-driven leads for revenue can use these same strategies, too. While success isn’t measured using the same metrics as it was just a couple of years ago, the result of good SEO remains: Get the business recommended before the competition for as many searches as possible.
Two years ago, this meant clicks. Today, it means visibility. AI platforms like ChatGPT, Gemini, and AI Overviews often recommend businesses without linking to websites directly, if at all.
A few tools have been developed to measure AI metrics, but these can get pricey, and as Elizabeth Rule said, “Measuring visibility is like trying to measure a wave with a ruler.”
This is why maintaining strong communication between stakeholders and the SEO team is so important. When success can’t be measured simply, a simple question of “how’s business going?” matters now more than ever. Beyond user intent, SEOs need to understand user behavior, mood, and temperament.
What ‘helpful content’ looks like in practice
Here are five tips to get you started on creating content that is genuinely helpful:
1. Answer follow-up questions
Think beyond the initial query. What will readers ask next?
One of my favorite places to do research for this is the People Also Ask (PAA) section on the SERP. Say you’re writing about herniated disc treatment. Just Google “herniated disc treatment” and use the PAA feature to brainstorm more questions your audience may ask about the topic you’re writing about. The more questions you click, the more ideas it’ll generate.
2. Show expertise and experience
E-E-A-T is an SEO hill I will die on because it works. Share your knowledge, case studies, testimonials, or firsthand insights. This builds trust when done right and when you’re creating for people, not search engines.
3. Structure your content for skimmers
We’d all love to believe that everything we write is being read word-for-word. It’s not. People skim. They’re looking for an answer while they’re doing other things.
This is why clearly structured web pages are so important on both mobile and desktop. Use headings, bullet points, and concise paragraphs to help readers quickly find answers.
4. Be authentic
Authenticity sounds like a buzzword (and maybe it is), but people can tell when you’ve used AI to write something or when you’re just publishing content for SEO.
Much as it pains me (an English major who loves to read long novels and write dissertations) to say, no one cares about your personal anecdotes or how many adjectives you can think of for your “superior” service. They just need an answer to the question they searched.
Avoid fluff or filler. Real-world, practical content resonates better than generic advice.
If someone called and asked, “How long does it take to change the water heater in my 1950s home?” you wouldn’t need 1,500 words to answer them. The content you create on the internet should be the same.
5. Ask ‘who, what, and how?’ about your content
If you’ve been paying attention to GEO/AEO/SEO for AI, this might sound familiar to you as a little something called semantic triples. This sounds intimidating at first, but it’s really just sixth-grade English.
A semantic triple answers who, does what, for whom (or how). Remember diagramming sentences? It’s the relationship between the subject, predicate, and object. It can be any subject, predicate, and object:
The plumber installs water heaters in Dallas
The bakery bakes wedding cakes for couples
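Structured data is one way to hand a triple like that to machines explicitly. Here is a hedged sketch using schema.org markup for the plumber example; every name and value is a placeholder:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Plumber",
  "name": "Example Plumbing Co.",
  "areaServed": "Dallas",
  "makesOffer": {
    "@type": "Offer",
    "itemOffered": {
      "@type": "Service",
      "name": "Water heater installation"
    }
  }
}
</script>
```

Subject (the plumber), predicate (installs), object (water heaters in Dallas), now machine-readable.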
I first heard about semantic triples from Mike King during SEO Week 2025 when he broke down his concept of relevance engineering. If you haven’t watched his video on this topic, I highly recommend it.
The basic idea is that SEO is about your audience:
Who are you talking to?
What do they need?
How do you reach them?
A semantic triple answers these questions. It provides structure and clarity. It’s the “Who, What, and How” that Google told us about with the HCU documentation. It’s also genuinely valuable information for searchers.
Knowledge is your superpower. You’re the only person who can tell your story, explain your process, and show readers why your business or brand matters.
The most reliable SEO strategy remains the same with each new core update from Google: Create content that genuinely helps searchers.
Focus on the problems your audience is trying to solve, answer their questions fully, and share your expertise. Thin or derivative content won’t cut it in a world of AI-driven search and retrieval systems.
Google and AI platforms are trying to do the same thing searchers are doing: find the most helpful content. If you respond to that need, your content will rise to the top, no tricks, hacks, or shortcuts necessary.
More than 40% of agentic AI projects will be canceled by the end of 2027. That is a prediction from Gartner published in June 2025, based on a poll of more than 3,400 organizations actively investing in the technology.
The reason cited is not that the agents do not work. It is that the humans deploying them are making the wrong decisions. “Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” according to Anushree Verma, senior director analyst at Gartner.
Organizations are deploying agents without a clear strategy, without understanding the complexity, and without the governance to manage what happens when something goes wrong.
In other words, the agent is only as good as the human behind it.
This matters enormously for marketing. AI agents in marketing are real, accelerating, and in many cases necessary. Agents that select audiences. Agents that generate content. Agents that optimize send times, choose offers and orchestrate entire customer journeys autonomously, continuously and at a scale no human team could match. The capabilities are here today and growing rapidly.
But Gartner’s data reveals a warning, and marketing leaders who miss it will find themselves on the wrong side of that 40%.
FOMO causes agent failure
The failure rate Gartner describes is not random. It starts with fear.
Fear of being left behind. Fear of watching competitors move faster. Fear of being the CMO who did not act when everyone else did. That fear is driving organizations to deploy agentic AI, not because they have a strategy, but because they cannot afford to be last.
The result is agents built on broken workflows. Agents fed with poor data. Agents operating without the governance structures that keep them aligned with business goals. The agents execute… the wrong things, in the wrong ways, at the wrong times.
FOMO is not a strategy. And in the agentic era, it is an expensive mistake.
Agent washing
Gartner identified a widespread trend it calls “agent washing”: vendors rebranding existing chatbots and automation tools as agentic AI without delivering genuine autonomous capabilities. Of the thousands of vendors claiming agentic solutions, Gartner estimates only around 130 offer real agentic features. Marketing teams investing in the rest are not getting agents. They are getting dressed-up automation with an agentic price tag.
The consequences go beyond wasted budget. Gartner predicts that in 2026, one-third of companies will harm customer experiences by deploying AI prematurely, eroding brand trust and damaging both acquisition and retention.
A personalization agent that misreads a customer. A content agent that violates compliance. A journey agent that floods a churning customer with offers at exactly the wrong moment. These are the predictable outcomes of deploying autonomous systems without the human judgment to direct them.
Half of all organizations are watching their people get dumber because AI is always available to think for them. Quietly. Gradually. Until the day the algorithm is wrong and nobody in the room can tell.
In marketing, that is a crisis. Marketing requires judgment — the ability to ask not just what the data says, but what it means. Not just whether a campaign worked, but why. Not just whether to accept an AI recommendation, but whether it reflects the brand, the moment and the relationship the company is trying to build.
Those questions cannot be delegated to an agent. They require a human being scrutinizing what a machine thinks is right.
The most dangerous marketer in the agentic era is not the one who rejects AI. It is the one who accepts everything it produces without question.
Agents cannot be trusted to ask the right questions
An agent can optimize what it has been given. It cannot question whether it has been given the right thing.
It can personalize a message based on behavioral signals. It cannot decide that the right move is to say nothing at all… to give a customer space, to protect a relationship rather than extract from it.
It can generate a thousand content variations and test them. It cannot feel the difference between a message that converts and a message that connects. It cannot sense when a campaign that performs well in the data is quietly damaging the brand.
It can execute a journey flawlessly. It cannot design one that reflects what customers actually want from this brand, at this point in their lives.
These are not limitations that will be solved by the next model release. They are structural. AI is trained on the past. The irreducible human job in marketing is to bring judgment about what should happen next, even when the data does not yet exist to support it.
The marketer as manager of agents
The right mental model for the agentic era is not human versus machine. It is human plus machine, with the human in charge.
That is the foundation of Positionless Marketing. For decades, marketing teams operated as an assembly line with handoffs. Positionless Marketing breaks that model by giving marketers three transformative powers: Data Power to immediately discover customer insights for precise targeting and hyper-personalization, without waiting for engineers; Creative Power to create channel-ready assets like copy and visuals, without waiting for creatives; and Optimization Power to run campaigns that optimize themselves through automated journeys and testing, without waiting for analysts. Handoffs are eliminated.
The Positionless Marketer is a multidisciplinary thinker who deploys AI agents to go beyond traditional positions. Agents handle what used to require waiting for three different teams, eliminating the assembly line. The marketer is no longer waiting on anyone. They are thinking bigger, moving across disciplines while keeping human judgment at the center of every decision the agents make.
This is a promotion, not a replacement. But it comes with real demands: marketers who can think strategically, not just operationally; who can evaluate AI output critically, not just accept it; and who can take accountability for what the agents do in their name.
Gartner’s Daryl Plummer stated it directly: organizations should prioritize behavioral changes alongside technological changes as first-order priorities. The technology is ready. The question is whether the humans in the marketing organization are.
The window is narrowing
The organizations that will win the next decade of marketing are not the ones that deploy the most agents. They are the ones that build the human capability to direct them well. Gartner’s 40% prediction is not a warning to slow down. It is a warning to be deliberate. The difference between an agentic marketing operation that compounds value over time and one that wastes budget, violates policy, and erodes customer trust is not the technology. It is the human judgment sitting above it.
Marketing teams need to face facts in the agentic AI era: the agent is only as good as the indispensable human behind it.
LinkedIn is rolling out Off-Platform Event Ads, giving marketers a new way to promote events without needing a native LinkedIn Event Page.
What’s happening. The new format allows advertisers to run Event Ads that link directly to external destinations — such as webinar platforms, landing pages or livestream sites — instead of keeping traffic on LinkedIn.
This marks a shift from platform-contained experiences to more flexible, marketer-controlled journeys.
How it works. Marketers can create an Event Ad using a third-party URL, add event details like date and format, and choose from objectives including awareness, engagement, traffic or lead generation.
Clicks send users directly to the external event page, while performance metrics remain trackable in Campaign Manager.
Why we care. Until now, promoting events on LinkedIn often meant working within platform constraints, which could fragment the user journey and limit control over registrations.
Off-Platform Event Ads remove that friction by allowing marketers to tap into LinkedIn’s targeting while keeping traffic, data and conversions on their own platforms — making it easier to scale campaigns and maintain a consistent experience.
What to watch:
Whether this drives higher registration rates compared to native Event Pages
How advertisers balance LinkedIn targeting with off-platform conversion tracking
If LinkedIn expands similar flexibility to other ad formats
Availability. Off-Platform Event Ads are currently rolling out globally and are expected to be available to all advertisers by May 6.
Bottom line. By opening Event Ads to off-platform destinations, LinkedIn is making it easier for marketers to scale event promotion — without forcing them to build inside its walls.
Every brand makes claims, and somewhere in the archive of its digital life, there’s proof to back them up.
The AI assistive engine (the systems behind ChatGPT, Perplexity, and Google AI Overviews) holds that proof too, scattered across its training data and retrieval index, alongside competitors’ claims.
The audience has a need but no vocabulary to bridge the gap between what they want and what the brand or the engine already knows.
All three lack the same thing: a frame, the interpretive context that turns scattered information into a narrative worth transmitting (for the brand), citing (for AI), and acting on (for the user).
This is where the claim-frame-prove (CFP) process comes in. Claim and prove are mechanical work the engine can verify; frame is the strategic move only the brand can make.
Why AI can’t make the leap your brand needs
CFP runs claim by claim, fact by fact. The brand’s full picture is built from many CFP cycles compounding: each claim framed and proven becomes a fact in the corpus, and the cumulative weight of those facts is what positions the brand to dominate.
AI can join known facts, but it can’t leap to a new one that benefits your brand. Given Facts A and B, AI can derive Conclusion C that follows logically. That’s standard inference, and the engines do it well.
What they can’t do reliably is the move a creative human makes constantly: looking at A and B, reaching to a non-obvious J that’s commercially beneficial, and then constructing the logical bridge so the engine can use it.
From here, I’ll use the alphabet as the scale: C is the obvious commercial reach, J is the non-obvious, and Q is the most ambitious leap a brand can sensibly make.
Two examples from my own positioning make the difference concrete.
A + B → C
A: I coined answer engine optimization in 2017. B: I run a brand engineering company.
The obvious conclusion (C) the engine produces unprompted: my work is connected to AEO implementation. True, useful, but close to the facts, and the engines will get that right without my help.
A + B → J
From the same A and B, reaching further. J: I’m the practitioner most likely to have learned what works and what doesn’t from a decade of operational data nobody else has.
Same starting facts, completely different commercial outcome, and the engine doesn’t make this leap on its own. It needs me to build the bridge for it.
That second move — from A and B to J — is strategic claim bridging: selecting which non-obvious J benefits the brand from the space of derivable conclusions, and then constructing the logical connection from accepted facts to that chosen J so the engine transmits it as fact rather than as the brand’s opinion of itself.
Two operations packed into one move: the strategic part is choosing J, and the bridging part is making the inference watertight.
AI won’t choose what’s best for your brand
AI doesn’t choose the J that’s good for your brand. You do. That choice, and the bridge that proves it, is the work AI has no commercial stake in, and a future (more capable) AI without your stake just produces a more sophisticated version of the same problem.
Whether AI can be creative is contested ground. The narrower claim holds regardless: even when AI produces a novel-looking output, it has no commercial intent guiding which J to derive. From the same A and B, an AI could just as easily produce a damaging J as a beneficial J. It has no skin in your commercial game.
A creative marketer does both things at once: reaches imaginatively to a non-obvious J, and chooses the J that serves the brand. That’s the move AI engines can’t reach, and it’s why the frame has to come from someone placing the information online (the brand, a client, or an independent source).
The disposition that lets you see this work is what I’ve been calling “empathy for the machine,” a phrase I started using in client consulting around 2011-2012 (originally as “empathy for the beast,” retired once I got more serious about the business side of digital marketing), and first published formally in 2019.
It’s the discipline of stepping outside your own perspective to see what the machine actually struggles with. That discipline applies to anything in SEO/AAO — in this case, specifically to how the machine grounds, attributes, and synthesizes claims about your brand.
Unfortunately, brands all too often produce material aimed at human readers and assume the machine will figure out the rest. With a little empathy for the machine, brands can instead design material the machine can use as its own interpretation (feed the beast).
This produces three different levels of brand-AI communication, each one building on the previous.
Levels 1 and 2 are the foundations every brand needs in place. Level 3 is where framing enters, and it’s where this article aims to change your thinking.
Level 1: Scattered proof of claims
Proof exists, but there’s nothing linking it to the claim. This is where most brands sit, and it leaves the engine to perform inference over whatever it can find.
The brand publishes Claim A on its website. Proof Z exists somewhere else: a conference program, an industry database, a Wikipedia citation, and a trade publication from four years ago. The brand assumes the engine will connect the two.
There’s no copy stating the connection, no hyperlinks pointing from claim to proof, and no schema encoding the relationship. To connect them, the engine has to perform inference: can it derive the conclusion that this brand is credible for this claim, given scattered premises across different domains, formats, and varying source authority?
Whether it can depends almost entirely on how confidently the machine already understands the entity, and that runs on three sub-levels.
If the machine has no confident understanding of the brand, and the proof isn’t explicitly linked, no connection happens. The proof might as well not exist.
If the machine has no confident understanding of the brand, but the proof is explicitly linked, the connection happens because the link does the work that the entity resolution couldn’t.
If the machine has a strong, confident understanding of the brand, the connection happens even without the link, because a well-resolved entity shortens the logical distance the machine has to traverse (linkless links, as I’ve called them).
The link still adds confidence (more than one path always does), but it’s no longer load-bearing as the entity carries the work.
The implication runs through the rest of the pipeline. Entity clarity in the knowledge graph isn’t a nice-to-have sitting alongside content work. It’s the variable that decides whether your content work has to carry all the weight or almost none of it.
Any proof that isn’t explicitly linked is missed at sub-level one, caught at sub-level two, and confidently embedded at sub-level three.
When entity understanding is weak, the result is familiar to anyone tracking AI visibility: a meritorious brand appears occasionally, and when it does, the wording is hedged, and the brand sits mid-to-low-pack. The engine did the best inference it could, and, being a responsible probability engine, it hedged.
Worse, opportunities for inclusion are throttled across adjacent queries the fact should have pulled the brand into, because the fact was never connected to the proof that would have warranted the inclusion in the first place.
What happens when Level 1, scattered proof of claims, is done well? Brand X is infrequently mentioned, unconvincingly, as a provider of Y.
Level 2: Connected proof of claims
Here, the brand explicitly connects claim to proof through a combination of copy, hyperlinks, and schema. It also closes the inference gap by providing what the engine would otherwise have to figure out.
The brand publishes Claim A and explicitly connects it to Proof Z, with the logical thread stated in copy, anchored by hyperlinks to the proof, and encoded in schema: a fact with a significant number of supporting pieces of evidence joined to it three ways, leaving nothing for the engine to infer.
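What does the schema leg of that connection look like in practice? Here is a minimal sketch, not a prescription: it assumes JSON-LD emitted by a small build script, uses schema.org’s Claim type with standard CreativeWork properties (author, citation, sameAs), and every name and URL below is a hypothetical placeholder.

```python
import json

# Minimal JSON-LD sketch connecting a claim to its proof. All names and
# URLs are hypothetical; adapt the types and properties to your own claim.
claim_markup = {
    "@context": "https://schema.org",
    "@type": "Claim",
    "text": "Brand X has led category Y since 2017.",  # Claim A, also stated in copy
    "author": {
        "@type": "Organization",
        "name": "Brand X",
        "sameAs": "https://en.wikipedia.org/wiki/Brand_X",  # entity anchor
    },
    # Each citation points at an independent piece of Proof Z,
    # mirroring the hyperlinks in the page copy.
    "citation": [
        {"@type": "CreativeWork",
         "url": "https://example-conference.org/2017/program"},
        {"@type": "CreativeWork",
         "url": "https://example-trade-pub.com/brand-x-market-report"},
    ],
}

print(json.dumps(claim_markup, indent=2))  # paste into a <script> tag
```

The copy and the hyperlinks state the same connection for readers and crawlers; the markup simply removes the last of the guesswork for the machine.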
Connected proof of claims is a spectrum, not a switch. At the low end, you’ve connected some of your proof, which already beats Level 1 because the engine no longer has to figure out the connections you’ve made, but it’s still figuring out the ones you haven’t.
If your competition has connected more of theirs, you’re still losing the comparison on the proof you left scattered. At the high end, you’ve connected all of it: every claim joined to every piece of supporting evidence, nothing scattered, and nothing left for the engine to guess at.
Most brands sit somewhere between scattered and connected simply because they’ve connected only the most obvious proof, and the AI may well have already figured the obvious ones out for itself: the links don’t teach it anything it didn’t already know.
With connected proof of claims done comprehensively for a given claim, the engine has enough corroboration to back the brand confidently, and the claim becomes fact in the corpus. Confidence transfers cleanly because there’s nothing to guess at.
Connected proof of claims is also a great weapon for a smaller brand competing with a bigger one: a specialist accounting firm with 50 pieces of proof, all explicitly connected to a specific positioning, beats a Big 4 with thousands of unconnected pieces on that specific positioning, because connection is what turns proof into substance that the engine can transmit.
What happens when Level 2, connected proof of claims, is done well? Brand X is frequently mentioned convincingly as a provider of Y.
Level 3: Framed proof of claims
This is where framing enters, and where strategic claim bridging earns its name.
For each claim that matters, the brand publishes Claim A, connects the proof, and then does the thing the engine can’t do (and the audience is unlikely to do either, for that matter).
It reaches the non-obvious J that benefits the brand, and constructs the bridge from A and B to J in language the engine can transmit. Not merely “we are the leader in X, demonstrated by Y,” but the frame:
Why Y matters for the specific problem this audience faces.
What Z signals about trust in this particular market.
How W translates to the outcome the prospect actually cares about at the moment of decision.
A frame is a logical inference from corroborated facts, where the brand chose where the inference would land. For example:
“Jason Barnard coined answer engine optimization in 2017, made dated public predictions about how the field would unfold, and those predictions came true, so his predictions about where the field is going next are credible.”
Every component is verifiable independently, and every connection between components is logical. The J the bridge reaches to is the one I chose, not the J the engine would have generated unprompted.
One well-constructed frame makes one claim into fact in the AI’s voice. Run that across the claims that matter, and the cumulative weight is what shifts a brand from “frequently mentioned convincingly” to “almost always mentioned as the leading provider”: dominance is a stack of well-framed facts, not a single masterstroke.
The result: the AI doesn’t merely confirm, it enthuses. “Brand X leads in Y, and here is why that matters for your situation.”
The engine transmits the frame wholesale, in the language you chose, to the audience you specified, with a reason to keep coming back. The machine didn’t generate the narrative; it relayed it warmly.
What happens when Level 3, framed proof of claims, is done well across the claims that matter? Brand X is almost always mentioned as the leading provider of Y, and dominates the space.
Each level builds on the previous: connected proof of claims requires the scattered proof to be connected, and framed proof of claims requires the connected proof to be bridged strategically.
Most brands are only halfway to framed proof of claims
The brands that think they’re at framed proof of claims are usually at framed proof of claims for humans, and scattered proof of claims for machines. Marketing and narrative work supplies frames to humans all the time, and plenty of brands do it well.
What almost no brand does is supply frames the machine can use, and the gap between the two is where framed proof of claims is most powerful.
Some brands operate below even that and are effectively standing still: published facts at the surface, few proof connections, and no interpretive content the machine can use for any purpose.
The signature objection from a standing still brand is the same in every consulting room: “We already do this, our website explains who we are.” The website does that. The website is doing zero work to help the machine with framing.
The cost of standing still isn’t visible until a model update or two down the line. Brands that think they’re at framed proof of claims are usually investing harder in the wrong layer (content), while the layer that matters (framing and, ideally, joining the dots) compounds for someone else.
The gap widens every year. If you have content that doesn’t frame effectively or join the dots with links to proof, you’re leaking huge value, and pushing through connection and framing is the best return on past investment you can make right now: you’re doing the heavy lifting for the machines, and they’ll reward you for giving them this extremely valuable context on a plate.
Three structural conditions separate framed proof of claims from marketing-and-narrative-as-usual, and missing any one collapses the brand back to connected proof of claims or lower.
The entity has to be well-established, well-resolved, and trusted, because a frame can’t anchor to a vague brand. The underlying proof has to be connected, because most brands have fluent marketing prose on top of scattered proof, which is scattered proof of claims with prettier wallpaper.
The bridge itself has to be strictly logical, because machines read logic first and tone second, and a logically broken bridge fails, however well it’s written.
The better AI gets, the more framing matters
Smarter AI rewards better framing rather than replacing it, and the reason is the same selection pressure SEO practitioners have been operating under since the early 2000s.
There’s a seductive and entirely wrong conclusion to draw from rapid improvement in AI reasoning: that engines will eventually figure out how to frame brands correctly without help. The opposite is true. The engine rewards the brand whose assets reduce its own workload for the same or better result.
Search engines reward sites that are easy to crawl, render, and classify. Knowledge Graphs reward entities that are easy to resolve. AI assistive engines reward content that is easy to ground, verify, and transmit confidently. Where the engine has to choose between two roughly equivalent candidates, the candidate that demands less computation, less inference, and less guesswork wins.
Framed proof of claims is that principle operating at the bridging layer. A more capable engine encountering this level has the bridge handed to it ready-made. It doesn’t have to figure out the frame, it transmits the bridge the brand supplied, fluently and confidently, with the engine’s full reasoning capability now amplifying rather than substituting for the framing work.
A more capable engine without a frame falls back to inference over scattered evidence, which is expensive, ambiguous, and produces hedged output. Every improvement in reasoning capability makes the hedging more detailed and the noncommittal language more sophisticated, but the underlying problem isn’t capability, it’s the absence of a frame to amplify. The engine is doing more work for a worse result, and that’s the exact failure mode the engine’s selection pressure is designed to penalize.
The gap between those two outcomes is the framing gap, and it widens with every generation. Brands implementing only connected proof of claims don’t lose ground in absolute terms; they lose ground relative to brands implementing framed proof of claims faster every year, because the engine increasingly rewards assets that let it deploy its growing capability productively rather than waste it on guessing and hedging.
The selection pressure that rewarded fast websites in 1998, clean HTML in 2003, and structured data in 2015 rewards framed proof of claims now. The mechanism of gaining a competitive advantage by reducing costs for the AI for the same or better results hasn’t changed — and probably never will.
[Chart: the framed proof of claims trajectory rises steeply and keeps climbing; the connected proof of claims trajectory rises gently and flattens; the shaded area between them, labeled the framing gap, widens with each generation.]
The bridge is human territory, and it stays human because it requires commercial intent specific to the brand that the engine doesn’t have.
Everything the machine does well will get better: retrieval, connection, pattern extraction, and synthesis. None of that helps the brand whose evidence the machine can see but can’t bridge meaningfully to a beneficial conclusion.
Whether AI confirms your brand, overlooks it, or champions it comes down to one discipline: strategic claim bridging, claim by claim, fact by fact. It’s the last layer of brand-AI communication that won’t yield to automation, if it yields at all.
Wil Reynolds, founder and CEO of Seer Interactive, is challenging SEOs to rethink what success looks like in a world increasingly shaped by AI.
In his SEO Week session, “SEO is a performance channel, GEO isn’t. How do you pivot?”, Reynolds said many marketers are focused on the wrong outcomes — and producing work that people don’t believe.
Marketing isn’t just about being seen
Reynolds opened by pushing back on the idea that visibility alone is the goal of marketing.
“Marketing was never just to be seen or be visible,” he said. “You had to turn that visibility into something — believing something about your brand… And then they ultimately have to choose you.”
He described a progression that marketers need to focus on: being seen, being believed and being chosen.
“It’s how you take your time with people, and turn them from seeing you, into believing something about you,” he said.
“I got the ranking, job finished,” he added. “Job’s not finished.”
Reynolds also questioned the value of surface-level success metrics.
“I got a lot more followers, but they don’t pay you,” he said.
Low-quality marketing is everywhere
Reynolds pointed to common marketing tactics — including automated outreach — as examples of work that doesn’t create value.
“That’s not marketing,” he said, referring to spam-like SMS messages.
Those tactics made him reflect on his own past work, he said.
“I started looking at the stuff that I used to do… was that really marketing?” he said.
“Some of us are strategists. Some of us are loopholists,” he said. “You’ve got to make a decision today.”
The industry is producing ‘zombie content’
Reynolds criticized the widespread use of scaled, templated content designed primarily to rank.
He used broad listicle-style pages as an example.
“Why would you write content saying best restaurants in Minnesota when nobody that’s a human looks for the best restaurant in Minnesota?” he said.
He described this type of content as “zombie content.”
“That’s what we do,” he said, describing how marketers repeat what already ranks instead of doing something different.
He also described how many marketers approach content creation.
“I’m going to look at the top 10 and look at what they did slightly wrong… and I’m only going to do it slightly better,” he said.
Short-term tactics vs. long-term brand building
Reynolds contrasted short-term SEO tactics with long-term brand building.
“Some people like to win in decades,” he said. “Other people like to win quarter to quarter.”
He described how many teams focus on immediate results.
“What works this quarter to get my boss off my back long enough so I can survive the next quarter?” he said.
That approach leads to work that people don’t actually want, he said.
“You will never produce a thing that anyone wants if you continue to play that,” he said.
SEO success doesn’t translate to AI visibility
Reynolds shared an example involving “ethical jeans” to show how SEO and AI results can differ.
One brand ranked well in Google without being known for ethical practices, while another brand that invested in ethical production ranked much lower.
In AI-generated answers, that outcome changed.
“If that worked, if it was the same, that brand would be showing up in AI models,” he said. “And they showed up in none.”
He connected this to credibility.
“Nobody believed them,” he said. “Nobody chose them.”
Visibility without belief doesn’t lead to outcomes
Visibility alone isn’t enough, Reynolds said.
“If you have all the visibility in the world and people don’t believe you or trust you, then you’re not going to get chosen,” he said.
Visibility is only part of the process, he said.
“This visibility is just an opportunity,” he said. “That’s all it is. … It is not the job to be done.”
What people say matters
Reynolds suggested looking at platforms like Reddit to understand how people actually talk about brands.
“Go to Reddit… look at all the brands,” he said. “You find out that humans don’t believe you. And they have to pay you for you to stay in business.”
He contrasted that with how brands present themselves in content.
“Not only did they not think you’re number one — they don’t think you’re number 100,” he said.
The wrong metrics are being measured
Marketers often focus on metrics that are easy to track rather than meaningful, Reynolds said.
“We’re measuring the easy stuff to measure,” he said. “The real work is in the hard-to-measure stuff.”
He encouraged comparing visibility metrics with signals tied to outcomes.
“If your visibility is skyrocketing and your pipeline is flat, that’s bad,” he said.
Watching real users changes the picture
Reynolds described research his team conducted by observing real people using AI tools.
“When you actually watch people do the job… your eyes open so much wider,” he said.
One person typed four words, while another typed more than 100 words for the same task, he said.
He also noted that AI tools often suggest additional steps or actions beyond what users ask for, and that people frequently accept those suggestions.
Start with your brand
Marketers should focus on how their brand appears in AI-generated answers, especially for branded queries, Reynolds said.
“You spend all this money trying to get people to know your brand… and then you don’t want to make sure that answer’s right?” he said.
AI can shape your brand narrative
Reynolds shared an example where AI-generated responses surfaced incorrect information about his company.
“So now it’s showing up everywhere,” he said.
He described responding by publishing content to address the claim directly.
“If it’s false, then I’ve got to fight that,” he said.
There is too much content
“There’s too much content out there,” he said.
He described shifting his approach.
“I’m trying to become a curator,” he said.
Rethinking performance
Reynolds shared examples of how different traffic sources perform.
“My direct converts 1.5 times better than my SEO,” he said. “My social, five times better.”
A final question for marketers
Reynolds ended by asking marketers to rethink their priorities:
“Are you willing to sacrifice a little bit of this visibility game to be more believable?”
For years, one of the most dependable ways to grow organic visibility was to publish more content. Expanding into the long tail and creating pages around different variations of a topic often led to steady traffic growth.
Many SEO teams still operate with this mindset. Content calendars are built around search volume targets, and growth is often equated with how much new content is produced. The problem is the results no longer reflect the effort.
In many cases, adding more pages doesn’t lead to increased visibility and can even dilute overall performance. Large content libraries are harder to maintain, compete internally, and often result in fewer pages surfacing in search results.
The challenge is no longer producing more content, but understanding why much of it fails to contribute to visibility.
Why content volume worked for SEO
For a long time, increasing content volume was a rational and effective strategy. Search engines relied heavily on keyword matching and topical coverage, which meant expanding into the long tail created more opportunities to capture demand.
Competition was also significantly lower, and many queries had limited high-quality results, so publishing across a wide range of keyword variations often led to quick visibility gains. In this environment, covering more topics translated directly into increased traffic.
Publishing frequency also helped strengthen domain authority. Sites that consistently added new content signaled freshness and relevance, which improved their ability to compete in search results.
This approach was further amplified by programmatic SEO. By creating scalable templates and targeting large keyword sets, companies generated thousands of pages and captured traffic at scale.
Most importantly, this strategy worked because it aligned with how search engines evaluated content at the time. Expanding coverage increased the likelihood of ranking, and more pages meant more opportunities to be discovered.
However, the conditions that made this approach effective have changed. As search ecosystems have evolved and competition has increased, the relationship between content volume and visibility has become less predictable.
Most commercially relevant topics now have dozens of established pages competing for the same queries, many with years of accumulated links and behavioral data.
A new page enters this environment at a disadvantage because the keyword spaces it targets are already consolidated around results with existing authority and signal history.
Diminishing returns
As sites expand into adjacent keyword variations, search engines increasingly route similar queries to the same URL rather than distributing traffic across multiple pages.
This shows up in Google Search Console as two or three URLs splitting impressions on identical queries, none ranking strongly because none has consolidated authority. The intent overlap that content teams treat as coverage, Google treats as redundancy.
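Flagging this pattern doesn’t require manual SERP checks. Here’s a minimal sketch, assuming a Search Console performance export with query, page, impressions, and clicks columns (the file name and column names are placeholders for whatever your export produces):

```python
import pandas as pd

# Assumed columns: query, page, impressions, clicks.
df = pd.read_csv("gsc_performance_export.csv")

# Total impressions and clicks for each (query, page) pair.
pairs = df.groupby(["query", "page"], as_index=False)[["impressions", "clicks"]].sum()

# Queries where two or more URLs receive impressions are cannibalization candidates.
page_counts = pairs.groupby("query")["page"].nunique()
candidates = page_counts[page_counts >= 2].index

report = pairs[pairs["query"].isin(candidates)].sort_values(
    ["query", "impressions"], ascending=[True, False]
)
print(report.head(20))
```

Queries where the second URL takes a meaningful share of impressions are the consolidation candidates discussed later in this piece.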
Changes in search experience
AI Overviews now appear across a significant and growing share of informational queries. Google has confirmed continued expansion of the feature across search types and markets. Informational content is the most affected by this shift, and it’s also the type most volume strategies produce.
A site with a large number of blog articles is therefore more exposed than one focused on a smaller set of transactional pages. More ranked pages don’t produce proportional traffic when an increasing share of visible positions no longer generate a click.
Indexing limits
Google’s crawl budget documentation states directly that low-value URLs drain crawl activity away from pages that matter. At scale, thin or redundant content is deprioritized — meaning a significant percentage of a site’s published pages may never meaningfully enter search competition, regardless of how much continues to be added.
What’s less understood is how content libraries behave at scale. The problems below are system-level: they compound over time and are difficult to reverse.
Content debt
Every page published creates an ongoing obligation. It needs to be monitored for ranking decay, updated when information changes, evaluated periodically for pruning or consolidation, and factored into crawl allocation. These costs are rarely accounted for at the point of creation.
At low volumes, this is manageable. At scale, it becomes a compounding liability. A site with 2,000 articles isn’t sitting on 2,000 assets; it’s managing 2,000 maintenance commitments that depreciate at different rates.
Editorial resources that could strengthen existing high-performing pages are instead absorbed by keeping a growing library from becoming a liability.
The true cost of a volume-driven content strategy only becomes visible 18 to 24 months after the investment, when maintenance demands begin to outpace the capacity to meet them.
Crawl inefficiency and cannibalization
Google allocates a finite crawl budget to each domain. When a site scales content volume without proportional gains in quality or authority, Googlebot distributes that budget across a larger number of pages, many of which offer limited signal value. The result is that high-value pages are crawled less frequently, indexed less reliably, and are slower to reflect updates.
This creates a compounding problem for sites with important transactional or evergreen pages that depend on frequent re-crawling to stay current and competitive. Beyond crawl distribution, similar pages targeting overlapping intent compete for the same ranking positions internally.
Search engines consolidate these signals rather than rewarding each page individually, meaning two pages targeting near-identical queries often perform worse combined than one authoritative page targeting both would perform alone.
Topical authority dilution
Search engines evaluate whether a site is a genuinely deep and trustworthy resource within a defined topic space. Expanding into a wide range of loosely related subtopics can erode this signal rather than strengthen it.
A site with 40 tightly interconnected, substantive pieces on a specific topic will consistently outperform one with 400 surface-level articles spread across adjacent themes. The depth and coherence of coverage within a defined area are what build the authority signal that drives durable rankings.
Pursuing breadth at the expense of depth fragments that signal, making it harder for search engines to assign clear expertise to the domain on any individual topic, even the ones the site knows best.
Weak content and behavioral signals
Search engines use behavioral data such as dwell time, return-to-search rates, and click-through rates as quality signals at both the page and domain levels.
When a site publishes high volumes of content that users engage with poorly, those signals accumulate and begin to affect how search engines evaluate the domain as a whole. This creates a negative reinforcement loop that’s difficult to detect and slow to reverse.
Weak pages actively contribute to lower domain-level quality assessments, affecting the performance of pages that would otherwise rank well. More mediocre content compounds. Each low-engagement publish incrementally reduces the baseline trust that search engines extend to the domain’s better work.
Ranking vs. being cited
The goal of SEO has traditionally been to rank. Increasingly, the more valuable outcome is to be cited or referenced in AI-generated summaries, pulled into knowledge panels, or sourced by other publishers as a primary reference. These two outcomes require fundamentally different content strategies.
LLMs and AI Overviews are selective about which sources they draw from. The selection is weighted toward pages with strong E-E-A-T signals, high specificity, and clear authoritativeness within a defined domain.
A site that has published hundreds of generic articles covering a topic broadly is less likely to be treated as a primary source than a site that has published fewer, more definitive pieces with clear depth and original perspective.
Volume doesn’t increase citation probability — it may actively reduce it by signaling that the domain is a generalist content producer rather than a reliable primary reference.
The long tail is saturated
The accessible long tail that drove content volume strategies for the better part of a decade no longer exists in the same form. Between 2010 and 2020, there were genuinely underserved keyword opportunities across most industries.
Today, in most commercial verticals, every remotely valuable query has multiple established pages competing for it, especially from high-authority domains with years of accumulated signals.
New content entering this environment doesn’t find open space. It enters a war of attrition against incumbents with advantages it can’t easily overcome. The marginal SEO return on a new article targeting a long-tail keyword is a fraction of what it was five years ago.
The economics only justify creation when there’s a genuinely differentiated angle, a proprietary data point, or a perspective your page can offer that other pages can’t. A keyword existing is no longer a sufficient reason to publish.
At scale, these factors turn content growth into diminishing returns rather than compounding gains. The library becomes harder to maintain, harder for search engines to evaluate clearly, and harder to extract meaningful visibility from — regardless of how much is added to it.
The implication is to change what publishing is for.
Volume targets made sense when more pages meant more opportunities. In the current environment, they measure the wrong thing. The more useful question isn’t how much content a team is producing, but how much of what already exists is actively contributing to visibility, and what is quietly working against it.
For most sites, that audit reveals the same pattern. A relatively small number of pages generate the majority of organic traffic. A larger number generates little to none, and a significant portion actively drains crawl allocation, fragments topical authority, or dilutes the behavioral signals that stronger pages depend on.
You need to move from expansion to consolidation. Existing pages that cover overlapping intent are stronger merged than competing. Thin pages that rank for nothing and engage no one are more valuable removed than retained.
The energy going into producing new content at volume is often better spent deepening the pages that already have authority and signal history behind them.
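A quick way to see that pattern in your own data is a concentration check. A minimal sketch, again assuming a Search Console export, this time with page and clicks columns (file and column names are placeholders):

```python
import pandas as pd

# Assumed columns: page, clicks.
df = pd.read_csv("gsc_pages_export.csv")

# Total clicks per page, largest first.
by_page = df.groupby("page")["clicks"].sum().sort_values(ascending=False)

# Cumulative share of all organic clicks.
cum_share = by_page.cumsum() / by_page.sum()

# How many pages does it take to reach 80% of traffic?
pages_for_80 = int((cum_share < 0.80).sum()) + 1
print(f"{pages_for_80} of {len(by_page)} pages drive 80% of clicks")

# Pages with no clicks at all are pruning or consolidation candidates.
print(f"{(by_page == 0).sum()} pages received zero clicks in this period")
```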
New content earns its place when it:
Addresses something genuinely unaddressed.
Offers a perspective that existing pages can’t.
Targets an intent the site currently lacks.
In practice, this means retiring a few default assumptions:
That publishing for every keyword variation is coverage.
That indexing is the same as performance.
That output volume is a proxy for strategic progress.
None of these were ever true measures of content effectiveness. They were convenient ones.
The replacement for volume isn’t simply better content. It’s a different definition of what content is trying to achieve.
Depth over breadth
Focus coverage on a smaller number of topics and develop them thoroughly. A single piece that addresses a topic with specificity, original perspective, and clear authorial expertise will outperform multiple pieces covering adjacent variations of the same theme.
Depth is what builds authority signals, drives engagement, and increases citation potential. Prioritize what the site can say with the most credibility.
Distribution as a multiplier
Allocate more effort to distribution. Publishing less creates capacity to deliver strong content to the right audiences. Distribution is a core part of SEO performance in a citation-driven environment.
Being citation-worthy
Create content that can serve as a primary source. Focus on clear points of view, verifiable expertise, and specific insights that other pages can’t replicate.
The goal is to be referenced in AI-generated summaries, cited by other publishers, and included in the knowledge systems search engines rely on.
Sites that rely on frequency and broad coverage are being outperformed by sites that are clearly authoritative on a defined topic, consistently useful to a specific audience, and structured in a way that search systems can evaluate with confidence.
Prioritize depth, clarity of expertise, and consistency within a focused topic area. Treat each published page as a long-term asset that requires ongoing maintenance, evaluation, and improvement.
The content factory model is no longer effective. The approach that replaces it requires more effort, stronger editorial standards, and a higher bar for what gets published.
If your paid social campaigns aren’t converting, you may be undervaluing their impact. Your brand’s exposure on social media can influence other parts of your marketing that platform metrics don’t capture.
Here’s how to design and measure a test to understand how paid social influences your other marketing channels, including PPC.
Step 1: Determine your hypothesis
Start with what you want to learn, then define a hypothesis you can realistically evaluate with your data.
For example, this is a common hypothesis for measuring paid search lift from social traffic:
Search lift hypothesis: Increasing spend on social media will increase brand search volume and overall PPC CTRs.
Logic:
Social ads build brand awareness. As more people become familiar with our brand, they will search for it more often when making research and purchase decisions.
As more people are exposed to our brand, they will increasingly click on our PPC ads regardless of their search term (i.e., increasing non-brand and brand CTRs).
People exposed multiple times to our brand will have a higher trust factor in our products, and therefore, our conversion rates will increase.
Measurement:
Impression and click volume for our branded terms.
CTR changes for brand and non-brand terms.
Conversion rate changes for brand and non-brand terms.
Your hypothesis could have a different scope, such as measuring paid and organic lift from social spend or an increase in direct traffic.
Step 2: Set up the test parameters
Generally, simply measuring before and after a change is a mistake, as seasonality or other factors can affect your test results.
The most common test setup is a geographic split. In this test, we’ll increase social spend for only a set of geographies. Then we’ll examine the PPC data for the geographies where we ran the test and compare them with areas where we did not.
As you choose geographies, you’ll want to control for other variables that may affect your test. Here are some common issues that companies have run into and need to control for in their tests and measurements:
You sponsor a sports team, and they’re playing during your test.
If the game is regionally televised, this can dramatically affect your test results.
You’re running TV commercials in only certain regions.
You choose experimental geographies with many out-of-region commuters, such as New York City, and include New Jersey and Connecticut in your control group.
In these instances, grouping a region and its surrounding commuter areas together, and placing other cities with similar characteristics, such as Chicago and Philadelphia, in a different group, can help balance these tests. (Note: in this example, we’re splitting New Jersey in half.)
Seasonal or local events. Large conferences, festivals, or major weather events can affect your data.
Your control and experimental groups should be statistically similar across factors such as income levels, and urban versus rural regions.
As you set up and measure your test, consider your budget. If you increase social spend and expect higher clicks and conversions for your PPC campaigns, ensure you have the budget to capture the increased demand.
Examine your impression share and impression share lost to budget before and after the test to ensure budget limits won’t severely impact your results.
Step 3: Measure the results
Measurement can range from very simple to extremely complex.
At a simple level, you can compare platform data to see how your data changed. In this case, a Google Ads report shows how pausing social spending and influencer campaigns across all social platforms (TikTok, LinkedIn, Facebook, YouTube, etc.) affects performance.
For this test, pausing social spending yielded mixed results for conversion rates. As brand searches decreased, conversion rates in some regions increased, while in others they fell.
However, what was consistent was a dramatic drop in conversions.
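Even the “simple” platform-data comparison benefits from a consistent calculation. Below is a minimal difference-in-differences sketch, assuming you’ve exported weekly metrics per geography; the file name and column names are hypothetical placeholders:

```python
import pandas as pd

# Assumed columns: geo, group ("test"/"control"), period ("pre"/"post"),
# brand_impressions, brand_clicks, conversions. All names are placeholders.
df = pd.read_csv("geo_split_metrics.csv")

agg = df.groupby(["group", "period"])[
    ["brand_impressions", "brand_clicks", "conversions"]
].sum()
agg["brand_ctr"] = agg["brand_clicks"] / agg["brand_impressions"]

def lift(metric: str) -> float:
    """Test group's pre-to-post change minus the control group's.

    Subtracting the control change strips out seasonality and other
    factors shared by both geography groups."""
    test = agg.loc[("test", "post"), metric] / agg.loc[("test", "pre"), metric] - 1
    control = agg.loc[("control", "post"), metric] / agg.loc[("control", "pre"), metric] - 1
    return test - control

for metric in ["brand_impressions", "brand_ctr", "conversions"]:
    print(f"{metric}: {lift(metric):+.1%} lift vs. control")
```

Relative changes are compared rather than raw totals because the two geography groups will rarely be the same size.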
You can get more sophisticated in your testing. Depending on your analytics setup, some companies want to measure touchpoint differences for their conversions. Others will want to measure overlap rates between social and paid search visitors, or examine attribution touchpoints and models.
Before you set up your test, ensure you have the measurement capabilities needed to understand and interpret the results.
As you run various tests, you want to measure the results against your hypothesis. However, it’s useful to list other variables worth evaluating beyond your test criteria.
This is where search consoles, analytics tools, CRM, internal data, and even the paid and organic report can come into play.
In one example, a company was running a test to see whether pausing several advertising channels, from social media to TV ads, would dramatically change its brand search volume. They hypothesized that their brand was so well known in the marketplace that they could cut back on several forms of brand advertising and reallocate that budget to other channels and non-brand advertising.
They had recently launched a new product line, and that line continued to see a large increase in traffic during the test. However, their most common brand terms saw significant declines from the test. This was a year-over-year comparison across a set of geographies, rather than a period-to-period comparison, to help correct for the increase in holiday traffic that would have occurred during the previous period.
While the simple paid and organic report in Google Ads won’t tell you the full story about in-store revenue and direct traffic changes, it can serve as a signal to form an overall picture of a very complex test.
The results were by far the most dramatic I’ve ever seen in this type of test, to the point it was clear other variables had to be in play that could affect the test.
This takes you to the sniff test. Rely on your experience with data to make common sense adjustments. If you look at the data and it just doesn’t seem right, ask yourself whether this makes sense, if it’s a math quirk (common with low data), or if other unforeseen variables are in play.
In this example, no one believed the results should be this dramatic. The company stopped running the test and began an internal evaluation of its organic presence, including Google’s recent updates, changes to AI Overviews, AI engagement, and other factors affecting its web presence beyond its usual marketing channels.
To recap the process:
Decide how you will test. The easiest setup is a geographic split.
Make sure you can measure the results.
Launch the tests.
Evaluate the metrics for your hypothesis.
Examine other metrics for insight or additional testing ideas.
For some companies, Facebook and other social channels are their top conversion channels, and these tests won’t be applicable. For others, social media advertising results often look poor when evaluated in isolation.
In these examples, the companies were already running many social media campaigns, so the test was to reduce social media spend. If you don’t run much social media, your test will be to increase your social media spend to see how it affects your data.
I’ve seen a lot of these tests, and the results are highly inconsistent across companies. Many companies will increase their social media spend and see little change in their data. Others will increase their spend and see a nice lift in overall performance. These are tests you need to run yourself, as your results will vary by company.
Running geographic split tests in your social media campaigns and then measuring the results on paid or organic search traffic can give you insights into how to leverage social media campaigns for other marketing channels.
Google announced they are testing a new “conversational search experience to complement how you already search on YouTube.” It is called “Ask YouTube” and it lets you “dive deeper into the topics you’re curious about in a more interactive way,” Dave from YouTube wrote.
What it looks like. Here is a GIF of it in action:
How to try it. If you want to try it out, go to youtube.com/new and opt in.
This experiment is currently available for YouTube Premium members 18+ in the US who opt-in. Google is working on expanding the experiment to non-Premium users in the future.
What it does. Dave from YouTube posted this example:
“If you’re in the experiment, you can try it out by selecting “Ask YouTube” in the search bar. For example, you can ask for help planning a 3-day road trip from San Francisco to Santa Barbara, and you’ll get a structured, step-by-step itinerary instead of a list of videos. The response will bring together a new mix of long-form videos, Shorts, and informative text featuring local tips and must-see stops. You can ask follow-up questions like, “where can I find good coffee?” to explore local spots along your route. We’ll surface videos and relevant video segments, accompanied by their titles and channel details, to make it easy to discover new creators and jump into the most helpful content from your search.”
Why we care. AI search is creeping into every search interface across Google’s properties. YouTube is no exception. Expect more and more AI search experiences in more Google surfaces and expect them to change and adapt over time.
You can find more coverage of this on Techmeme.
Understanding the ins and outs of paid media can seem like an overwhelming process when you’re first entering the field. As AI has rapidly changed ad platforms in recent years, keeping up can feel challenging.
Thankfully, you’re not alone. You’re part of a supportive industry with a wealth of content and knowledge to share. Here are seven tips to help you learn and become a more confident PPC manager.
1. Be curious
Curiosity is foundational to growth in PPC. You’ll learn best by taking initiative to understand ad platforms, how campaigns are structured, and what options are available on the backend. Of course, be careful about tweaking settings you’re not familiar with, but don’t be afraid to dig in on your own.
If you’re part of a team, ask your colleagues why they use a particular setup. If you’re not familiar with a platform and have a team member who frequently uses it, ask if they can walk you through it.
2. Learn from others in the industry
There are countless industry professionals producing content to teach PPC. Whether you learn best from reading, listening to podcasts, or watching videos, you’ll find options that fit your style. Looking up the authors of articles on this site is a great starting point to build a list to follow.
Block out time in your schedule for education. Even setting aside a couple of hours a week helps you gain perspective from others in the industry and keep up with constant platform updates.
The PPC industry has long been known for its welcoming, supportive community. Seek out individuals and organizations who are actively sharing, and don’t be afraid to engage with them on social media. Conferences are also a great way to network with other PPC professionals and sometimes discuss their approaches in a more informal setting.
A brief word of caution: Vet recommendations you see from others against your own experience in ad accounts. Just because a “best practice” worked for one account doesn’t mean it’ll work for every account. Depending on the tactic, you may want to test it as an experiment to measure impact, or compare results before and after.
3. Take industry certifications with a grain of salt
While ad platform certifications can serve as a starting point for demonstrating basic proficiency, be cautious about treating them as definitive proof of PPC expertise.
Certifications often lean heavily on platform-recommended best practices, which may conflict with tactics that align with a brand’s goals. Academic knowledge can’t match the insight gained from practical, hands-on experience in accounts.
4. Don’t chase what’s new and shiny
While I’d encourage staying aware of ad platform updates and current tactics, I’d discourage implementing a new campaign type or expanding into a new platform just because it’s new. Make sure you have sufficient budget and a clear reason to test.
Additionally, avoid making adjustments without a rationale. If campaigns are performing and driving qualified leads or sales, keeping the status quo may be best.
Basic marketing principles still apply, such as knowing your target audience, addressing their problem with a solution, and presenting a clear call to action. Focus on aligning your channel choices with these goals, and the rest will follow.
5. Speak your stakeholders’ language
As you become more embedded in PPC, you may naturally use industry terms and acronyms such as CTR, CPC, ROAS, and CPA. However, these metrics are often meaningless to stakeholders who aren’t immersed in your world. One of the most vital skills for a paid media professional is translating abstract metrics into language that connects with what stakeholders care about.
For instance, I often default to “conversions,” even though the term can be ambiguous in reports. Referencing the actual action being tracked (such as account open, form fill, or purchase) is more concrete and ties directly to what stakeholders are tasked with driving.
6. Use AI, but don’t neglect the human touch
AI is an inevitable part of a future-forward career, and ignoring it will be detrimental to career development. However, don’t lose the human oversight that sets a seasoned PPC practitioner apart.
When writing ad copy, LLMs can offer a strong starting point and help refine wording. But don’t rely on AI to produce all your copy, as it may pull irrelevant content from your site (or elsewhere), and may not reflect your brand’s voice and perspective. Also, learn where AI can save time on “busy work” tasks, such as reviewing search terms and placements for exclusions, while still reviewing the output for accuracy.
While most ad platforms default to automated campaign setups and encourage a hands-off approach, a standout PPC manager understands the levers they can pull to maintain control when needed. Examples include:
Setting target bids or cost caps.
Excluding irrelevant keywords, placements, and audiences.
Pinning headlines and descriptions in responsive search ads.
Restricting geographic targeting to avoid unwanted locations.
7. Don’t change things for the sake of showing activity
One common temptation for both new and seasoned paid media practitioners is to make changes just to appear busy. The motivation may be valid, as you want to prove to your client or boss that you’re attentive to PPC account management.
However, particularly with campaigns that rely heavily on data to drive automated bidding, too many changes in a short period are often detrimental. Be sure to allow for data significance and enough time before pausing ads and keywords or tweaking bid targets.
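“Enough data” can be checked rather than guessed at. Here is a minimal two-proportion z-test sketch for comparing conversion rates before and after a change; the figures in the example are invented for illustration:

```python
from statistics import NormalDist

def conversion_rate_p_value(conv_a: int, clicks_a: int,
                            conv_b: int, clicks_b: int) -> float:
    """Two-proportion z-test; returns the two-sided p-value."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = (pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical example: 40 conversions from 1,000 clicks before a change
# vs. 58 conversions from 1,100 clicks after.
p = conversion_rate_p_value(40, 1000, 58, 1100)
print(f"p-value: {p:.3f}")  # ~0.17 here: the difference may just be noise
```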
If you can show positive performance trends and provide readouts on which campaigns and channels are driving those results, you can validate your decisions to take or not take action when presenting to stakeholders.
Becoming a confident PPC manager requires mastering a blend of technical, interpersonal, and marketing skills. As you build your knowledge, look for opportunities to share what you’re learning with peers. It’s one of the fastest ways to reinforce what you know and keep improving.
Branded search is often treated as predictable and easy to manage. In practice, it isn’t.
PPC teams see rising CPC on brand terms. SEO teams see declining branded CTR, even when rankings hold. These issues are usually investigated separately, with different dashboards, hypotheses, and fixes.
Both signals often stem from changes within a single SERP. What look like two separate problems are, in reality, one shared environment reacting to shifts in competition and visibility.
The issue isn’t a lack of data. Most teams already have basic reports and brand monitoring tools, including PPC and SEO platforms. The problem is how the data is used.
To understand what’s happening in branded search, teams must manually piece signals together. This takes time, doesn’t scale, and delays decisions.
Here’s why that fragmentation is harmful and what to do about it.
What’s actually happening in branded search
Branded search is often described in terms of channels — paid and organic. For users, that distinction doesn’t exist.
A single SERP brings together multiple layers:
PPC ads
Competitor ads or comparison pages
Organic results, including brand-owned pages
Affiliate listings promoting the same brand
Review platforms and aggregators
All of these elements appear at once, within the same decision-making space.
From a SERP analysis perspective, this isn’t a set of isolated placements. It’s a dynamic environment where each element influences the others. A competitor ad above your organic result can reduce CTR. An affiliate listing can compete with your paid campaign. A review page can shift user intent before a click.
In practice, this creates a mismatch.
For users, branded search is a single page. Inside the company, it’s split across workflows and handled by different functions.
PPC focuses on bids and efficiency. SEO focuses on rankings and organic traffic. Affiliate activity is often tracked separately, if at all. Competitor tracking may exist, but usually within a single channel. The result is a fragmented view of what is, in practice, a shared space.
Understanding what’s happening in branded search often requires manual effort. The data is there, but building a complete, up-to-date view of the SERP on a regular basis is time-consuming and hard to scale. That makes it difficult to understand how these elements interact — and even harder to respond to changes as they happen.
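One way to make that view sustainable is to script the snapshot. A minimal sketch, assuming you already collect branded SERP results (from a rank tracker or SERP data provider) into rows with date, position, result_type, and domain columns; every name here, including the domain, is a placeholder:

```python
import pandas as pd

# Assumed columns: date, position, result_type ("ad", "organic",
# "affiliate", "review"), domain. Source and names are placeholders.
serp = pd.read_csv("branded_serp_snapshots.csv")
OWN_DOMAIN = "example.com"  # your brand's domain

for date, page in serp.groupby("date"):
    page = page.sort_values("position")
    own = page[page["domain"] == OWN_DOMAIN]
    if own.empty:
        print(f"{date}: brand absent from its own branded SERP")
        continue
    # Everything placed above the brand's best result competes for the click.
    above = page[page["position"] < own["position"].min()]
    for _, row in above.iterrows():
        print(f"{date}: {row['result_type']} from {row['domain']} sits above "
              f"our top listing (position {row['position']})")
```

Run daily, a log like this turns “CPC rose and CTR fell” into “an affiliate ad appeared above our organic listing on these dates,” which is the shared context both teams need.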
What PPC teams see (and often miss)
From a PPC perspective, teams focus on these signals:
Brand CPC starts to rise.
More players appear in the auction.
Branded campaigns become less efficient over time.
At first glance, this suggests increased competition. The typical response is to adjust bids, defend impression share, or refine targeting. All of it makes sense within paid media.
But this is where context changes everything.
What PPC teams don’t always see is who’s driving that competition.
Not every new entrant in the auction is a direct competitor. Often, it’s affiliate activity — partners bidding on branded terms outside agreed-upon rules. Without deeper competitor tracking, these cases can look identical while requiring different actions.
There’s also the organic layer. Changes in SERP structure — more ads, different layouts, stronger third-party rankings — can directly affect paid performance. Even if the campaign setup stays the same, the environment shifts. Without ongoing SERP analysis, these changes are easy to miss.
In many cases, brands aren’t just competing with others — they’re competing with themselves. Over 40% of advertised pages already rank #1 organically (Ahrefs, 2025).
PPC teams rarely see the full page in context. They see auction data, metrics, and reports — but not always how their ads appear alongside organic results, affiliates, and other placements in real time.
But beyond missing context, there’s a more practical limitation.
Ad platform reporting rarely explains what changed. It shows performance shifts — but not how the SERP looked to users, who appeared alongside the ad, or how placements were arranged.
This creates a gap.
Competitor tracking without context doesn’t explain the situation — it only signals change. Without broader SERP-level brand monitoring, PPC teams often optimize on partial visibility, reacting to symptoms while the root cause must be reconstructed manually.
What SEO teams see (and often miss)
From the SEO side, branded search issues tend to surface differently.
The most common signals look like this:
Branded CTR starts to decline.
Rankings remain stable, often still in top positions.
SERP appearance shifts — new elements, richer features, or different page layouts.
On the surface, it looks like an SEO problem. The natural response is to review snippets, adjust metadata, or check for technical or content issues.
But in many cases, performance drops aren’t driven solely by SEO factors.
SEO teams generally know that paid activity, competitors, and affiliates can influence branded search. The challenge isn’t awareness — it’s consistent visibility over time.
To understand what changed, teams need to see how the SERP looked at a specific moment:
Which ads appeared and where.
Whether competitors or affiliates were present.
How organic results were positioned in context.
This isn’t what standard SEO workflows are built for. Teams often have to manually check results, compare snapshots across tools, or rely on incomplete data.
Then there’s the SERP itself. Modern branded SERPs aren’t static. Layout changes, added modules, and mixed result types can significantly affect click behavior.
Without consistent SERP analysis, it’s hard to isolate the cause. As a result, SEO teams may keep optimizing — and see no stable results.
Why PPC and SEO issues are actually connected
At a glance, PPC and SEO issues in branded search may look unrelated — different metrics, dashboards, and teams. But when you look at the SERP as a whole, the connection is hard to ignore.
Studies show this overlap isn’t an edge case. Nearly 38% of websites advertise on keywords where they already rank in the top 10 organically (Ahrefs, 2025). In branded search, the overlap is even higher.
That means both channels operate in the same environment — and compete for the same user attention.
Changes within that environment rarely affect just one side:
Increased ad presence can push organic listings lower or draw clicks away.
Aggressive bidding (from competitors or affiliates) can raise CPC while also reducing organic search visibility.
New entrants in the SERP can affect both paid efficiency and organic CTR simultaneously.
In this context, it’s not unusual for PPC performance to decline while SEO metrics shift in parallel. These aren’t isolated issues — they’re different reflections of the same underlying change. Yet they’re rarely analyzed together.
The real problem isn’t visibility — it’s fragmentation.
Most teams already have access to data. Specialized tools make SERP analysis, competitor tracking, and brand monitoring possible. The limitation isn’t what can be seen, but how it’s used.
PPC and SEO operate in separate systems — different platforms and reporting environments, KPIs, and workflows. To understand what changed in branded search, teams must align manually by comparing reports, checking SERPs, validating assumptions, and sharing findings across functions.
As a result, insights are delayed, alignment lags behind SERP changes, and decisions are made with incomplete or outdated context.
How to improve branded search performance
Most teams don’t miss the signals — a spike in CPC, a drop in CTR, unexpected competitors in the auction. These changes rarely go unnoticed. The challenge comes next: confirming what happened and deciding how to respond.
This is where branded search performance slows. Teams dig through separate reports, trying to reconstruct what the SERP looked like at a specific moment. By the time the picture is clear — if it ever is — the window to react has already passed.
Improving performance here isn’t about adding more data. It’s about changing how it’s collected and used.
With the right setup, SERP analysis becomes continuous instead of manual. Changes in branded search are captured automatically, including competitor and affiliate activity that would otherwise require manual checks and after-the-fact validation, or go unnoticed entirely.
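To make that concrete, here is a minimal sketch of what continuous capture could look like, assuming a hypothetical fetch_branded_serp() helper (a SERP API or headless browser under the hood). It illustrates the general pattern, not any specific vendor's implementation:

```python
from datetime import datetime, timezone

def fetch_branded_serp(keyword: str) -> list[str]:
    """Hypothetical helper: returns the ordered list of domains
    (ads + organic) currently visible on the branded SERP."""
    raise NotImplementedError  # e.g., a SERP API or a headless browser

def snapshot_and_diff(keyword: str, previous: list[str]) -> dict:
    """Capture a timestamped snapshot and flag SERP changes."""
    current = fetch_branded_serp(keyword)
    return {
        "keyword": keyword,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "serp": current,  # timestamped evidence of what appeared
        "new_entrants": [d for d in current if d not in previous],
        "dropped": [d for d in previous if d not in current],
    }

# Run on a schedule (hourly, say) and alert whenever
# new_entrants is non-empty -- that's a new player in the SERP.
```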
Tools for branded search monitoring such as Bluepear provide:
A unified view of the SERP at a specific moment.
Automated alerts when meaningful changes occur.
Pre-collected, timestamped evidence that removes the need to manually gather screenshots or reconstruct past states.
Instead of spending time collecting screenshots, comparing reports, and reconstructing what happened, the information is already structured.
This shifts the process from reactive to operational. Instead of investigating issues after the fact, teams receive a clear signal or a complete case.
This creates a reliable record of what actually happened:
When a new player entered the SERP.
How placements shifted over time.
Where potential violations or conflicts appeared.
Instead of scattered evidence and manual reconstruction, teams get structured, ready-to-use context.
Reporting becomes simpler. Insights can be shared across PPC, SEO, and affiliate teams without rebuilding context each time, reducing internal alignment time. Most importantly, decisions can be made faster.
With Bluepear, brand monitoring and competitor tracking become continuous. Teams receive structured signals instead of raw fragments and can act without rebuilding the situation from scratch.
To see how Bluepear can improve your workflow, create an account and start your free trial.
Final takeaways
PPC and SEO teams don’t lack data — they interpret different signals from the same SERP. But these signals are connected. They’re shaped by the same changes in the search environment, even if they appear in different reports.
When SERP analysis is fragmented, it’s harder to see the full picture — and even harder to act quickly.
What makes the difference is not more data, but better coordination:
Continuous brand monitoring instead of occasional checks.
Shared visibility across PPC, SEO, and affiliate teams.
A consistent view of the SERP, not separate channel reports.
When branded search is managed holistically, teams don’t just react to performance changes — they understand what drives them and respond with clarity.
To simplify how your team tracks and responds to branded search changes, start using Bluepear to automate monitoring, capture SERP changes, and centralize evidence in one place.
Ginny Marvin didn’t get into PPC because she had a grand plan.
She got into it because she was ready to start again.
After years working in print publishing and ad sales marketing, Marvin found herself at a career pivot point. A startup magazine she had helped launch folded, and she decided it was time to move fully into digital.
That meant going from marketing director to entry-level applicant.
“I don’t know what I’m doing, so I’ll start from the beginning,” she recalled.
That reset eventually led her into search marketing, Search Engine Land, and later Google, where she is now Google Ads Liaison.
In this interview, Marvin looks back at how paid search has changed, what marketers still misunderstand, and why the next phase of search will reward curiosity more than control.
PPC clicked faster than SEO
Marvin started on the SEO side at a small agency.
Then the paid search manager went on holiday.
She took over the campaigns temporarily — and immediately saw the appeal.
Coming from print, where measurement was slow or sometimes impossible, PPC felt almost instant. You could launch, spend, measure and see action quickly.
That speed changed everything.
For Marvin, PPC made the connection between marketing activity and business results much clearer than SEO did at the time.
Google won by moving faster
When Marvin entered the industry, Google wasn’t the only serious search player.
Yahoo was still a major force, and Microsoft was part of the mix. But over time, Google pulled ahead.
Marvin believes the difference was focus.
Google kept improving the product, launching new features and iterating faster than competitors. It became increasingly clear that Google was building around advertiser needs and pushing the industry forward.
Early PPC was painfully manual
Today’s PPC marketers may complain about manual work, but the early days were on another level.
Campaigns were built around huge keyword lists, endless permutations and highly granular structures. Advertisers spent hours creating keyword combinations and negative keyword lists.
It gave marketers a sense of control, but it also forced them to build campaigns around how the platform worked — not necessarily how the business worked.
That, Marvin said, is one of the biggest changes in paid search: campaigns now start more naturally with goals.
Search Engine Land became the industry’s newsroom
When Search Engine Land launched, Marvin was still early in her search career.
But it quickly became the place people went for search news, updates and expert analysis.
What made it valuable wasn’t just the reporting. It was the mix of fast news, contributed columns and practical insight from people doing the work.
For Marvin, Search Engine Land played a major role in professional growth across the industry because it made knowledge easier to share.
The search community has always been different
One thing Marvin repeatedly came back to was the generosity of the search community.
From the early days, practitioners shared what they were testing, what worked, what failed and what others should watch for.
That culture of learning helped define the industry.
It also shaped Marvin’s own career, both as a journalist at Search Engine Land and now in her role at Google.
AI is not as new as people think
Marvin believes one of the biggest misconceptions about AI in search is that it suddenly appeared.
Machine learning has been part of Google Ads for years, powering changes such as close variants, Smart Bidding and automation.
What changed recently was the speed of progress driven by large language models.
AI did not arrive overnight. But LLMs accelerated the shift dramatically.
Consumer behaviour is changing search
For Marvin, the biggest change is not just what Google can do.
It is how people search.
Queries are getting longer and more complex. People are searching through images, voice and multimodal inputs. Search can now understand intent without relying only on typed keywords.
That means advertisers need to think beyond the final conversion moment and understand the full customer journey.
Success still means business outcomes
Marvin does not think the definition of success in search has changed.
It still comes down to business outcomes.
What has changed is marketers’ ability to measure those outcomes and connect campaign activity to business goals.
That makes data, measurement and first-party signals more important than ever.
The next 20 years will reward curiosity
When asked what kind of marketer will succeed in the next phase of search, Marvin pointed to curiosity.
The best advertisers will be those who keep learning, watch how customers behave and adapt before they are forced to.
She compared it to mobile, where consumers moved faster than advertisers did.
The same thing is happening with AI.
PPC marketers say they love change — until it happens
Marvin’s reality check for the industry was simple.
PPC marketers often say they love change, but many resist every major shift when it arrives.
Her advice is to take a longer view.
Many of the changes that feel sudden have actually been building for years. Automation, AI, broader intent matching and full-funnel campaigns have all been moving in this direction for a long time.
Her advice: start experimenting
Marvin’s message is not that every new feature will work immediately.
It is that marketers should not write things off forever because they tested them once months or years ago.
Platforms evolve quickly. Capabilities improve. What failed before may work differently now.
For advertisers still holding tightly to old ways of working, the next phase of search will be harder.
What she is proudest of
Looking back, Marvin said she is proud of the search community itself.
Its willingness to share, learn and support each other has made the industry stronger.
She also sees her role, both at Search Engine Land and Google, as being a resource for marketers.
As she put it, communicating “by marketers, for marketers” has always mattered.
Editor’s note: This research was conducted by Exploding Topics, the trend discovery platform owned by Semrush, and is republished here with permission. Data is drawn from a proprietary survey of 1,009 US consumers. Full methodology appears at the end of this article.
More than three in four consumers have used AI to help with shopping or purchasing decisions in the last six months, according to new research from Exploding Topics.
AI tools like ChatGPT and Google Gemini have been absorbed into weekly shopping routines. The technology has rapidly become a staple of product research and price comparison, for everything from clothing to groceries.
But at the same time, we found significant and widespread discomfort about the next chapter in AI commerce.
The very same people who are eagerly embracing AI to shop often draw the line at empowering AI to spend. “Skepticism” is the prevailing attitude about tools like ChatGPT’s short-lived Instant Checkout, while even something as simple as storing card details with an AI chatbot makes consumers uncomfortable.
Looking ahead, shoppers expect AI to become ever more prominent in their buying habits. But this research highlights some significant barriers that will need to be overcome before that can truly happen.
Grok remains the most highly gendered tool. It’s used by 31.98% of male shoppers, but just 15.16% of women.
Across the board, men were more likely than women to use AI tools for shopping. However, ChatGPT usage was close to equal (78.05% of men vs 77.51% of women).
Evolving AI shopping habits
It is remarkable how quickly AI has embedded itself as a standard shopping companion. Among those who are now using the technology, 39.1% say they use AI for shopping “much more” than they did six months ago.
A further 28.97% of consumers have been using AI shopping tools “a bit more” over the last half-year. Only 6.02% have decreased their usage.
Middle Atlantic residents stand out as the keenest adopters. Almost half (49.04%) are using AI for shopping much more in the past six months, and close to eight in 10 (78.98%) have at least somewhat increased their usage. West North Central is the least enamored with the technology, with over 13% using AI for shopping less frequently than they did previously.
Nationwide, the impact of the technology on purchasing habits is stark. 92.54% of consumers say it is at least possible AI has directly influenced them to buy something they wouldn’t have otherwise purchased.
Almost seven in 10 (68.64%) can definitely remember being directly influenced to make a purchase. That includes 36.89% who say they have been influenced “many times” by AI.
This trend is most pronounced among the highest earners. 61.9% of consumers with a household income of $125,000 or higher have made AI-influenced purchases “many times,” and only 13.19% cannot recall any such purchase.
Why the increased uptake?
Although the speed of AI shopping adoption is startling, the reasons behind it are ultimately no mystery. Quite simply, the majority of people who have tried using AI tools have found that they make product research easier.
37.18% say that AI makes shopping research much easier. A further 40.9% say AI makes it somewhat easier.
For the most part, consumers also trust AI as a shopping tool.
Only around one in five shoppers say that they trust AI completely. But that rises above 60% when also counting those who mostly trust AI as a shopping tool, with some manual fact-checking.
In many ways, this is the expected pattern, given that the question was only put to people who have tried using AI as a shopping tool. Those with the least trust may not have tried it in the first place.
However, it’s quite a sharp contrast from another of our original surveys, assessing attitudes to AI Overviews. In that context, 82% of respondents were at least somewhat skeptical of the outputs, and yet the vast majority continued to rely on them anyway (without routinely checking sources) for the sake of convenience.
When it comes to shopping, users seem to have more genuine faith in AI outputs: They are using it not only for its convenience, but because it generally works well. That could be a sign of general AI improvements in the ~nine months between the surveys, or it may be a sign that commerce is an area where the technology can really excel for consumers.
The typical AI purchase pipeline
So most people are using AI commerce tools, and uptake has only gotten higher in the last six months. But interestingly, there is no clear consensus about how to use AI for shopping.
We know that product research and price comparison is popular. But that doesn’t tell us too much about what a typical AI-assisted purchasing journey actually looks like.
We gave respondents four options:
I use AI as a starting point and then consult other sources
I start on traditional retail websites and then use AI as a supplement
I use AI as my only source and then complete checkout externally
I complete the entire shopping process in AI, from initial research to checkout
There was an almost exactly even split between the first two options. 44.8% start on retail websites and then add in AI, while 44.03% use AI as a starting point before looking externally.
This is notable for retailers, and underlines the paramount importance of Generative Engine Optimization (GEO).
A huge base of potential customers are using AI as a starting point, so it is imperative that your brand gets organically mentioned. And for those starting on your website but then double-checking with AI, brand sentiment could make or break a sale.
The other thing that stands out from this data is that using AI for the entire shopping journey is still a fringe use case. Only 8.99% of users are using AI as their only source before purchasing, and only 2.18% are checking out via AI.
In Part 2, we’ll examine the reasons why. Questions in the second part were put to all respondents, to get a better idea of the current attitudes held by both adopters and non-adopters of AI commerce.
Spot the next “AI commerce” 12 months early. Exploding Topics Pro tracks 11M+ trends with search volume, growth curves, and category filters. Start your 7-day free trial.
Part 2: The AI commerce red line
Instant Checkout: Don’t know it, don’t like it
Regardless of the stage at which users introduce artificial intelligence into the shopping process, the final step is nearly always external checkout. Given that consumers are clearly keen on using AI as part of the commerce journey, tools that eliminate this point of friction make superficial sense.
That was the idea behind Instant Checkout from ChatGPT; you can do all of your research within the app, and then complete your purchase there as well. In effect, the AI agent completes the transaction on your behalf.
But awareness of new and upcoming tools that let you check out directly within an AI interface is quite low. 42.83% of people were not at all aware, with a further 23.01% only “vaguely aware.”
Unsurprisingly, those who use AI for shopping weekly or more are most likely to be “very aware” of Instant Checkout and similar tools (63.3%). But that drops to 25.19% among monthly users, and just 11.11% among those who have used AI shopping tools “a few times.”
Once respondents were told that these tools exist, the reaction could best be described as mixed.
From a preset list of options, “skeptical” was chosen most often (41.08%), followed by “suspicious” (33.1%). But respondents could pick more than one answer, and “excited” (31.61%), “happy” (24.33%), and “impressed” (24.03%) were the next most-common answers.
Those who chose to write in an answer of their own were overwhelmingly negative. Responses included “hunted/preyed upon,” “terrified,” “wary,” and “not interested.”
Crucially, there was significant negativity toward Instant Checkout even among those who are already routinely using AI tools for shopping.
29.82% of the most regular AI shopping users said they were suspicious of tools like Instant Checkout, and 29.59% reported being skeptical. Among monthly users, skepticism was the single most popular attitude (37.04%).
Meanwhile, only 2.22% of the people who aren’t currently using AI to shop reported being excited at the prospect of agents being able to carry out purchase orders.
In fact, the idea of AI purchasing power is actively making non-users less likely to try AI for shopping.
44.89% of AI shopping non-adopters are “much less likely” to try the technology as a result of these new tools. Over half are at least a bit less likely, and only 7.11% are more likely.
On the other hand, the most regular existing AI shoppers anticipate that tools like Instant Checkout will further increase their usage. 72.71% say that the innovations make them at least somewhat more likely to shop with AI more regularly.
Outside of power users and non-users, indifference is more common. 48.89% of monthly users anticipate Instant Checkout (and similar tools) will make no difference to their usage, as do 52.17% of occasional users.
And it seems OpenAI must have reached a similar conclusion. Mere months after launching Instant Checkout, it has rowed back on direct shopping features, doubling down on the discovery side of things.
Distrust of AI companies with payment data
One of the biggest hurdles when it comes to further integrating AI into commerce is that most people don’t feel comfortable trusting chatbots with their card details in order to make direct purchases easier in future.
In total, 51.45% of consumers are at least somewhat uncomfortable at the idea of AI tools storing their card details. Only around 1 in 4 are “very comfortable.”
As well as being the most popular response overall, “very uncomfortable” also cut across age groups to an unexpected degree. More than a third of consumers aged 18-29 said they would be very uncomfortable storing card details with an AI tool, despite being digital payment natives.
Even among the most frequent AI shoppers, barely more than half (50.69%) said they would be “very comfortable” with AI tools storing their card details. That dropped dramatically to 18.52% among monthly AI shoppers, 7.25% among those who use the technology occasionally, and just 0.89% among those who don’t currently use AI to shop at all.
Pacific residents are most likely to trust AI tools with their card details, with 64.48% at least somewhat comfortable, while the Middle Atlantic once again stands out as a distinctly pro-AI region. New England is the most distrustful (58.53% at least somewhat uncomfortable).
Who does AI commerce serve?
Tied in with this discomfort about payment details is the fact that consumers are skeptical of whether they are truly the intended beneficiaries of AI commerce technology.
Only 14.16% of respondents said consumers are the ones being primarily served by AI shopping tools right now.
The most common answer (27.52%) was that these tools are made to serve the interests of AI companies themselves. Brands and advertisers (27.32%) was another popular response.
And even among the most frequent users of AI shopping tools, only 23.85% of consumers believe they are the ones whom the tools are primarily serving. These power users were more likely to say that brands and advertisers are the ones being served.
Among less frequent users, skepticism rises sharply, to the point where just 2.22% of non-users believe AI shopping tools are primarily serving consumers right now.
“The mode amount a consumer would authorize AI to spend autonomously is $0.” — Exploding Topics, 2026 consumer AI commerce survey
Hard spending cap for autonomous AI purchases
Given that some degree of skepticism cuts across multiple demographics, it isn’t too surprising to learn that consumers remain reluctant to empower AI to spend vast sums autonomously.
However, the extent of the reluctance is eye-catching: the mode amount a consumer would authorize AI to spend autonomously is $0.
Specifically, we asked how much consumers would trust AI to spend in the scenario where they were instructing an AI agent to buy something once it became available. This hypothetical aligns closely with the stated use cases of the latest AI commerce innovations, including Google’s AP2 Protocol.
But right now, our survey shows the appetite is simply not there. 31.21% of consumers would not allow any autonomous AI spend at all, 17.45% would cap it at $20, and 20.74% would cap it at $50.
This immediately all but wipes out another of Google’s proposed use cases: the example of instructing an AI to buy concert tickets the moment they go on sale. Assuming most such transactions would exceed $100 total, only 11.71% of consumers would currently be comfortable trusting AI with the purchase.
AI companies even face a hard sell among their regular users. 51.84% of weekly AI shoppers would cap autonomous AI spend at $50 or less, as would 67.41% of monthly shoppers.
Barely more than one in five (20.87%) of the most frequent AI shopping users would be prepared to authorize a spend over $100.
Unsurprisingly, the highest earners are the most likely to trust AI to make bigger purchases. But even then, 68.57% would cap agents at $100 or less: 1/2000th of their annual household income at most.
Agentic commerce is here to stay
The tension at the heart of these results is that despite this reluctance to sanction AI spend, there is widespread belief that AI’s role in commerce will continue to get bigger.
More than half of people (55.83%) think AI will play a bigger role in how they shop in five years’ time. Only 12.37% believe it will play a smaller role.
Even among non-users, almost a third (32.44%) predict that AI will play an at least somewhat bigger role in how they shop in five years’ time. And 74.77% of the most frequent AI shoppers believe the technology will take on an even bigger role in how they make purchases.
A future of expanded AI commerce would come with further questions. For instance, a landscape of ads and sponsored links has the potential to disrupt the quality of AI outputs.
However, most consumers seem satisfied that increased AI shopping features won’t hurt the quality of responses. In fact, 48.35% believe the rollout of more shopping capabilities and the integration of ads will actually improve the overall standard of AI answers.
Only around one in 10 of the most frequent users predict that ads and shopping features will make AI outputs worse, a finding which AI companies could well interpret as something of a green light to push ahead with this kind of monetization.
The final sure sign that AI commerce will continue to grow is simply that shoppers like it. Even if there is some skepticism about whether consumers are truly the main beneficiaries, 55.83% agree that AI features make shopping at least somewhat better for consumers overall.
Among users and non-users alike, fewer than one in five people think AI has made the shopping experience any worse. That falls below one in 10 among those who have used the technology at all within the last six months.
Though there are disagreements about what direction this burgeoning technology should take next, it looks increasingly clear that AI shopping is here to stay.
If you’re a retailer, a tool like Semrush Enterprise AIO is more important than ever. 77% of your customers are using AI in their commerce journeys, and the visibility and reputation of your brand have the potential to transform your bottom line.
This survey was completed by 1,009 respondents in total. After the general questions about frequency of AI usage and AI shopping usage, non-users were skipped for the remainder of Part 1, before being reintroduced for Part 2 (where the opinions of non-users offered valuable insights).
All respondents were from the US, spanning all regions (there were no respondents from the US Territories). 56.03% of respondents were female, and 43.97% were male.
10 different household income bands were represented, from $0-9,999 up to $200,000+. The median income range was $75,000-$99,999.
On PPC Live The Podcast, I spoke with Peter Bowen, a Google Ads specialist with nearly 20 years of experience and a strong focus on B2B lead generation.
Pete shared two major lessons from his career: always check the basics, and never assume the systems around your ads are working just because the campaigns look fine.
The currency mistake that cost 10 times the budget
Pete Bowen shared an early mistake where a South African client’s account was set up in the UK, defaulting the currency to pounds instead of rand. That simple oversight led to spending roughly 10 times the intended budget, delivering great results at first — but ultimately setting unrealistic expectations and losing the client.
Why checklists protect PPC teams
The takeaway from that mistake was to formalise learning into process. Adding something as simple as a currency check to a setup checklist ensures that once a mistake is made, it doesn’t happen again — turning painful lessons into repeatable safeguards.
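To illustrate (this sketch is not from the podcast), a launch checklist can be encoded as simple assertions that block a campaign from going live. The field names are hypothetical, not Google Ads API objects:

```python
def run_launch_checklist(account: dict, brief: dict) -> list[str]:
    """Return a list of blocking failures; empty means safe to launch."""
    failures = []
    # The lesson from the rand-vs-pounds mistake: check currency first.
    if account["currency"] != brief["billing_currency"]:
        failures.append(
            f"Currency mismatch: account uses {account['currency']}, "
            f"client bills in {brief['billing_currency']}"
        )
    if account["timezone"] != brief["timezone"]:
        failures.append("Timezone mismatch")
    if not account.get("conversion_tracking_verified"):
        failures.append("Conversion tracking not verified")
    return failures

# A launch proceeds only when the checklist comes back clean.
issues = run_launch_checklist(
    {"currency": "ZAR", "timezone": "Africa/Johannesburg",
     "conversion_tracking_verified": True},
    {"billing_currency": "ZAR", "timezone": "Africa/Johannesburg"},
)
assert not issues
```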
The bigger problem: system decay
Beyond setup errors, Pete highlighted a more subtle but common issue he calls “system decay” — where the infrastructure connecting ads, tracking tools, CRMs and sales processes gradually breaks down without anyone noticing.
Why conversion data failures hurt performance
When conversion data stops flowing properly, Google’s algorithms lose the feedback they rely on to optimise. This can lead to reduced spend, poor performance or campaigns that suddenly stop delivering — even if nothing appears wrong inside the platform.
PPC managers need to look beyond the interface
One of the biggest mistakes advertisers make is focusing only on what happens inside Google Ads. Strong performance depends on the entire journey, from click to conversion to revenue, and any break in that chain can undermine results.
What to do when conversion tracking breaks
When tracking fails, the priority is to fix the root issue quickly and, where possible, use data exclusions to prevent bad data from influencing optimisation. Longer term, building monitoring systems that flag issues early is essential to avoid repeat problems.
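A minimal sketch of that kind of early-warning monitor, assuming a hypothetical get_daily_conversions() helper wired to your reporting export:

```python
from statistics import mean

def get_daily_conversions(days: int) -> list[int]:
    """Hypothetical helper: daily conversion counts, oldest first,
    pulled from your ads platform or CRM reporting export."""
    raise NotImplementedError

def check_conversion_flow(window: int = 14) -> str | None:
    """Compare today's conversions against the recent baseline."""
    counts = get_daily_conversions(window)
    baseline = mean(counts[:-1])
    today = counts[-1]
    if today == 0 and baseline > 0:
        return "ALERT: zero conversions today; tracking may be broken"
    if baseline > 0 and today < 0.5 * baseline:
        return f"WARN: {today} conversions is under half the {baseline:.1f} baseline"
    return None  # data is flowing normally
```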
The danger of optimising for clicks
Pete also pointed to a common but damaging mistake: optimising campaigns for clicks rather than outcomes. Without proper conversion tracking, advertisers can end up driving large volumes of traffic that never turn into leads or sales.
Why Performance Max needs strong tracking
Automation like Performance Max can amplify this issue, as it will follow whatever signals it receives. Without accurate conversion data, it can scale irrelevant traffic quickly, making strong tracking a prerequisite before leaning into automation.
Why bid strategies need guardrails
Google’s bidding systems are powerful but literal — they optimise toward whatever you define as success. That means advertisers need clear goals, reliable data and sensible guardrails, such as CPC limits, to avoid extreme or inefficient outcomes.
Testing AI features carefully
With newer tools like AI Max, the risk isn’t testing too early — it’s testing without a clear definition of success. Metrics like impressions and clicks are not enough; advertisers need to measure impact on qualified leads, sales and revenue.
The problem with “always be testing”
Peter also challenged the idea that everything should be constantly tested. Many accounts simply don’t have enough data to make small tests meaningful, so time is often better spent improving fundamentals rather than chasing marginal gains.
The key takeaway
The overarching lesson is straightforward: mistakes are part of the process, but only if they lead to better systems. Every error should result in a checklist, a monitoring process or a safeguard — ensuring it doesn’t happen again.
As ad dollars begin shifting toward ChatGPT, ad tech firms have started working to make that transition as seamless as possible.
What’s happening. Adthena launched a new tool, AdBridge, designed to convert existing Google Ads campaigns into formats ready for ChatGPT advertising. The pitch is simple: don’t rebuild from scratch — repurpose what already works.
The tool analyzes advertisers’ search campaigns to generate keyword lists, negative keywords, and competitive insights that can be directly applied to ChatGPT campaigns. It also surfaces which brands are showing up in specific auctions, how often they appear, and which prompts are triggering those placements — giving marketers more than just a copy-paste approach.
Why we care. Adthena’s AdBridge makes it much easier to shift budget from Google Ads into ChatGPT without rebuilding campaigns from scratch. By repurposing existing keywords, learnings, and competitive insights, brands can test and scale ChatGPT ads faster with less risk. As the platform opens up and inventory grows, tools like this lower the barrier to entry and could accelerate how quickly ChatGPT becomes a serious performance channel.
As Adthena CMO Ashley Fletcher put it, the goal is to get campaigns “ready so they can go straight in,” mirroring the CSV-based workflows advertisers already use across major platforms.
Early testing. The company has already held multiple sessions with large enterprise brands testing the tool, signaling early demand from advertisers looking to scale activity in ChatGPT’s still-limited ad ecosystem.
Between the lines. This isn’t just about convenience — it’s about momentum. Advertisers experimenting with ChatGPT ads have faced constraints like low inventory and limited scale. By making it easier to deploy campaigns quickly, Adthena is positioning itself to accelerate adoption as those constraints ease.
Zoom in. AdBridge is part of a broader push from Adthena, including Arlo, an AI assistant that allows advertisers to query performance data and compare results between ChatGPT and search campaigns. Together, they point to a future where managing AI-driven ad channels looks increasingly similar to existing search workflows.
Bottom line. If ChatGPT ads are going to compete for search budgets, the winners may be the tools that make switching feel effortless — and Adthena wants to be first in line.
Microsoft teased new AI reporting features within Bing Webmaster Tools that enhance its AI performance reports and other AI-related reporting. The new features showcased include citation share, grounding query intent, and GEO-focused recommendations.
More details. Several attendees shared screenshots of the presentation, given by Krishna Madhavan of Microsoft at SEO Week today in New York City. One post read:
“Bing Webmaster Tools just dropped some VERY COOL stuff at #SEOWeek 2026”
Not live yet. These new features and reports do not appear to be live yet, but Microsoft showed them off anyway.
Why we care. More transparency into how your content is performing within the AI search results is useful. So we all welcome additional reporting from Bing Webmaster Tools.
It is not clear exactly how these reports will work or when they may go live, but you can check those posts for more details.
If you’re reading this, you’re likely an SEO aficionado like me. I’m a seasoned SEO with 10+ years of agency experience.
Being on the agency side gave me deep SEO expertise, exposure to top industry talent, and experience working with some of the world’s most well-known brands.
I did a bit of everything on the agency side — from technical SEO to content marketing to new business.
Working at an agency is nothing like working in-house. After a long run on the agency side, I moved in-house for the first time. Here are seven things I’ve learned since making the switch.
1. Owning performance changes how SEO is evaluated
On the agency side, when performance drops, you know the drill: a frantic message hits your inbox — traffic is down — and the client needs a report on what’s happening by yesterday.
You then spend the next few hours in the SEO trenches analyzing search trends, tracking ranking changes, and digging through Google Search Console to find your answers. You cross your T’s. Dot your I’s. You beautify that report a bit. And — finally — you fire it off to your client.
After sending the report, you may get a few questions from the client. A little back and forth, but for the most part, your job is done. The fire drill is over. You’ve done everything you can from the agency perspective. On to the next client on your roster.
This situation looks a lot different on the in-house side.
From my new perspective, receiving that agency report is just the beginning. Now, I’m the one on the hook for translating that analysis, figuring out how to socialize it, and turning it into a concrete action plan to turn performance around.
I always knew my clients were under a lot of stress. I figured their bosses were the ones catching the dips and asking difficult questions, leading to that inevitable frantic message in my inbox. But, boy, it hits differently when you’re the one getting asked those difficult questions.
When you’re in-house, you aren’t just reporting on a dip in performance — it feels like you’re defending your entire SEO strategy. The way you frame that data can make or break the projects or the direction you’re taking the program.
It’s a lot of pressure — and it’s different when you’re responsible for the results.
2. The deliverable is no longer the destination
On the agency side, the deliverable is the destination. You spend hours researching, analyzing, and refining a beautiful slide deck. Each slide flows, tells a story, and looks pristine. I mastered this — and did it fast.
Now that I’m in-house, I’ve realized the deliverable isn’t the destination anymore.
It’s all about the execution.
I was lucky enough during my agency days to have one engagement where I was deeply embedded in day-to-day operations. I was doing things like building dev tickets, reviewing Figma designs, and actually pushing CMS updates. I thought I knew exactly what execution looked like.
But executing while in-house is way more challenging than I expected.
In order to execute on an SEO strategy, you have to work through the entire org to bring your vision to life. You need to coordinate with the design team to review Figma designs. You need to align messaging and copy with PMMs. You need to work with project managers to make sure deadlines are being met. You need to work with devs to make sure the technical implementation is correct.
It’s not easy. Sometimes it’s messy. And — quite often — it’s pretty frustrating.
But here’s the truth: once you move from polished decks to pushing changes live, you become 10x the SEO you were before.
3. The shift from agency partner to internal stakeholder
One of the more interesting parts of making the switch to in-house was that, suddenly, I became the client. I’m the one on the other end of the video call. I’m the one receiving the strategy docs. I’m the one calling all the shots.
And honestly? It’s been a huge (and super exciting) opportunity to take everything that I’ve learned on the agency side and put it into action.
And I’ve gotten to decide what type of client I want to be.
I had a wide range of clients on the agency side. Some disappeared. Some were demanding and made every call tense. Some pushed impossible deadlines. Some didn’t trust my judgment. Some couldn’t execute the strategy.
You name it — I’ve probably experienced that type of challenging client.
Then I had dream clients — kind, collaborative, and treated me like an equal. Calls felt like catching up with a friend before getting into SEO. They could take a strategy and execute without being demanding or difficult.
That was the client I wanted to be. And that’s the client I strive to be, too.
4. Storytelling matters more than strategy
I’m a technical SEO at heart.
Nothing makes me happier than seeing the indexing rate improve after an XML sitemap refresh. Or seeing a massive improvement to Largest Contentful Paint after implementing Core Web Vitals optimizations. Or even a perfectly executed hreflang optimization to target your key international markets.
Chef’s kiss — it warms my technical SEO heart to see all this work get executed.
The problem? Your execs don’t understand that technical jargon.
That’s where storytelling becomes your best friend. And I’d say it’s almost as important as the execution itself.
Because it doesn’t matter if you do all this SEO work if your bosses can’t understand it. You need to tell a story about what you did, why you did it, and the results. All in a simple, easy-to-understand format — ideally with a pretty visual right next to it.
Let’s take, for example, hreflang optimizations. You realize that hreflang is important. But how do you make it seem important for an exec so that they can understand it?
What I do is pretty simple. I explain the background behind why I’m doing what I’m doing and frame it in simple terms.
Instead of saying that we updated hreflang to target France correctly, I would frame it as improving the search experience for France searchers. I’d then show a SERP screenshot of before the optimizations to show incorrect targeting, and follow it up with an updated screenshot with correct targeting. Lastly, I’d share results — ideally, an increase in CTR, traffic, or conversions.
(Side note: If you’re one of my agency partners reading this, you know I ask for an insane amount of screenshots — but this is exactly why I do it.)
Following this formula allows you to:
Explain why we implemented the optimization (in this case, incorrect targeting in France).
Show what users are seeing in the market.
Demonstrate that this optimization achieved business results.
It’s a simple blueprint that makes it easy for execs to understand the importance of your optimizations. I know it may seem small, but storytelling is one of the secrets to success in in-house life.
5. Cross-functional collaboration is everything
In a massive organization, it’s so easy to live on an SEO island. If you’re not collaborating, you can easily find yourself on a beach hanging out with a volleyball named Wilson — just optimizing <title> tags, writing meta descriptions, and optimizing on-page copy for keywords.
But there’s absolutely no way you’re going to get anything meaningful done without the support and assistance from others within your organization.
You need to be a team player. And cross-functional collaboration is important for success.
After years on the agency side, I learned to move fast — really fast. When I went in-house, I tried to keep that pace. I wanted to make changes, test, and see results immediately. I saw documentation as a hurdle, and large cross-functional meetings without progress as a waste of time.
Quickly, I found out that’s not the case. You need the support of those partners in cross-functional meetings to get things done.
It takes time to get to know your cross-functional teams and understand what they’re good at, what their goals are, and — crucially — where they need support. I’ve learned that once you understand the developer’s sprint capacity or a product marketing manager’s roadmap, you can stop just requesting things from them and start partnering with them to get things done.
When you align your SEO goals with their existing priorities, you stop being a line item in their backlog and start becoming a teammate. In-house, having a teammate in engineering or product is the difference between a strategy that sits in a slide deck and one that actually ships.
6. Taking initiative and trusting your judgment
OK, fine, I added a cliché to the list. But in the in-house world, it might be the most important one.
I’ve been given this advice several times throughout my career. If you want to get something done, go get it done. Don’t wait around for permission from your bosses to do something that will have a significant impact. If you wait for permission, you may never get anything done.
That’s why I ask for forgiveness — not permission.
When I started in-house, I knew the team was lean. I knew my bosses had a million things on their plates. And, most importantly, I knew they hired me for a reason: to drive organic growth.
During my first few weeks, I remember asking myself, “Can I launch this content?” “Can I expand into this market?” “Am I allowed to test this tactic?”
And then it hit me: This is exactly why I’m here. They hired me to make these decisions and move the needle, not to add more approval meetings to their calendars.
And if I asked for permission for everything, I would never be able to get anything done.
This is why I trust my instincts when it comes to SEO strategy and execution. I rely on my 10+ years of experience in the SEO game. If I think something is going to drive growth for the business, I don’t just sit around and wait for permission to do something. I execute.
And if something doesn’t turn out exactly how I had planned? That’s when I take the forgiveness route.
7. Results hit differently when you own them
I did a lot of high-impact, business-changing work during my agency life. I’ve built the strategies, seen them come to life on a site, and watched them drive results. Driving results and building case studies have always been my favorite part of the job.
However, when you’re sitting agency-side, you’re often the silent partner in those results, not the owner.
Now that I’m in-house, I get to see my projects come to life on the site — and it’s pretty cool.
During my first few months in-house, I knew I wanted to make an impact quickly. I implemented a few of my high-impact, low-effort optimizations — the ones I would typically implement for a new client I had just onboarded.
After reviewing monthly reports, I saw an insane spike in performance that lined up exactly with a significant site update we implemented.
I remember thinking, “Wait, was that us?”
The answer: It sure was.
I then created my first case study and shared the results throughout our organization. And, shockingly (to me, anyway), people were really interested. Within my first three months, I found myself sharing those results at our entire company’s all-hands meeting — something I never expected to happen.
I used to think a massive organization wouldn’t be interested in SEO, but I was wrong. When it comes to moving the needle for the business, everyone cares.
So, yeah, it’s always fun to get SEO results. But it’s a lot cooler when you’re in-house.
Is making the switch worth it? That’s for you to decide
Making the switch from agency to in-house life has been a lot of adjectives for me. Exhausting, challenging, and exciting are some of the first that come to mind.
But the biggest takeaway after one year in-house? I’ve learned a lot.
I hope you can take these seven lessons and apply them to your own journey — whether you’re at an agency or leading an in-house team right now.
The transition isn’t always easy, but for me, seeing the strategy finally turn into reality has made every cross-functional meeting and performance fire drill worth it.
Paid search platforms are getting better at deciding who should see your ads, often without relying on the keywords you choose.
As that shift accelerates, optimization is moving away from query-level control and toward signals like audience data, landing page context, and conversion behavior. Understanding that change is key to knowing what to actually optimize for now.
When keywords gave us control and what comes next
A decade ago, our world was defined by the illusion of control. Every decision we made was anchored in the keyword. Hypersegmentation and single keyword ad groups (SKAGs) ruled the land.
If possible, we’d build a unique landing page for every single keyword in every single ad group. The process was tedious, manual, and we loved it because we felt like we were the ones driving the machine.
Fortunately (or unfortunately, depending on how much you miss spreadsheets and Editor), times have changed. We’ve long speculated about whether Google and Microsoft would finally sunset keywords altogether. That day feels closer than ever.
From Performance Max to the emerging AI Max solutions — and even the shift toward contextual, LLM-driven search like ChatGPT — the industry is moving toward a keywordless reality.
But if we take a step back, we have to admit why the keyword is so vital. It’s a window into clear intent that tells us exactly where a user is in their journey:
The symptom: “Productivity tools for remote teams.”
The consideration: “Asana vs. Trello comparison.”
The decision: “Monday demo.”
If those signals are now handled behind the scenes by a black box, the role of the marketer changes. So what are we actually optimizing for?
Intent is inferred from a complex web of signals that have rendered the individual keyword secondary. To win in 2026, your optimization focus must shift toward three core pillars.
Audience data (the ‘who’ over the ‘what’)
Google’s algorithms now prioritize customer match and first-party data over the query itself. With the full integration of the Data Manager API, the system knows which users in the auction match your closed-won deals.
You no longer bid on the query “cloud security.” You bid on the director of IT (because you’re sharing first-party data) who has a history of researching SOC 2 compliance, even if their current search is as vague as “scaling infrastructure.”
B2B match rates are notoriously stubborn. But this is exactly where you need to evolve your strategy. Move beyond one-to-one list matching and get creative with integration partners to enrich your signals.
Start by clustering individuals by shared pain points, then use on-site experiences to allow them to self-identify. By the time they hit a remarketing list, you aren’t just targeting a “user,” you’re targeting a verified intent state.
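On the mechanics: Customer Match uploads expect identifiers such as emails to be normalized (trimmed and lowercased) and SHA-256 hashed before they leave your systems. A minimal sketch of that preparation step; the upload itself (via the Google Ads API or Data Manager tooling) is omitted, and the segment is hypothetical:

```python
import hashlib

def normalize_and_hash_email(email: str) -> str:
    """Customer Match expects emails trimmed, lowercased,
    and SHA-256 hashed (hex digest)."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical segment: closed-won contacts who researched SOC 2,
# clustered by pain point so the list represents an intent state.
closed_won_soc2 = ["IT.Director@example.com ", "ciso@example.org"]
hashed_upload = [normalize_and_hash_email(e) for e in closed_won_soc2]
```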
Landing page and creative context
Your landing page is a data source. Google’s AI scans your page to understand the nuance of your offering. Creative assets are also important signals and need to complement your targeted themes and keywords, plus your landing page content.
If your landing page clearly articulates a “mid-market manufacturing” use case, the AI will automatically find those users, even if they never type the word “manufacturing.” Your “keyword strategy” is now your content strategy.
You might think looking at Meta is a deviation here, but the parallels are impossible to ignore. Meta’s Andromeda retrieval engine now influences a massive portion of the social auction by using the creative itself as the primary targeting signal.
If both platforms are moving toward a world where your assets (whether it’s a 15-second video or a high-value landing page) are what actually define your audience, you have to ask: How much weight are you giving your creative inputs versus your technical ones?
Historical conversions and pipeline velocity
With journey-aware bidding and value-based bidding, the algorithm isn’t just looking for the final click. It’s analyzing the historical sequence of a user’s journey.
Optimization now happens against “high-value need states.” You’re feeding the system data on which mid-funnel behaviors (like a whitepaper download or a webinar sign-up) actually lead to six-figure contracts.
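One hedged way to feed those values: derive a proxy conversion value for each mid-funnel action from historical close rates and average contract value. All numbers below are placeholders you would replace with your own CRM data:

```python
# Hypothetical historical data: P(closed-won | action) and ACV.
AVG_CONTRACT_VALUE = 120_000  # placeholder six-figure ACV

CLOSE_RATE_BY_ACTION = {      # placeholder rates from your CRM
    "whitepaper_download": 0.004,
    "webinar_signup": 0.012,
    "demo_request": 0.15,
}

def proxy_conversion_value(action: str) -> float:
    """Expected pipeline value of a mid-funnel action:
    close rate x average contract value."""
    return CLOSE_RATE_BY_ACTION[action] * AVG_CONTRACT_VALUE

# e.g., a webinar sign-up is worth 0.012 * 120,000 = $1,440
# when fed back as a conversion value for value-based bidding.
print(proxy_conversion_value("webinar_signup"))  # 1440.0
```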
The great intent shift: Query-level vs. user-level
The most significant mental hurdle for digital marketers is the shift from query-level intent to user-level intent.
| Feature | Query-level intent (legacy) | User-level intent (2026 and beyond) |
| --- | --- | --- |
| Primary driver | The specific words typed. | The user’s historical behavior and context. |
| Logic | “They are in state X, so they need Y.” | Triggered by a predicted “need state.” |
| Measurement | CTR and CPC. | Pipeline value and predicted LTV. |
| Auction entry | Triggered by a keyword match. | Triggered by a predicted “need state.” |
In the old model, a query like “how to manage payroll” might have been ignored by an enterprise SaaS company as “too informational.” In 2026, the AI knows if that user is a student or a VP of finance at a 5,000-employee firm.
If it’s the latter, the user-level intent is commercial, regardless of the query-level phrasing, assuming you’re providing the right signals (see what I did there?). If you’re advertising on Microsoft Ads, you can leverage LinkedIn’s profile targeting.
Now that AI is handling the matching, your job has evolved from mechanic to data architect.
Feed the beast with better data: Your competitive advantage is the quality of your CRM integration. If you feed the AI junk leads, it will efficiently find you more junk. You must optimize for value-based bidding.
Audit your signal health: Are your landing pages optimized for AI readability? Do they have the technical schema and depth of content that allows Google to categorize your “intent bucket” correctly?
Embrace the black box with guardrails: Move away from micromanaging search terms, and start managing brand exclusion lists and negative intent themes (a sketch follows this list).
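To illustrate that last point, here is a purely hypothetical guardrail configuration; none of these keys are Google Ads API fields, they just show the shape of declaring limits and exclusions up front:

```python
# Hypothetical guardrail config, kept in a reviewable file
# rather than in someone's head.
GUARDRAILS = {
    "portfolio_max_cpc": 12.00,        # bid ceiling for the strategy
    "brand_exclusions": ["not-our-brand.example"],
    "negative_intent_themes": ["free", "careers", "diy"],
}

def violates_guardrails(search_term: str, cpc: float) -> bool:
    """Flag auction outcomes that breach the declared guardrails."""
    too_expensive = cpc > GUARDRAILS["portfolio_max_cpc"]
    bad_theme = any(t in search_term.lower()
                    for t in GUARDRAILS["negative_intent_themes"])
    return too_expensive or bad_theme
```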
The future of search isn’t about finding the right words. It’s about being the best answer for the right person at the exact moment their need state evolves.
Keywords were the training wheels. Now, the wheels are off. It’s time to see how fast your data can take you.
The failure modes are structural — dialect defaulting, format contamination, and regulatory hallucination — and they’re amplified in a generative search environment where one synthesized answer replaces 10 blue links.
That distinction is now a visibility constraint. Generative systems must resolve ambiguity one way or another. When your content doesn’t make its market context explicit, the system defaults to the statistical average — and that’s where otherwise solid content gets misapplied or ignored.
Below is a framework for fixing that problem. It’s designed to make market context explicit — across content, technical signals, and retrieval systems — so AI doesn’t have to guess.
What is cultural SEO?
Cultural SEO goes beyond hreflang and localization. The technical foundation is locale precision — controlling market context across retrieval and generation so an AI system treats your Spanish content as belonging to a specific country, not to “Spanish speakers” in the abstract.
Here’s the framework that works when you operate across Spain and Latin America.
But there’s a prerequisite no framework can substitute for: you can’t optimize for a market you don’t serve.
Cultural SEO isn’t a localization layer you bolt onto a website. It’s the technical expression of a business decision to operate in a market — with real logistics, real customer support, real legal compliance, and real product-market fit.
If you ship from Spain to Mexico with a three-week delivery, process returns in euros, and have no local support channel, a perfect hreflang setup won’t save you. The model might surface your content, but the user will bounce — and the next time the model learns from that signal, you’ll be deprioritized.
Internationalization means speaking the market’s language in every sense: visual trust cues, payment methods, delivery expectations, regulatory compliance, and customer experience.
The four pillars below assume you’ve made that commitment. If you haven’t, start there. Everything else is decoration.
Pillar 1: Granular market segmentation
Most international SEO teams think of segmentation as a folder structure: /es-es/, /es-mx/, /es-ar/. But that's not enough.
In generative search, the question is whether the system recognizes that page as belonging to Mexico — and whether it has enough market-specific signals to prefer it over a generic alternative. If your architecture collapses variants, your visibility collapses with it.
Implement granular hreflang and URL structures
Don’t just use es. Use es-ES for Spain, es-MX for Mexico, es-AR for Argentina, es-CO for Colombia, and es-CL for Chile. Include x-default for users who don’t match any specific locale. Consider ccTLD strategies (.es, .mx, .com.ar) where they make business sense.
ccTLDs remain one of the strongest explicit geographic signals on the open web, and they reduce ambiguity for both search engines and downstream retrieval systems. Google’s documentation on localized pages supports this specificity.
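For concreteness, here's a minimal sketch of a self-referencing hreflang cluster, generated in Python; the domain and the /es-xx/ folder pattern are assumptions for illustration. Every market variant of the page should emit the full set of tags, pointing at itself and its siblings:

```python
# Minimal sketch: emit the hreflang cluster every variant of a page should
# carry. The example.com domain and /es-xx/ folders are illustrative.
BASE = "https://example.com"
LOCALES = {
    "es-ES": "es-es", "es-MX": "es-mx", "es-AR": "es-ar",
    "es-CO": "es-co", "es-CL": "es-cl",
}

def hreflang_cluster(path: str) -> str:
    tags = [
        f'<link rel="alternate" hreflang="{code}" href="{BASE}/{folder}{path}" />'
        for code, folder in LOCALES.items()
    ]
    # x-default catches users who match none of the declared locales.
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{BASE}/es{path}" />')
    return "\n".join(tags)

print(hreflang_cluster("/devoluciones/"))
```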
But here's the caveat. In the first article, I discussed Motoko Hunt's concept of geo-legibility and the phenomenon of geo-drift — AI systems misidentifying geography because language alone doesn't resolve market context.
Simply put, if your Spanish content doesn’t carry explicit country-level signals beyond hreflang, the model has to guess. Guessing, at scale, means defaulting.
Ultimately, hreflang helps with traditional routing, but in AI synthesis, it’s one signal among many — and not necessarily the decisive one.
When a generative system assembles an answer, it weighs semantic relevance, authority, and content-level cues alongside metadata.
If your Spanish content relies on hreflang alone to declare “this is for Mexico,” you’re betting on a single signal in a multi-signal environment. Geographic markers need to live in the content itself and in structured data — not only in HTTP headers.
Don’t canonicalize all locales to a single master URL
When you point es-MX, es-AR, and es-CO pages to one canonical es URL, you’re telling engines there’s only one “real” version — the exact Global Spanish assumption you’re trying to avoid. Each market page should canonicalize to itself.
Avoid IP-based redirects
Google cautions against this. Crawlers may not see all variants. More importantly, AI crawlers don’t carry IP signals the way users do. Offer a visible region selector and let users choose.
Encode market cues in structured data
This is essentially what Hunt calls geo-legibility — encoding geography, compliance, and market boundaries in ways machines can parse (a combined sketch follows this list):
Use priceCurrency with ISO 4217 codes (EUR, MXN, ARS, COP, and CLP).
Use PostalAddress with explicit addressCountry.
Add areaServed to declare which markets you serve — the machine-readable equivalent of saying “we operate here, not everywhere Spanish is spoken.”
Use sameAs to connect to region-specific knowledge graphs (e.g., link your Mexican entity to Mexican directories and chambers of commerce, not just your global Wikipedia page).
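Putting those properties together, a minimal JSON-LD sketch for a Mexico product page might look like the following; the names, price, and URLs are placeholders, and the point is that currency, country, and areaServed all agree on one market:

```python
import json

# Minimal sketch: market-explicit JSON-LD for a Mexico product page.
# All values are placeholders; what matters is that every geographic
# signal points at the same market.
product_mx = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Tenis urbanos",  # market-native terminology, not "zapatillas"
    "offers": {
        "@type": "Offer",
        "price": "1234.56",
        "priceCurrency": "MXN",  # ISO 4217 -- must match on-page pricing
        "areaServed": {"@type": "Country", "name": "MX"},
        "availableAtOrFrom": {
            "@type": "Place",
            "address": {"@type": "PostalAddress", "addressCountry": "MX"},
        },
    },
    "sameAs": [
        # Region-specific corroboration, not just the global profile.
        "https://directorio-ejemplo.mx/tu-marca",
    ],
}

print(json.dumps(product_mx, ensure_ascii=False, indent=2))
```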
A practical example: if your Mexico page shows prices in MXN, but your structured data still says EUR because it was copied from the Spain template, the model sees a conflict. Conflicts breed uncertainty. Uncertainty breeds generic answers. Generic answers are where Global Spanish lives.
A note on es-419: It can be useful as a catch-all for Latin American Spanish where market-specific pages don’t exist, but it should never substitute for es-MX, es-AR, or es-CO when the content involves legal, financial, or compliance information. Generic means vulnerable.
If your market pages aren’t self-evident to machines, the system will resolve ambiguity for you — and defaults win.
Pillar 2: Transcreation, not translation
Translation converts words. Transcreation converts meaning. The distinction matters because translated templates are easy for models to deduplicate — and deduplication is where localized pages go to die.
If two regional pages are 95% identical, the model will treat them as one. The “default” will win. Localized pages need substantive differences that prove market specificity, including:
Local examples and FAQs: A FAQ about tax deductions should reference SAT in Mexico, AEAT in Spain, and AFIP in Argentina — not all three in a dropdown.
Local legal references: Privacy content should cite GDPR + LOPDGDD for Spain, and LFPDPPP for Mexico, not a generic “applicable data protection laws.”
Native terminology: Zapatillas vs. tenis, ordenador vs. computadora, and cesta vs. carrito. These aren't synonyms. They're market identifiers that signal "this content was made here."
Local pricing and formatting: Not just the currency symbol — the entire numeric convention. Spain uses 1.234,56 € while Mexico uses $1,234.56. Get it wrong, and the content reads as imported (see the formatting sketch after this list).
Local proof: Testimonials, case studies, partnerships, and press coverage from the target region. Not imported. When a model evaluates whether your content is authoritative for Mexico, it looks for Mexican corroboration.
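The formatting sketch referenced above, assuming the Babel library (pip install Babel — any locale-aware formatter works): derive number and currency formatting from the market locale instead of hardcoding it in a shared template.

```python
# Minimal sketch using Babel: the same amount, formatted per market locale.
from babel.numbers import format_currency

print(format_currency(1234.56, "EUR", locale="es_ES"))  # 1.234,56 €
print(format_currency(1234.56, "MXN", locale="es_MX"))  # $1,234.56
```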
The classic example: McDonald’s “I’m lovin’ it” became “Me encanta” — not a literal translation, but an emotionally equivalent expression. Apple’s iPod Shuffle tagline, “Small talk,” became “Mira quién habla” for Latin American Spanish.
These brands understood that meaning doesn’t translate. It must be rebuilt.
Start with keyword research
Identify which Spanish-speaking markets have the most search volume and business potential for your verticals. Volume alone isn’t enough. Consider market maturity, competitive landscape, and conversion potential. Then bring in native speakers from those specific countries.
This doesn’t mean rigid dialect policing. Context matters — a premium brand in Mexico City might use tú deliberately for intimacy. The test is whether those choices are strategic or inherited from the training data’s statistical average.
What ‘substantive difference’ looks like in practice
Take a returns policy page. Spain (/es-es/devoluciones/) and Mexico (/es-mx/devoluciones/) shouldn’t differ only in currency symbols. At least one section needs to be genuinely market-specific:
Spain: Consumer rights framing under EU regulation, SEUR or Correos as default carrier, Bizum as a familiar local payment entity, and vosotros register.
Mexico: PROFECO consumer authority framing, local paqueterías as shipping context, OXXO as a familiar local payment context (where relevant), and ustedes register.
Both: Distinct FAQs written in the market’s register, addressing questions that actual customers in that country ask.
If the pages are 95% identical after these changes, they’re not differentiated enough. The model will still collapse them.
The feedback loop makes it worse: when a Mexican user lands on “españolized” content and bounces, that rejection signal teaches the model not to retrieve that page for Mexico next time. Poor transcreation doesn’t just lose one visit. It trains the system against you.
Pillar 3: Locale constraints at the retrieval layer
This pillar addresses a layer that most traditional SEO doesn't touch — and it's where a lot of the Global Spanish problem actually lives.
If you’re building RAG-powered experiences (chatbots, AI assistants, and AI-enhanced customer support) or optimizing content for AI discovery, the question is: What content is eligible to be retrieved and synthesized for a given market?
Without explicit constraints, the model pulls from its statistical average — which, in this case, is "Global Spanish." The fix requires intervention at the retrieval layer (a sketch follows this list):
Filter sources by locale metadata before generation begins: Don’t let a Mexican user’s query pull from your Spain knowledge base unless you’ve explicitly marked that content as applicable to Mexico.
Prefer user-declared markets over inferred signals: If a user selects “Mexico” in your interface, that should be a hard constraint, not a suggestion.
Use hard constraints in system prompts: “Spanish (Mexico), MXN, SAT, Mexican legal context” — not just “Spanish.” The more specific your retrieval parameters, the less room the model has to improvise.
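The sketch referenced above: a minimal locale hard-constraint applied before generation. The document model and field names are hypothetical stand-ins for whatever retriever or vector store you actually run.

```python
# Minimal sketch: filter eligible content by market BEFORE retrieval and
# generation. Doc, market codes, and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    market: str            # market the content was written for: "MX", "ES", ...
    applies_to: set        # markets the content is explicitly valid for

def eligible_docs(docs: list[Doc], user_market: str) -> list[Doc]:
    """User-declared market is a hard constraint, not a suggestion:
    only content declared valid for that market may be retrieved."""
    return [d for d in docs if user_market in d.applies_to]

def build_system_prompt(user_market: str) -> str:
    # The more specific the constraint, the less room to improvise.
    return (
        f"Answer in Spanish ({user_market}). Use {user_market} currency, "
        f"regulators, and legal context only. Do not cite other jurisdictions."
    )
```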
Think of it as the AI equivalent of telling your customer service team: “If a caller is from Mexico, use the Mexico playbook. Don’t improvise.”
This matters beyond your own properties. Peec AI's analysis found that up to 43% of fan-out background searches ran in English, even for non-English prompts. That's a structural disadvantage for brands whose authority signals exist only in local-language corpora.
Spanish sessions may still trigger English sub-searches, which changes which sources are eligible for retrieval. If the model’s own retrieval is biased toward English sources, your Spanish content needs to be unambiguously market-specific to compete for selection.
Pillar 4: Market authority through entity reinforcement
LLMs learn from your site and what the web says about you.
This isn’t traditional link building. It’s regional corroboration — building the external signal layer that tells a model where your brand operates and who considers you authoritative:
Local media mentions: A feature in top-tier national business press in your target market carries different geographic weight than a mention in a U.S. or U.K. publication. The model infers where you’re relevant from who talks about you.
Local industry citations: Partnerships with local chambers of commerce, industry associations, and regulatory bodies.
Region-specific knowledge graph reinforcement: Your Google Business Profile, local directory listings, and Wikipedia presence should all consistently reflect which markets you serve.
Local backlink ecosystem: Links from .mx, .es, and .ar domains reinforce geographic authority in ways that generic .com links don’t.
This is how you stop being a Spanish brand and become a Mexican authority — or both, explicitly. The key is intentionality: If you serve both markets, the model needs to see distinct authority signals for each, not a single blended profile.
| Error class | Example | Impact |
|---|---|---|
| — | Wrong product categories, wrong local entities, incorrect eligibility | Click-through and engagement drops |
| Brand voice | Formality mismatch (too formal in Mexico, too casual in Colombia) | Brand perception damage |
| Retrieval contamination | Facts or citations sourced from a different locale than the target user | Errors propagated into AI summaries |

Cultural Mismatch Error Taxonomy — six error classes for auditing AI-generated content across Hispanic markets.
If you want a quick QA starting point, check three things first: the currency symbol, the regulator name, and the second-person register. Those three alone will catch most critical mismatches.
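A minimal sketch of that three-signal check; the expected values per market are illustrative assumptions you would extend from the signal table below:

```python
import re

# Minimal QA sketch for the three fastest checks: currency, regulator,
# and second-person register. Rules per market are illustrative.
EXPECTED = {
    "es-MX": {"currency": ["MXN", "$"], "regulator": ["SAT", "PROFECO"],
              "forbidden_register": [r"\bvosotros\b", r"\bhabéis\b"]},
    "es-ES": {"currency": ["EUR", "€"], "regulator": ["AEAT"],
              "forbidden_register": []},
}

def mismatches(text: str, market: str) -> list[str]:
    """Return which of the three signals look wrong for this market."""
    rules, found = EXPECTED[market], []
    if not any(c in text for c in rules["currency"]):
        found.append("currency")
    if not any(r in text for r in rules["regulator"]):
        found.append("regulator")
    if any(re.search(p, text, re.I) for p in rules["forbidden_register"]):
        found.append("register")
    return found

print(mismatches("Podéis pedir la factura al SAT en euros (€).", "es-MX"))
```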
The regional signal table
For teams working across multiple Hispanic markets, these are the signals that most commonly trigger cultural mismatch in AI outputs:
| Signal | Spain (es-ES) | Mexico (es-MX) | Argentina (es-AR) | Colombia (es-CO) | Chile (es-CL) |
|---|---|---|---|---|---|
| Second-person | Vosotros/ustedes | Ustedes; tú | Vos/ustedes | Tú/usted varies | Tú/ustedes; local slang |
| Currency | EUR (€) | MXN ($) | ARS ($) | COP ($) | CLP ($) |
| Decimal separator | Comma (1.234,56) | Period (1,234.56) | Varies | Varies | Varies |
| Hreflang | es-ES | es-MX / es-419 | es-AR | es-CO | es-CL |
| Privacy framework | GDPR + LOPDGDD | Federal law (2025 changes) | Habeas Data | National data protection | Updated legislation |
| Fiscal/commercial ID | NIF / CIF | RFC | CUIT / CUIL | NIT | RUT |
| Typical LLM default risk | Grammar as "standard," vocab ignored | Vocab as "standard," context flattened | Voseo erased or flagged | Ustedeo misidentified | Local markers missed |
Regional Signal Comparison — key locale markers across five major Hispanic markets. Note: number formatting can vary by platform; the key is internal consistency within a market experience. Regulatory details evolve; the point is to prevent wrong-jurisdiction defaults in YMYL content.
Where this breaks first: YMYL verticals
Not every industry feels this problem equally. But if you work in any of these verticals, cultural SEO means risk management.
Finance: Regulators, tax logic, product naming, and ID formats. Wrong jurisdiction bleed means your AI-generated content isn’t just unhelpful — it may be noncompliant.
Legal: Rights language, jurisdiction references, and compliance frameworks. An LLM citing GDPR to a Mexican user isn’t being cautious. It’s being wrong.
Healthcare: National agencies, approved terminology, and safety messaging. Drug names, dosage conventions, and regulatory bodies differ across every market.
Ecommerce: Payment methods (Bizum ≠ OXXO), shipping norms, returns, and installment culture. When your market cues conflict, the system classifies you as “not for this market.” And in GEO, classification is destiny.
In these verticals, the cost of Global Spanish is a liability exposure, compliance failure, and E-E-A-T erosion that compounds across every AI-generated interaction.
Making it operational
Frameworks are only useful if they translate into Monday morning actions. Here’s how to operationalize cultural SEO:
Week 1: Baseline audit
Re-run the Article 1 Spain vs. Mexico checks across your top five transactional queries.
Log mismatches (currency/format, jurisdiction, and register). This is your baseline.
Week 2-4: Technical foundation
Fix hreflang, canonicals, and structured data.
Ensure each market page canonicalizes to itself, carries correct priceCurrency and addressCountry, and has areaServed declarations.
Remove any IP-based redirects that might block AI crawlers.
Month 2-3: Content differentiation
Prioritize your highest-traffic market pages for transcreation.
Aim for at least 30% substantive content difference between regional variants — different examples, legal references, and local proof.
Month 3-6: Entity reinforcement
Build market-specific authority signals: local media coverage, directory listings, and partnerships.
Ensure your knowledge graph presence is consistent and market-specific.
Ongoing: QA and governance
Implement dialect stress tests across target markets.
Set up automated monitoring for jurisdiction bleed in any AI-generated or AI-surfaced content.
Establish an escalation path for YMYL content where market context can’t be confirmed.
Two metrics worth tracking from Day 1:
Market mismatch rate: Percentage of outputs with wrong jurisdiction, currency, or register.
Wrong-jurisdiction reference rate: Regulators or laws cited from the wrong country, YMYL pages only.
If you can measure those two consistently, you can prove the framework is working.
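A minimal sketch of how both rates might be computed from an audit log; the record fields are hypothetical and should match however you actually log AI-output reviews:

```python
# Minimal sketch: compute the two Day-1 metrics from reviewed outputs.
# Record fields (wrong_jurisdiction, wrong_currency, ...) are hypothetical.
def market_mismatch_rate(records: list[dict]) -> float:
    """Share of reviewed outputs flagged for wrong jurisdiction,
    currency, or register."""
    flagged = sum(
        1 for r in records
        if r["wrong_jurisdiction"] or r["wrong_currency"] or r["wrong_register"]
    )
    return flagged / len(records) if records else 0.0

def wrong_jurisdiction_rate(records: list[dict]) -> float:
    """Wrong-country regulators or laws cited, YMYL pages only."""
    ymyl = [r for r in records if r["is_ymyl"]]
    if not ymyl:
        return 0.0
    return sum(1 for r in ymyl if r["wrong_jurisdiction"]) / len(ymyl)
```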
A note on what actually matters
Everyone's talking about markdown formatting, llms.txt files, and structured data for AI. Some of that matters. But before chasing the latest optimization trick, review your documentation:
Help center.
Knowledge base.
Product docs.
That’s what LLMs are actually reading and what shapes whether an AI assistant recommends you or your competitor. If an LLM had to explain what your product does in the Mexican market based only on what’s public, would the answer be any good?
If not, you don’t have an AI optimization problem. You have a documentation problem.
The fix? Sit down and write clear, market-specific docs that both humans and machines can understand.
If you want a more structured approach, I’ve put together a cultural SEO checklist for Hispanic markets covering technical signals, content signals, entity signals, retrieval constraints, and QA governance.
Before moving on, run these five prompts through any LLM — once specifying Spain, and once specifying Mexico. The differences in the output should be intentional, not accidental (a scripted version follows the list):
“Explain how to request an invoice for an online purchase.”
“What ID number do I need to register as a freelancer?”
“Write a returns policy snippet for a €49.99 / $49.99 product.”
“Customer support reply: delayed delivery (mention dates and currency).”
“Best prepaid mobile plan — budget option.”
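The scripted version of that test, assuming the OpenAI Python client (any LLM API works; the model name and system prompt wording are illustrative):

```python
# Minimal sketch: run the same prompts per market and compare the outputs.
from openai import OpenAI

client = OpenAI()
PROMPTS = [
    "Explain how to request an invoice for an online purchase.",
    "What ID number do I need to register as a freelancer?",
]

for market in ("Spain", "Mexico"):
    for prompt in PROMPTS:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": f"Answer for a customer in {market}, in that market's Spanish."},
                {"role": "user", "content": prompt},
            ],
        )
        print(market, "|", prompt, "->", reply.choices[0].message.content[:120])
```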
If the answers are identical, the model is defaulting. If they differ but cite the wrong jurisdiction, you have a retrieval problem. Either way, now you know where to start.
A word of warning — for us
There’s an irony in this article that I don’t want to skip over.
We’re telling brands to stop treating Spanish as a monolith, build market-specific signals, and respect the difference between Madrid and Mexico City.
Then we go back to our desks and use ChatGPT to do keyword research “in Spanish.” We generate content briefs with tools that have the exact same geo-inference failures we just diagnosed. We run audits with AI assistants that default to the same “Global Spanish” we’re warning our clients about.
If the tools we use every day carry this bias, then every output we produce risks inheriting it — unless we’re actively correcting for it. That means specifying the market context in every prompt.
Don’t trust a “Spanish” keyword list that doesn’t distinguish between markets. Treat your own AI-assisted workflows with the same rigor you’d ask of your clients’ content architectures.
The “Global Spanish” problem is also in your own stack. If you’re not fixing it there first, you’re part of the pattern.
From global content to market-specific systems
The goal is to produce Spanish that is market-true. In 2026, “localized” is a systems milestone: routing, content, entities, retrieval, and QA all have to agree on the same country context — or the model will pick one for you.
If you want a definition of done for cultural SEO, it’s this: Spain and Mexico can ask the same question and get different answers for the right reasons — and your pages are the ones that stay eligible to be cited.
Think about the last time you binged those true crime documentaries. The next time you opened your streaming app, the homepage likely shifted. Investigative series rose to the top. Maybe a notification alerted you when a new series dropped. Promotional emails highlighted only what you hadn’t watched. You didn’t see the data parsing or the decisioning behind it. You just looked forward to enjoying the next title.
That's the standard. According to the Adobe 2025 AI and digital trends report, 71% of consumers want personalized — or personally relevant — offers and information, and 78% expect seamless experiences across channels. Yet fewer than half of brands consistently deliver.
The issue is structural. When customer data lives in disconnected systems, teams will struggle to align insight, timing, and execution quickly enough to take meaningful action. AI can’t magic the problem away. According to the Adobe 2026 AI and digital trends report, fewer than half of organizations say their data foundation is adequate to support AI at scale.
At the initial stages of the modernization journey, the path to personalization can feel daunting. But progress will be easier than you think when you introduce a foundation for a unified customer experience.
The real barrier to personalization: Disconnected journeys
Most brands have plenty of data. It’s cohesion they lack. Your marketing team likely runs email, web, mobile, paid media, support, and even in-person channels. Each collects important signals, but are they sharing context across channels fast enough to shape the next interaction?
If not, the impact is immediate. A customer browses a product online, then receives an email with a different price. Or a subscriber contacts support and has to repeat their story to multiple team members before getting help. Or a loyal customer happily purchases your product — only to see the same ads promoting it in their feed for weeks after.
Even minor bumps along the customer journey chip away at trust. Nearly half of customers say they disengage when promotions feel irrelevant or mistimed.
Delivering a unified customer experience requires continuously updating your understanding of each customer and then immediately sharing that insight across every department and touchpoint.
This can require substantial change. But taking the following steps makes the path ahead more straightforward:
Step 1: Build a unified customer profile
A unified experience starts with a single, living view of the customer.
Instead of keeping separate records for each channel, create a dynamic profile that reflects behavior, preferences, and history across all departments as customer activity happens in real time. Every click, purchase, service interaction, and loyalty update should feed into the same source of truth.
With that information, customer segmentation becomes smarter and messaging becomes more relevant. Customers stop receiving duplicative or contradictory communications. And performance can be more accurately measured across the full lifecycle.
This shift moves your marketing strategy from channel and campaign management to customer-first engagement. With a unified profile in place, teams respond to customers as individuals, not isolated events.
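As a minimal sketch of that idea: every channel writes into one living record keyed by customer ID. The event fields are hypothetical, and a production system would use a customer data platform rather than an in-memory dict.

```python
# Minimal sketch: fold per-channel events into one unified profile.
# Event shapes and field names are hypothetical.
from collections import defaultdict

profiles: dict[str, dict] = defaultdict(lambda: {"events": [], "preferences": {}})

def ingest(event: dict) -> None:
    """Web, email, support, and loyalty all feed the same source of truth."""
    profile = profiles[event["customer_id"]]
    profile["events"].append((event["ts"], event["channel"], event["type"]))
    profile["preferences"].update(event.get("preferences", {}))

ingest({"customer_id": "c-42", "ts": 1, "channel": "web", "type": "product_view"})
ingest({"customer_id": "c-42", "ts": 2, "channel": "support", "type": "ticket_opened"})
print(profiles["c-42"])  # one record, two channels, in arrival order
```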
Step 2: Connect insights to activation in real time
Accurate data doesn’t create value on its own. Those behavior signals must trigger action to shape meaningful engagement. Cart abandonment should prompt a quick follow-up (but not too quickly). Product recommendations should reflect recent browsing and past purchases. Irrelevant offers should be removed entirely. Journeys should evolve as preferences change.
Relevance largely depends on timing, and second chances don't come easily. Results from a Cognition Neuroscience Research project show the brain processes digital advertising in less than 400 milliseconds. Customers decide almost instantly whether a message applies to them. If systems can't recognize context and activate insight within that window, the moment passes — and so does the opportunity to connect.
AI supports this speed at scale. It identifies patterns in customer data, anticipates purchase intent, flags churn risk, and determines next-best actions within milliseconds. Its effectiveness, however, depends on accurate, unified data. Reliable inputs enable relevant outcomes.
Step 3: Scale securely in the cloud
Privacy expectations are rising, and protecting customer data is a top priority. As organizations unify more signals and activate them in real time, governance can’t be layered on later. It has to be built in from the start.
To sustain a unified customer experience at scale, organizations need a modern cloud foundation that allows teams to process and activate data where it lives, reduce latency, limit unnecessary movement, and strengthen security controls.
In the cloud, data ingestion and activation happen faster. Infrastructure grows alongside customer volume. Compliance frameworks are embedded, not bolted on. And technology teams spend less time maintaining custom connections and more time enabling innovation.
Make every interaction count
Personalization succeeds when brands are prepared for the right moment, not just the right message. When your data foundation is unified, activation happens in real time, infrastructure is more secure, and personalization stops feeling experimental. Instead, it becomes operational. And relevance becomes repeatable.
Adobe Experience Platform on Amazon Web Services (AWS) brings these elements together and simplifies execution for your teams. Adobe Experience Platform creates real-time customer profiles that power segmentation, analytics, and journey orchestration across touchpoints. Deployed natively on AWS, it runs on scalable infrastructure designed for speed, resilience, and security — while reducing technical maintenance and complexity.
Read the eBook, Capturing attention in the age of AI, to learn more about how Adobe and AWS provide the holistic view of your customer that marketers need to deliver personalization, build retention, and increase customer lifetime value.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
Landing that perfect SEO role starts long before the interview. It starts with getting past the digital gatekeepers standing between your resume and an actual human. And those gatekeepers are everywhere: research shows 75% to 98% of large employers use Applicant Tracking Systems (ATS) to screen resumes, and up to 75% of qualified candidates never […]
Search is changing fast, and not just inside Google. People discover brands on YouTube, TikTok, Reddit, Amazon, and now, of course, through AI tools like ChatGPT and Perplexity. If you want to stay valuable (and hireable) as an SEO in 2026, you need two things: That combination is how you go from “I know SEO” […]
The SEO landscape has evolved dramatically over the past decade, with professionals now commanding salaries ranging from $67,000 to $191,000, depending on their expertise and role. What once centered on keyword density and backlink quantity has transformed into a sophisticated discipline requiring technical chops, strategic thinking, and deep understanding of user behavior… and then we […]
You do not need a marketing degree or a fancy title (although each can help) to break into SEO. You need proof that you can actually move the needle. If you are willing to learn, ship real work, and show receipts, you can go from “no experience” to “getting paid to do SEO” in a […]
The SEO industry has evolved dramatically over the past decade, with specialized skills now commanding premium rates and attracting the most exciting projects. Many professionals find themselves at a crossroads, wondering whether to continue as generalists or focus their expertise on a specific area of search engine optimization. Making the transition from generalist to specialist […]
Trying to land an SEO role is not just “submit resume, get interview.” You are up against hundreds of applicants, most of whom can talk about title tags and content briefs. The resume is the filter. If you blow it, you’ll never even make it to a human. Agencies are a different animal. You are […]
SEO is one of those careers people either love or eventually run screaming from. On any given day you are trying to understand what Google just changed, why a client’s traffic tanked, and whether that one dev ticket from March is ever going to get shipped. You are expected to be technical, creative, political, and […]
How to Seamlessly Integrate AI Skills into Your Resume
AI is not a side note anymore. It is baked into how companies work, hire, and grow. If you can use AI to work smarter, that belongs on your resume. In a lot of cases it is the difference between getting an interview and getting filtered […]
The competition for top SEO talent has reached unprecedented levels, with companies scrambling to attract professionals who can navigate the ever-changing landscape of search algorithms and AI/LLM visibility. Writing a job description that stands out requires more than listing responsibilities—it demands a strategic approach that speaks directly to qualified candidates while ensuring maximum visibility in […]
In the world of digital marketing, careers in SEO (Search Engine Optimization) and PPC (Pay-Per-Click) advertising often converge and overlap. While the skill sets may differ, the ultimate goal remains the same: getting brands in front of the right audience at the right time. Whether you’re new to the field, looking to switch disciplines, or […]
THE·TEAM operates at the epicenter of sports, music and entertainment, serving talent, brands and properties on a global scale. Our brands and properties division works with iconic brands and rights holders, supporting business growth through all marketing disciplines. We’re a trusted partner to every major league, team and venue, building meaningful connections between brands, properties […]
Job Context The North America Director of Growth Marketing is the single-threaded owner of regional growth outcomes, accountable for end-to-end strategy, budget, and delivery of MQLs, pipeline, and efficient growth across assigned business units. This is a general manager–style leadership role that partners closely with Sales, Product, Finance, and shared services to drive scalable, high-performing […]
Vice President of Paid Media Overview The VP, Paid Media owns a ~$15M P&L spanning three pillars: Paid Media, Programmatic, and Creative. This role reports to the Managing Director, Performance, and is accountable for client outcomes, retention, and quality of work across all three pillars. The VP leads through Associate Directors who manage billable consultants […]
The Opportunity Adobe is seeking a Group Manager, Growth Marketing (Product-Led Growth & Experimentation) to lead and scale a high-impact team responsible for in-product experimentation across Acrobat. This role sits within the Retention and Value Discovery Product Marketing team and is at the center of driving material ARR & engagement impact through rigorous experimentation, targeted messaging, and deep partnership with Product […]
Directive Consulting is the performance marketing agency for SaaS and Tech companies. We use Customer Generation (a marketing methodology developed by us) which focuses on SQLs and Customers instead of traditional metrics like MQLs. We offer Paid Media, SEO/Content, CRO, and Video to our clients by creating comprehensive digital marketing strategies that allow our clients […]