
Today — 8 April 2026 · Search Engine Land

Google Ads adds “Results” tab to show impact of recommendations

8 April 2026 at 00:57

Google is giving advertisers new visibility into whether its automated recommendations actually drive performance — a long-standing blind spot in the platform.

What’s happening. A new “Results” tab within Recommendations shows the incremental impact of bidding and budget changes after they’ve been applied, allowing marketers to evaluate outcomes instead of relying on assumptions.

How it works. The feature attributes performance changes to specific recommendations, helping advertisers understand what effect adjustments like budget increases or bid strategy shifts had on results.

Why we care. Marketers can now validate whether recommendations improved performance, making it easier to decide which automated suggestions are worth adopting in the future.

Between the lines. Google has a vested interest in encouraging adoption of its recommendations, so providing performance data could build trust — but it also raises questions about how that impact is measured.

The catch. Advertisers may question whether the reported results are fully objective or skewed toward showing positive outcomes, given Google’s incentives.

What to watch. How detailed and transparent the reporting becomes — and whether advertisers see mixed or negative results alongside wins.

Bottom line. Google is moving from “trust us” to “here’s the proof,” but advertisers will be watching closely to see how impartial that proof really is.

First seen. This update was first spotted by Arpan Banerjee, who shared the new tab on LinkedIn.

Google Ads lets marketers reuse AI text rules across campaigns

8 April 2026 at 00:10

Google is giving advertisers more control over how AI generates ad copy, making it easier to scale campaigns without losing brand consistency.

What’s happening. Google Ads is rolling out a beta feature that allows marketers to copy text guidelines from existing campaigns and apply them to new ones, eliminating the need to rewrite brand rules from scratch.

How it works. Advertisers can replicate approved tone, style and messaging rules across campaigns in one click, ensuring AI-generated ads stay aligned with brand standards while reducing setup time.

Why we care. The feature helps teams launch campaigns faster by reusing what already works, while maintaining consistency across large accounts where multiple campaigns run simultaneously.

Between the lines. This shift reflects a growing demand from marketers to “train” AI systems rather than rely on them blindly, effectively turning brand guidelines into reusable inputs for automation.

Bottom line. AI is speeding up ad creation, but control is becoming the real differentiator — and Google is starting to hand more of it back to advertisers.

First spotted. Paid media expert Arpan Banerjee shared the alert on LinkedIn.

Google: AI ads driving up to 80% sales lift for some brands

7 April 2026 at 23:20

Google says its AI-powered advertising tools are starting to deliver meaningful results, including major revenue gains for some retailers, as it experiments with how ads work in AI-driven search.

The big picture. Fears that AI chatbots like ChatGPT would disrupt Google’s core search business haven’t materialized. Instead, the company’s ads business continues to grow, suggesting AI may be expanding how people search rather than replacing it.

By the numbers:

  • Alphabet Inc. surpassed $400 billion in revenue in 2025.
  • Q4 ad revenue: $82.28 billion (+13.5% YoY).
  • YouTube ads: $11.38 billion (+~9% YoY).

What’s happened. Google is embedding ads into its AI-powered search experiences, including AI Mode powered by Gemini. It is introducing new ad formats designed for conversational queries, plus tools that let brands shape how they appear in AI-generated answers. A new “business agent” feature enables companies like Poshmark and Reebok to control how their products are represented.

Driving the results. AI-driven campaigns like Performance Max and AI Max match ads to more detailed, conversational search intent. Google says queries in AI Mode are often two to three times longer than traditional searches, giving the system more context to connect users with relevant products. Aritzia, for example, reported an 80% increase in revenue after adopting AI Max.

How it works. The system scans a retailer’s website and creative assets, interprets user intent from conversational queries, and dynamically matches products and messaging in real time. This is increasingly important given that 15% of daily searches are entirely new (according to Google) and cannot be predicted through traditional keyword targeting.

Why we care. Google is shifting from keyword-based ads to intent-driven, AI-matched advertising, meaning campaigns can reach consumers with far more precision at the moment they’re ready to buy. As search becomes more conversational and unpredictable, advertisers who rely on traditional targeting risk falling behind those using AI-driven formats that automatically adapt to new user behavior.

Zoom in. Google is testing new formats such as “direct offers,” which deliver personalized promotions when users show purchase intent, using Gemini to analyze conversational context and behavior, with brands like E.l.f. Beauty, Chewy and L’Oréal participating in early trials.

Commerce push. Google is also advancing its commerce strategy through a Universal Commerce Protocol developed with Shopify, which allows purchases to happen directly within AI conversations.

Yes, but. Google is not alone in experimenting with ads in AI search, and early results across the industry have been mixed. Amazon has reportedly seen limited traction from ads in its AI shopping assistant, OpenAI continues to explore monetization models, and Perplexity AI has begun phasing out ads after underwhelming performance.

What they’re saying. Google positions itself as a “matchmaker” rather than a retailer, emphasizing that AI helps deliver more relevant and personalized ads while allowing brands to maintain control over their messaging and build user trust by showing the right product at the right moment.

What’s next. Google says it has no current plans to introduce ads directly into Gemini but will continue testing and expanding advertising within AI Mode, including more personalized offers and AI-driven shopping experiences.

Bottom line. AI is not replacing search but reshaping it, and for Google that shift is making advertising more conversational, more targeted and, in some cases, significantly more profitable.

Dig deeper. Google says its AI-powered ads help some brands lift online sales by 80%.

Sundar Pichai sees Google Search evolving into an ‘agent manager’

7 April 2026 at 23:05

Google Search is evolving beyond links and answers into a system that completes tasks, potentially fundamentally changing how users interact with the web. That’s according to Alphabet CEO Sundar Pichai, speaking on the Cheeky Pint podcast.

Why we care. Google is signaling a move from information retrieval to task execution.

Search becoming agentic. Traditional search behavior is already changing and will continue to, Pichai said.

  • “If I fast-forward, a lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.”

Pichai also described a future where Google Search acts less like a list of results and more like a system that coordinates actions:

  • “Search would be an agent manager in which you’re doing a lot of things. I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff. I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

AI Mode is already changing queries. Users are already adapting their behavior in Google’s AI-powered search experiences, Pichai said:

  • “But today in AI Mode in Search, people do deep research queries. That doesn’t quite fit the definition of what you’re saying. But people adapted to that. I think people will do long-running tasks.”

Search vs. Gemini overlap. Despite the rise of Gemini, Pichai said Google isn’t replacing Search with a chatbot. Instead, the two will coexist — and diverge (echoing what Liz Reid said last month):

  • “We are doing both Search and Gemini. They will overlap in certain ways. They will profoundly diverge in certain ways. I think it’s good to have both and embrace it.”

The interview. The history and future of AI at Google, with Sundar Pichai


Google CEO says searches will turn into multi-step tasks, with AI coordinating actions across tools instead of returning links and answers.

Google AI Overviews: 90% accurate, yet millions of errors remain: Analysis

7 April 2026 at 22:35

Google’s AI Overviews answered a standard factual benchmark correctly 91% of the time in February, up from 85% in October, according to a New York Times analysis with AI startup Oumi.

However, Google handles more than 5 trillion searches per year, which means tens of millions of answers every hour may still be wrong.
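The arithmetic behind that claim is rough but easy to check. A minimal sketch in Python, assuming searches are spread evenly across the year and that the 9% error rate applies to every query (an overstatement, since only a fraction of searches trigger an AI Overview):

```python
# Back-of-envelope check of the "tens of millions per hour" claim.
# Assumptions: even search volume across the year, and an AI Overview
# (with a 9% error rate) on every search, which overstates reality.
searches_per_year = 5e12              # "more than 5 trillion searches per year"
error_rate = 1 - 0.91                 # 91% accurate in February
hours_per_year = 365 * 24             # 8,760

searches_per_hour = searches_per_year / hours_per_year  # ~571 million
errors_per_hour = searches_per_hour * error_rate        # ~51 million

print(f"~{errors_per_hour / 1e6:.0f} million potentially wrong answers per hour")
```

Even if only one search in ten showed an AI Overview, that would still be roughly 5 million potentially wrong answers every hour.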

Why we care. We’ve watched Google shift from linking to sources to summarizing them for more than two years. This report suggests AI Overviews are improving, but still mix correct answers, weak sourcing, and clear errors in ways that can mislead searchers and reshape which publishers get visibility and clicks.

The details. Oumi tested 4,326 Google searches using SimpleQA, a widely used benchmark for measuring factual accuracy in AI systems, the Times reported. It found AI Overviews were accurate 85% of the time with Gemini 2 and 91% after an upgrade to Gemini 3.

  • The bigger problem may be sourcing. Oumi found that more than half of the correct February responses were “ungrounded,” meaning the linked sources didn’t fully support the answer.
  • That makes verification harder. The answer may be right, but the cited pages may not clearly show why.

What changed. Accuracy improved between October and February, but grounding worsened. In October, 37% of correct answers were ungrounded; in February, that rose to 56%.

Examples. The Times highlighted several misses:

  • For a query about when Bob Marley’s home became a museum, Google answered 1987; the correct year was 1986, according to the Times, and the cited sources didn’t support the claim or conflicted.
  • For a query about Yo-Yo Ma and the Classical Music Hall of Fame, Google linked to the organization’s site but still said there was no record of his induction.
  • In another case, Google gave the correct age at Dick Drago’s death but misstated his date of death.

Google’s response. Google disputed the Times analysis, saying the study used a flawed benchmark and didn’t reflect what people actually search. Google spokesperson Ned Adriance told the Times the study had “serious holes.”

  • Google also said AI Overviews use search ranking and safety systems to reduce spam and has long warned that AI responses can contain mistakes.

The report. How Accurate Are Google’s A.I. Overviews? (subscription required)

Yesterday — 7 April 2026 · Search Engine Land

Google starts showing sponsored ads in the Images tab on mobile search

7 April 2026 at 20:56

Google has begun placing sponsored ad units directly inside the Images tab of mobile search results — a new placement that eligible campaigns can access without any changes to existing keyword targeting.

What’s happening. When a user navigates to the Images tab within Google Search on mobile, they may now see sponsored units appearing within the image grid. Each unit shows a full image creative as the primary visual alongside text, and is clearly labeled “Sponsored” — consistent with how Google labels ads elsewhere in search results.

How it works. Eligible campaigns can serve into the Images tab without any changes to keyword targeting or campaign structure. The placement draws from existing image assets, meaning advertisers running Search or Performance Max campaigns with strong visual creative are best positioned to benefit. No separate image-only campaign setup is required.

Why we care. This is a meaningful expansion of Google’s paid search real estate. For product-led and catalog-heavy advertisers, the Images tab is where purchase-intent discovery often starts — and now ads can appear right in that moment. If your campaigns already use strong image assets, you may be picking up incremental impressions without lifting a finger.

The big picture. Early indications suggest this placement behaves more like a visual discovery surface than classic paid search. Expect high impression volume but lower click-through rates — more in line with display or Shopping than traditional text ads. That said, the assist value in multi-touch conversion paths could be significant, particularly for retail and direct-to-consumer brands. Treat it as upper-funnel reach, not a last-click channel.

What to watch. Google has not made a formal announcement, and there is no dedicated reporting breakdown for Images tab placements yet. Monitor your impression share and segment data closely to understand whether this placement is contributing — and whether it’s eating into organic image visibility for competitors.

First seen. The placement was spotted by Google Ads expert Matteo Braghetta, who shared the update on LinkedIn. Google had published no official documentation at the time of writing.

One in five ChatGPT clicks go to Google: Study

7 April 2026 at 20:51

Over 30% of outbound clicks go to just 10 domains, with Google alone taking more than 20%, according to a new Semrush study published today.

ChatGPT also relies less on the live web, triggering search on 34.5% of queries, down from 46% in late 2024.

The big picture. ChatGPT’s growth has plateaued, and its role in how users navigate the web is evolving unevenly.

  • Referral traffic from ChatGPT grew 206% from January 2025 to January 2026.

The details. Most ChatGPT referral traffic still goes to a small set of sites, even as more sites receive some traffic.

  • Google accounts for 21.6% of all ChatGPT referral traffic.
  • The next nine domains bring the top 10 to just over 30% of referrals.
  • Most other sites get a long tail of minimal traffic.
  • The number of domains receiving referrals expanded, peaking at around 260,000 in 2025 before settling near 170,000.

Why we care. Visibility in ChatGPT doesn’t translate evenly into traffic, and you’ll likely see marginal referral impact. The decline in search-triggered queries also limits your chances to earn citations and traffic.

When ChatGPT searches. It defaults to pre-trained knowledge and uses web search in specific cases, including:

  • User requests for sources.
  • Questions about recent events.
  • Situations where the model lacks confidence.

Behavior shift. Most ChatGPT prompts still don’t resemble traditional search queries.

  • Between 65% and 85% of prompts don’t match standard keywords, reflecting more complex, conversational inputs.
  • Meanwhile, engagement is deepening. Queries per session jumped 50% in late 2025.

About the data. Semrush analyzed more than 1 billion lines of U.S. clickstream data from October 2024 to February 2026 across a 200 million-user panel, tracking prompts, referral destinations, and search usage.

The study. ChatGPT traffic analysis: Insights from 17 months of clickstream data

New Google Maps features: Local Guides redesign, AI captions, photo sharing

7 April 2026 at 19:30

Google is rolling out new Google Maps features that make it easier to contribute photos, reviews, and local insights, while adding Gemini-powered caption suggestions.

Local Guides redesign. Contributor profiles are getting more visibility. Total points now appear more prominently, Local Guide levels are easier to spot, and badge designs have been refreshed.

  • Top contributors will also stand out more in reviews with new gold profile indicators.

AI caption drafts. Google is also introducing AI-generated caption drafts. Gemini analyzes selected images and suggests text you can edit or discard.

  • Caption suggestions are available in English on iOS in the U.S., with Android and broader global expansion planned.

Media sharing. Google Maps now shows recent photos and videos directly in the Contribute tab, making uploads faster.

  • If you enable media access, Google Maps will suggest images from your camera roll that are ready to post with a tap.
  • This feature is now live globally on iOS and Android.

Why we care. Google is making it easier to create and scale fresh local content, which can directly affect rankings and visibility. At the same time, stronger contributor signals may influence which reviews users trust and which businesses win clicks.


New Maps tools surface recent media, suggest camera roll uploads, and flag top reviewers with gold profiles, as Google expands AI captions.

How AI decides what your content means and why it gets you wrong

7 April 2026 at 19:00

Google once attributed two of Barry Schwartz’s Search Engine Land articles to me — a misclassification at the annotation layer that briefly rewrote authorship in Google’s systems.

For a few days, when you searched for certain Search Engine Land articles Schwartz had written, Google listed me as the author. The articles appeared in my entity’s publication list and were connected to my Knowledge Panel.

What happened illustrates something the SEO industry has almost entirely overlooked: that annotation — not the content itself — is the key to what users see and thus your success.

How Google annotated the page and got the author wrong

Googlebot crawled those pages, found my name prominently displayed below the article (my author bio appeared as the first recognized entity name beneath the content), and the algorithm at the annotation gate added the “Post-It” that classified me as the author with high confidence.

This is the most important point to bear in mind: the bot can misclassify as it annotates, and that classification defines everything the algorithms do downstream (in recruitment, grounding, display, and won). In this case, the issue was authorship, which isn’t going to kill my business or Schwartz’s.

But imagine it were a product, a price, an attribute, or anything else that matters to the intent of a user search query where your brand should be one of the obvious candidates. When any aspect of your content is inaccurately annotated, you’ve lost the “ranking game” before you even started competing.

Annotation is the single most important gate in taking your brand from discover to won, whatever query, intent, or engine you’re optimizing for.

What annotation is and why it isn’t indexing

Indexing (Gate 4) breaks your content into semantic chunks, converts it, and stores it in a proprietary format. Annotation (Gate 5) then labels those chunks with a confidence-driven “Post-It” classification system.

It’s a pragmatic labeler and attaches classifications to each chunk, describing:

  • What that chunk contains factually.
  • In what circumstances it might be useful.
  • The trustworthiness of the information.

Importantly, it’s mostly unopinionated when labeling facts, context, and trustworthiness. Microsoft’s Fabrice Canel confirmed the principle that the bot tags without judging, and that filtering happens at query time.

What does that mean? The bot annotates neutrally at crawl time, classifying your content without knowing what query will eventually trigger retrieval. 

Annotation carries no intent at all. It’s the insight that has completely changed my approach to “crawl and index.”

That clearly shows you that indexing isn’t the ultimate goal. Getting your page indexed is table stakes. Full, correct, and confident annotation is where the action happens: an indexed page that is poorly annotated is invisible to each of the algorithmic trinity.

The annotation system analyzes each chunk using one or more language models, cross-referenced against the web index, the knowledge graph, and the models’ own parametric knowledge. But it analyzes each chunk in the context of the page wrapper.

The page-level topic, entity associations, and intent provide the frame for classifying each chunk. If the page-level understanding is confused (unclear topic, ambiguous entity, mixed intent), every chunk annotation inherits that confusion. Even more importantly, it assigns confidence to every piece of information it adds to the “Post-Its.”
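To make the “Post-It” idea concrete, here is a minimal sketch of what a single chunk’s annotation might look like as a data structure. The field names are my illustration only; no engine publishes this schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a per-chunk "Post-It". All field names are
# hypothetical; they mirror the description above, not a real schema.
@dataclass
class ChunkAnnotation:
    facts: list[str]                 # what the chunk contains factually
    useful_for: list[str]            # circumstances in which it might be useful
    trustworthiness: float           # 0-1 score for the information itself
    page_topic: str                  # frame inherited from the page wrapper
    confidence: dict[str, float] = field(default_factory=dict)  # per-label confidence

chunk = ChunkAnnotation(
    facts=["Article X was written by Author Y"],
    useful_for=["authorship queries", "entity panels"],
    trustworthiness=0.8,
    page_topic="search marketing news",
    confidence={"facts": 0.92, "useful_for": 0.7},
)
```

Note how the page-level topic sits inside every chunk’s annotation: if that frame is wrong, every label inherits the error.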

The choices happen downstream: each of the algorithmic trinity (LLMs, search engines, and knowledge graphs) uses the annotation to decide whether to absorb your content at recruitment (Gate 6). Each has different criteria, so you need to assess your own content for its “annotatability” in the context of all three.

And a small but telling detail: Back in 2020, Martin Splitt suggested that Google compares your meta description to its own LLM-generated summary of the page. When they match, the system’s confidence in its page-level understanding increases, and that confidence cascades into better annotation scores for every chunk — one of thousands of tiny signals that accumulate.

Annotation is the key midpoint of the 10-gate pipeline, where the scoreboard turns on. Everything before it is infrastructure: “Can the system access and store your content?” Everything after it is competition.

Annotation is where you simply cannot afford to fail

When you consider what happens at the annotation gate and its depth, links and keywords become the wrong lens entirely. They describe how you tried to influence a ranking system, whereas annotation is the mechanism behind how the algorithmic trinity chooses the content that builds its understanding of what you are.

The frame has to shift. You’re educating algorithms. They behave like children, learning from what you consistently, clearly, and coherently put in front of them. With consistent, corroborated information, they build an accurate understanding.

Given inconsistent or ambiguous signals, they learn incorrectly and then confidently repeat those errors over time. Building confidence in the machine’s understanding of you is the most important variable in this work, whether you call it SEO or AAO.

“Confiance” (confidence) is the signal that drives how systems understand content. Slide from my SEOCamp Lyon 2017 presentation.

In 2026, every AI assistive engine and agent is that same child, operating at a greater scale and with higher stakes than Google ever had. Educating the algorithms isn’t a metaphor. It’s the operational model for everything that follows.

For a more academic perspective, see: “Annotation Cascading: Hierarchical Model Routing, Topical Authority, and Inter-Page Context Propagation in Large-Scale Web Content Classification.”

5 levels of annotation: 24+ dimensions classifying your content at Gate 5

When mapping the annotation dimensions, I identified 24, organized across five functional categories. After presenting this to Canel, his response was: “Oh, there is definitely more.”

Of course there are. This taxonomy is built through observation first, then naming what consistently appears. The [know/guess] distinctions follow the same logic: test hypotheses, eliminate what doesn’t hold up, and keep what remains.

The five functional categories form the foundation of the model. They are simple by design — once you understand the categories, the dimensions follow naturally. There are likely additional dimensions beyond those mapped here.

What follows is the taxonomy: the categories are directionally sound (as confirmed by Canel), while the specific dimension assignments reflect observed behavior and remain incomplete.

Level 1: Gatekeepers (eliminate)

  • Temporal scope, geographic scope, language, and entity resolution. Binary: pass or fail. 
  • If your content fails a gatekeeper (wrong language, wrong geography, or ambiguous entity), it is eliminated from that query’s candidate pool instantly. The other dimensions don’t come into play.

Level 2: Core identity (define)

  • Entities, attributes, relationships, sentiment. 
  • This is where the system decides what your content means:
    • Who is being discussed.
    • What facts are stated.
    • How entities relate.
    • What the tone is. 
  • Without clear core identity annotations, a chunk carries no semantic weight in any downstream gate.

Level 3: Selection filters (route) 

  • Intent category, expertise level, claim structure, and actionability. 
  • These determine which competition pool your content enters.
    • Is this informational or transactional? 
    • Beginner or expert? 
  • Wrong pool placement means competing against content that is a better match for the query, and you’ve lost before recruitment or ranking begins.

Level 4: Confidence multipliers (rank)

  • Verifiability, provenance, corroboration count, specificity, evidence type, controversy level, and consensus alignment. These scale your ranking within the pool. 
  • This is where validated, corroborated, and specific content outranks accurate but unvalidated content. 
  • The multipliers explain why a well-sourced third-party article about you often outperforms your own claims: provenance and corroboration scores are higher.
  • Confidence has a multiplier effect on everything else and is the most powerful of all signals. Full stop.

Level 5: Extraction quality (deploy)

  • Sufficiency, dependency, standalone score, entity salience, and entity role. These determine how your content appears in the final output. 
  • Is this chunk a complete answer, or does it need context? Is your entity the subject, the authority cited, or a passing mention? 
  • Extraction quality determines whether AI quotes you, summarizes you, or ignores you.
Five levels of annotation

Across all five levels, a confidence score is attached to every individual annotation. Not just what the system thinks your content means, but how certain it is.

Clarity drives confidence. Ambiguity kills it.

Canel also confirmed additional dimensions I had not initially mapped: audience suitability, ingestion fidelity, and freshness delta. These sit across the existing categories rather than forming a sixth level.

In 2022, Splitt named three annotation behaviors in a Duda webinar that map directly onto the five-level model. The centerpiece annotation is Level 2 in direct operation: 

  • “We have a thing called the centerpiece annotation,” Splitt confirmed, a classification that identifies which content on the page is the primary subject and routes everything else — supplementary, peripheral, and boilerplate — relative to it. 
  • “There’s a few other annotations” of this type, he noted. 

Annotation runs before recruitment, which means a chunk classified as non-centerpiece carries that verdict into every gate that follows. Boilerplate detection is Level 3: content that appears consistently across pages — headers, footers, navigation, and repeated blocks — enters a different competition pool based on its structural role alone. 

  • “We figure out what looks like boilerplate and then that gets weighted differently,” Splitt said.

Off-topic routing closes the picture. A page classified around a primary topic annotates every chunk relative to that centerpiece, and content peripheral to the primary topic starts its own competition pool at a disadvantage before Recruitment begins. 

Splitt’s example: a page with 10,000 words on dog food and a thousand on bikes is “probably not good content for bikes.” The system isn’t ignoring the bike content. It’s annotating it as peripheral, and that annotation is the routing decision.

The multiplicative destruction effect: When one near-zero kills everything

In Sydney in 2019, I was at a conference with Gary Illyes and Brent Payne. Illyes explained that Google’s quality assessment across annotation dimensions was multiplicative, not additive. 

Illyes asked us not to film, so I grabbed a beer mat and noted a simple calculation: if you score 0.9 across each of 10 dimensions, 0.9 to the power of 10 is 0.35. You survive at 35% of your original signal. If you score 0.8 across 10 dimensions, you survive at 11%. If one dimension scores close to zero, the multiplication produces a result close to zero, regardless of how well you score on every other dimension.

Payne’s phrasing of the practical implication was better than mine: “Better to be a straight C student than three As and an F.”

The beer mat went into my bag. The principle became central to everything I’ve built since.
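Here’s the beer-mat arithmetic as a minimal Python sketch; the ten dimensions and their scores are illustrative, not Google’s actual inputs:

```python
from math import prod

# Multiplicative quality assessment: each dimension score is in (0, 1].
def surviving_signal(scores: list[float]) -> float:
    """Multiply per-dimension scores: one near-zero sinks the whole product."""
    return prod(scores)

print(surviving_signal([0.9] * 10))           # ~0.35 -> 35% of the signal survives
print(surviving_signal([0.8] * 10))           # ~0.11 -> 11%
print(surviving_signal([0.95] * 9 + [0.01]))  # ~0.006 -> "three As and an F"
```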

The multiplicative destruction effect

The multiplicative destruction effect has a direct consequence for annotation strategy: the C-student principle is your guide. 

  • A brand with consistently adequate signals across all 24+ dimensions outperforms a brand with brilliant signals on most dimensions and a near-zero on one. The near-zero cascades. 
  • A gatekeeper failure (Level 1) eliminates the content entirely. 
  • A core identity failure (Level 2) misclassifies it so badly that high confidence multipliers at Level 4 are applied to the wrong entity. 
  • An extraction quality failure (Level 5) produces a chunk that the system can retrieve but can’t deploy usefully. The failure doesn’t have to be dramatic to be fatal.

At the annotation stage, misclassification, low confidence, or near-zero on one dimension will kill your content and take it out of the race.

Nathan Chalmers, who works on quality at Bing, told me something that puts this in a different light entirely. Bing’s internal quality algorithm, the one making these multiplicative assessments across annotation dimensions, is literally called Darwin.

Natural selection is the explicit model: content with near-zero on any fitness dimension is selected against. The annotations are the fitness test. The multiplicative destruction effect is the selection mechanism.

How annotation routes content to specialist language models

The system doesn’t use one giant language model to classify all content. It routes content to specialized small language models (SLMs): domain-specific models that are cheaper, faster, and paradoxically more accurate than general LLMs for niche content. 

A medical SLM classifies medical content better than GPT-4 would, because it has been trained specifically on medical literature and knows the entities, the relationships, the standard claims, and the red flags in that domain.

What follows is my model of how the routing works, reconstructed from observable behavior and confirmed principles. The existence of specialist models is confirmed. The specific cascade mechanism is my reconstruction.

The routing follows what I call the annotation cascade. The choice of SLM cascades like this:

  • Site level (What kind of site is this?)
  • Refined by category level (What section?)
  • Refined by page level (What specific topic?)
  • Applied at chunk level (What does this paragraph claim?)

Each level narrows the SLM selection, and each level either confirms or overrides the routing from above. This maps directly to the wrapper hierarchy from the fourth piece: the site wrapper, category wrapper, and page wrapper each provide context that influences which specialist model the system selects.

How annotation routes content to SLMs

The system deploys three types of SLM simultaneously for each topic. This is my model, derived from the behavior I have observed: annotation errors cluster into patterns that suggest three distinct classification axes. 

  • The subject SLM classifies by subject matter — what is this about? — routing content into the right topical domain. 
  • The entity SLM resolves entities and assesses centrality and authority: who are the key players, is this entity the subject, an authority cited, or a passing mention? 
  • The concept SLM maps claims to established concepts and evaluates novelty, checking whether what the content asserts aligns with consensus or contradicts it.

When all three return high confidence on the same entity for the same content, annotation cost is minimal, and the confidence score is very high. When they disagree (i.e., the subject SLM says “marketing,” but the entity SLM can’t resolve the entity, and the concept SLM flags the claims as novel), confidence drops, and the system falls back to a more general, less accurate model.
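As a purely illustrative sketch (the triad is my model, not a documented API), here’s how three specialist verdicts could gate the choice between specialist and generalist annotation:

```python
from dataclasses import dataclass

@dataclass
class SLMVerdict:
    label: str         # resolved subject, entity, or concept
    confidence: float  # 0-1

# Hypothetical routing: the names and the threshold are assumptions.
def route(subject: SLMVerdict, entity: SLMVerdict, concept: SLMVerdict,
          threshold: float = 0.8) -> str:
    if all(v.confidence >= threshold for v in (subject, entity, concept)):
        return f"specialist annotation: {subject.label} (high confidence, low cost)"
    # Disagreement or low confidence: fall back to a general model, which
    # produces lower confidence across all dimensions.
    return "generalist annotation (lower confidence downstream)"

print(route(SLMVerdict("marketing", 0.90),
            SLMVerdict("Acme Corp", 0.92),
            SLMVerdict("brand positioning", 0.88)))
print(route(SLMVerdict("marketing", 0.90),
            SLMVerdict("(unresolved entity)", 0.30),
            SLMVerdict("novel claim", 0.40)))
```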

The key insight? LLM annotation is the failure mode. The system wants to use a specialist. It defaults to a generalist only when it can’t route to a specialist. Generalist annotation produces lower confidence across all dimensions. 

The practical implication 

Content that’s category-clear within its first 100 words, uses standard industry terminology, follows structural conventions for its content type, and references well-known entities in its domain triggers SLM routing. 

Content that’s topically ambiguous or terminologically creative gets the generalist. Lower confidence propagates through every downstream gate.

Now, this may not be the exact way the SLMs are applied as a triad (and it might not even be a trio). However, two things strike me:

  • Observed outputs behave as if it works this way.
  • If it doesn’t function this way yet, it’s the architecture you’d expect the systems to converge on.

First-impression persistence: Why the initial annotation is the hardest to correct

Here is something I’ve observed over years of tracking annotation behavior. It aligns with a principle Canel confirmed explicitly for URL status changes (404s and 301 redirects): the system’s initial classification tends to stick.

When the bot first crawls a page, it selects an SLM, runs the annotation, assigns confidence scores, and saves the classification. The next time it crawls the same page, it logically starts with the previously assigned model and annotations. I call this first-impression persistence. 

The initial annotation is the baseline against which all subsequent signals are measured. The system doesn’t re-evaluate from scratch. It checks whether the new crawl is consistent with the existing classification, and if it is, the classification is reinforced.

Canel confirmed a related mechanism: when a URL returns a 404 or is redirected with a 301, the system allows a grace period (very roughly a week for a page, and between one and three months for content, in my observation) during which it assumes the change might revert. After the grace period, the new state becomes persistent. I believe the same principle applies to content classification: a window of fluidity after first publication, then crystallization.
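Sketched as logic in Python (the grace-period length is my rough observation, and the reinforcement step is a pure assumption for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class StoredAnnotation:
    classification: str
    confidence: float
    pending_since: Optional[datetime] = None  # open grace period, if any

GRACE_PERIOD = timedelta(weeks=1)  # "very roughly a week for a page"

# Hypothetical sketch of first-impression persistence on re-crawl.
def recrawl(stored: StoredAnnotation, fresh_classification: str,
            fresh_confidence: float, now: datetime) -> StoredAnnotation:
    if fresh_classification == stored.classification:
        stored.confidence = min(1.0, stored.confidence + 0.05)  # baseline reinforced
        stored.pending_since = None
    elif stored.pending_since is None:
        stored.pending_since = now  # the change might revert: open a grace period
    elif now - stored.pending_since > GRACE_PERIOD:
        stored.classification = fresh_classification  # crystallize the new state
        stored.confidence = fresh_confidence
        stored.pending_since = None
    return stored
```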

I have direct evidence for the correction side from the evolution of my own terminologies. When I first described the algorithmic trinity, I used the phrase “knowledge graphs, large language models, and web index.” Google, ChatGPT, and Perplexity all picked up on the new term and defined it correctly.

A month later, I changed the last one to “search engine” because it occurred to me that the web index is what all three systems feed off, not just the search system itself. At the point of correction, I had published roughly 10 articles using the original terminology. 

I went back and invested the time to change every single one, updating every reference, leaving zero traces. A month later, AI assistive engines were consistently using “search engine” in place of “web index.”

The lesson is that change is possible, but you need to be thorough: any residual contradictory signal (one old article, one unchanged social post, one cached version) maintains inertia proportionally. Thoroughness, rather than time, is the unlock.

First-impression persistence

A rebrand, career pivot, or repositioning is the practical example. You can change the AI model’s understanding and representation of your corporate or personal brand, but it requires thoroughly and consistently pivoting your digital footprint to the new reality.

In my experience, the pivot can happen “on a sixpence,” within a week. I’ve done this with my podcast several times. Facebook achieved the ultimate rebrand from an algorithmic perspective when it changed its name to Meta.

The practical implication

Get your annotation right before you publish. The first crawl sets the baseline. A page published prematurely (with an unclear topic or ambiguous entity signals) crystallizes into a low-confidence annotation, and changing it later requires significantly more effort than getting it right the first time.

Annotation-time grounding: The bot cross-references three sources while classifying your content

The system doesn’t annotate in a vacuum. When the bot classifies your content at Gate 5, it cross-references against at least three sources simultaneously. This is my model of the mechanism. The observable effect — that annotation confidence correlates with entity presence across multiple systems — is confirmed from our tracking data.

The bot carries prioritized access to the web index during crawling, checking your content against what it already knows: 

  • Who links to you.
  • What context those links provide.
  • How your claims relate to claims on other pages. 

Against the knowledge graph, it checks annotated entities during classification — an entity already in the graph with high confidence means annotation inherits that confidence, while absence starts from a much lower baseline. 

The SLM’s own parametric knowledge provides the third cross-reference: each SLM compares encountered claims against its training data, granting higher confidence to claims that align, flagging contradictions, and giving lower confidence to novel claims until corroboration accumulates.

This means annotation quality isn’t just about how well your content is written. It’s about how well your entity is already represented across all three of the algorithmic trinity. An entity with strong knowledge graph presence, authoritative web index links, and consistent SLM-domain representation gets higher annotation confidence on new content automatically. 

The flywheel: better presence leads to better annotation, which leads to better recruitment, which strengthens presence, and which improves future annotation.

Once again, better to have an average presence in all three than to have a dominant presence in two and no presence in one.
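A toy way to express that: treat each leg of the trinity as a 0-to-1 prior and blend them multiplicatively, echoing the destruction effect above. Illustrative only; the real weighting is unknown:

```python
# Toy model: annotation confidence as the product of three priors.
def annotation_confidence(web_index: float, knowledge_graph: float,
                          parametric: float) -> float:
    return web_index * knowledge_graph * parametric

print(annotation_confidence(0.6, 0.6, 0.6))    # average everywhere: ~0.22
print(annotation_confidence(0.9, 0.9, 0.05))   # dominant in two, absent in one: ~0.04
```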

The annotation flywheel

And this is why knowledge graph optimization (what I’ve been advocating for over a decade) isn’t separate from content optimization. They are the same pipeline. Your knowledge graph presence directly improves how accurately, verbosely, and confidently the system annotates every new piece of content you publish.

If you’re thinking “Knowledge graph? That’s just Google,” think again.

In November 2025, Andrea Volpini intercepted ChatGPT’s internal data streams and found an operational entity layer running beneath every conversation: structured entity resolution connected to what amounts to a product graph mirroring Google Shopping feeds. 

OpenAI is building its own knowledge graph inside the LLM. My bet is that they will externalize it, for several reasons:

  • A knowledge graph inside an LLM doesn’t scale.
  • An LLM will self-confirm, so the value is limited.
  • A standalone knowledge graph can be updated in real time without retraining the model.
  • It’s only useful at scale when it stays current.

The algorithmic trinity isn’t a Google phenomenon. It’s the architectural pattern every AI assistive engine and agent converges on, because you can’t generate reliable recommendations without a concept graph, structured entity data, and up-to-date search results to ground them.

Why Google and Bing annotate differently from engines that rent their index

Google and Bing own their crawling infrastructure, indexes, and knowledge graphs. They can afford grace periods, schedule rechecks, and maintain temporal state for URLs and entities over months.

OpenAI, Perplexity, and every engine that rents index access from Google or Bing operate on a fundamentally different model. They have two speeds: 

  • A slow Boolean gate (Does this content exist in the index I have access to?)
  • A fast display layer (What does the content say right now when I fetch it for grounding?)

The Boolean gate inherits Google’s and Bing’s annotations. Whether your content appears at all depends on whether it was recruited from the index those engines draw from, and that recruitment depends on annotation and selection decisions made by the algorithmic trinity. But what these engines show when they cite you is fetched in real time.

The practical implication

For Google and Bing, you’re optimizing for annotation quality with the benefit of grace periods and gradual reclassification. For engines that don’t own their index, the Boolean presence is inherited from the rented index and is slow to change, but the surface-level display changes every time they re-fetch.

That means what you are seeing in the results is not a direct measure of your annotation quality. It’s a snapshot of your page at the moment of fetch, and those two things may have nothing to do with each other.

How to optimize for annotation quality: The six practical principles

The SEO industry has spent two decades optimizing for search and assistive results — what happens after the system has already decided what your content means. We should be optimizing for annotation. 

If the annotation is wrong, everything downstream suffers. When the annotation is accurate, verbose, and confident, your content has a significant advantage in recruitment, grounding, display, and, ultimately, won.

1. Trigger SLM routing

Make your topic category obvious within the first 100 words. Use standard industry terminology. Follow structural conventions. Reference well-known entities. The goal: specialist model, not generalist.

2. Write for all three SLMs

Clear signals for subject (what is this about?), entity (who is the authority?), and concept (what established ideas does this connect to?). Ambiguity on any axis reduces confidence.

3. Get it right before publishing

First-impression persistence means the initial annotation is the hardest to change. Publish only when topic, entity signals, and claims are unambiguous.

4. Build the flywheel

Knowledge graph presence, web index centrality, LLM parameter strengthening, and correct SLM-domain representation all feed annotation confidence for new content. Invest in entity foundation, and every future piece benefits from inherited credibility.

5. Eliminate noise when correcting

Change every reference. Leave zero contradictory signals. Noise maintains inertia proportionally.

6. Audit for annotation, not just indexing

A page can be indexed and still misannotated. If the AI response is wrong about you, the problem is almost certainly at Gate 5, not Gate 8.

How to optimize for annotation quality

Annotation is the gate where most brands silently lose. The SEO industry doesn’t yet have a vocabulary for it. That needs to change, because the gap between brands that get annotation right and brands that don’t is the gap between consistent AI visibility and permanent algorithmic obscurity.

Why annotation matters so much and why it should be your main focus

You’ve done everything within your power to create the best possible content that maps to the intent of your ideal customer profile. You have methodically optimized your digital footprint. Your data feeds every entry mode simultaneously (pull, push discovery, push data, MCP, and ambient), so they are all drawing from the same clean, consistent source.

So, content about your brand has passed through the DSCRI infrastructure phase, survived the rendering and conversion fidelity boundaries, and arrived in the index (Gate 4) intact. Phew!

Now it gets classified. Annotation is the last moment in the pipeline where you have the field to yourself. Every decision in DSCRI was absolute: you vs. the machine, with no competitor in the frame. 

Annotation is still absolute. The system classifies your content based on your signals alone, independently of what any competitor has done. Nobody else’s data changes how your entity is annotated.

But this is the last time you aren’t competing. From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in the competitive race you have to win.

That means: 

  • Get annotation right, and you start ahead, with confidence that compounds through every downstream gate in RGDW. 
  • Get it wrong, and the multiplicative destruction effect does its work — a near-zero on one annotation dimension cascades through recruitment, grounding, display, and won. No amount of excellent content, structural signals, or entry-mode advantage recovers it.

Warning: First-impression persistence (remember, the first time you are annotated is the baseline) means you don’t get a clean retry. Changing the baseline requires thoroughness, time, and more effort than getting it right on the first crawl.

Annotation isn’t the gate that most brands focus on. It’s the gate where most brands silently lose.

This is the eighth piece in my AI authority series. 

5 priorities for lead gen in AI-driven advertising

7 April 2026 at 18:00

Many of today’s PPC tools were designed with ecommerce in mind. That doesn’t mean lead gen can’t take advantage of them, but it does mean more intentional application is required.

Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply — but not always in the same way.

Here are the priorities that matter most for succeeding with lead gen using AI.

Disclosure: I’m a Microsoft employee. While this guidance is platform-agnostic, I’ll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.

1. Fix your conversion data first

This is the single most important thing you can do as AI becomes more embedded in media buying.

Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it’s reasonable to ask whether your data is still telling an accurate story.

Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.

In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:

  • Confirm conversions are firing consistently.
  • Regularly review conversion goal diagnostics.
  • Validate that lead status updates and downstream signals are actually flowing back.

If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.
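As one concrete illustration, here is a hedged Python sketch of the third check: flag leads whose status never synced back to the platform within 30 days. The CSV column names are assumptions about a generic CRM export, not a standard:

```python
import csv
from datetime import datetime, timedelta

# Hypothetical audit: the columns "lead_id", "created_at" (ISO 8601), and
# "platform_synced_status" are stand-ins for whatever your CRM exports.
def stale_leads(path: str, days: int = 30) -> list[str]:
    cutoff = datetime.now() - timedelta(days=days)
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            created = datetime.fromisoformat(row["created_at"])
            if created < cutoff and not row.get("platform_synced_status"):
                flagged.append(row["lead_id"])
    return flagged

# Usage: stale_leads("crm_export.csv") returns lead IDs to investigate
# before the ad platform's AI learns from an incomplete feedback loop.
```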

Dig deeper: How to make automation work for lead gen PPC

2. Make landing pages easy to ingest and easy to understand

Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.

Your landing pages should make it clear:

  • What action you want the user to take.
  • What happens after action is taken.
  • Which conversions matter most.

Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that anticipated outcomes are inconsistent, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.

Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.

A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.

You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you’re in a good place. If it doesn’t, that’s a signal to refine your content.

Behavioral analytics tools, like Clarity, can help you understand exactly how humans are engaging with your site and how often AI tools are crawling your site.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

3. Budget across the entire funnel

Lead gen has always struggled with long conversion cycles. That challenge doesn’t go away, and in some ways, it becomes more pronounced.

AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.

That means:

  • Budgeting intentionally across awareness, consideration, and conversion.
  • Applying the right metrics at each stage.
  • Looking beyond traffic as the primary success indicator.

In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.

Dig deeper: Lead gen PPC: How to optimize for conversions and drive results

4. Clean up your feeds and map data

You may not think you have a “feed” in your lead gen setup, but that absence can put you at a disadvantage.

Feeds help AI systems understand your business structure, services, and site architecture. Even if you don’t have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.

Example of a feed for lead gen
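As a purely hypothetical illustration (follow your platform’s actual column spec), a lead gen feed can be as simple as:

  Service          | Category   | Description                            | Landing page URL     | Image
  Tax preparation  | Accounting | Individual and small-business returns  | example.com/tax-prep | tax-prep.jpg
  Payroll setup    | Accounting | Payroll onboarding for teams under 50  | example.com/payroll  | payroll.jpg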

Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.

On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.

Account for potential AI-driven inflation in reporting, whether you’re looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.

5. Pressure-test your creative for clarity

Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.

If your value proposition requires three headlines, or a headline plus a description, to make sense, that’s a risk.

Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:

  • What you do
  • Who you help
  • Why it matters

If that clarity isn’t there, AI-driven placements can quickly become confusing.

Dig deeper: Why creative, not bidding, is limiting PPC performance

The fundamentals that still move the needle

Lead gen today doesn’t need to be complicated.

Most of the actions that matter today are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution may shift, and how much weight systems place on different signals.

The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.

If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business — and that’s where sustainable performance comes from.

The Mad Men era of SEO: Why AI is shifting search to persuasion

7 April 2026 at 17:00

For most people, “Mad Men” means the TV show. But the phrase points to something more specific: Madison Avenue in the 1950s and ‘60s, when agencies grew brands through persuasion, positioning, and earned trust in a world of scarce media channels and powerful gatekeepers. If you wanted attention, you bought your way in, then made your product the obvious choice.

When the internet arrived and Google made the chaos navigable, an entire industry was built on getting brands found. Search and SEO became one of the most commercially valuable disciplines in marketing.

That model isn’t disappearing. But something new is taking shape on top of it — and most of the industry is still using the wrong language to describe what’s happening.

AI is exposing everything SEO has neglected. Brands that win recommendations from AI systems won’t do so by publishing more content. They’ll win through positioning, persuasion, and corroborated proof.

In other words, they’ll win the way Madison Avenue always did.

SEO was never really about content

One of the strangest things about the current industry conversation is how many people talk as if the job of SEO is to create content. It isn’t. Not for most businesses.

If you’re a publisher, content is the product. Traffic is the commercial engine. But for most brands, content never did what people thought.

Early on, people wrote content for customers, and it worked. Then it changed. Content became a keyword vehicle. “Get people to our site” replaced good marketing comms.

Traffic became a proxy for exposure. It worked because search rewarded retrieval: type a query, get a page, get a click. All you needed to sell that model was the belief that any traffic was good traffic, and that the traffic somehow led to revenue your agency could keep delivering.

That model is now under serious pressure. 

Google and ChatGPT are increasingly taking the click. Every serious large language model is trying to satisfy informational intent before the user reaches the source. They aren’t trying to be better search engines. They’re trying to make search engines unnecessary — and that’s the entire point.

There’s too much information on the web. People don’t want to open 10 tabs and read five near-identical blog posts to find a basic answer. They want the answer. The AI systems exist precisely to give it to them.

So if informational retrieval gets absorbed into the interface, what remains? Marketing. That’s the part many SEOs are still not fully grappling with.

Dig deeper: The three AI research modes redefining search – and why brand wins

From place to preference

The cleanest way to understand this shift is through the “4 Ps” of marketing: product, price, place, and promotion.

Traditional SEO has been, almost entirely, a place discipline. It’s been about getting your products, services, or information onto the digital shelf when people go looking.

Keyword rankings are shelf position. Paid search is just a more expensive version of the same principle. In commercial search, you pay for premium placement in a digital aisle.

That still matters enormously.

Buyer-intent search remains valuable. Google hasn’t solved its commercial transition to a fully AI-led interface, and won’t overnight. Search is too important to Google’s revenue to disappear fast. But another layer is emerging above it, and this is the layer that most agencies aren’t yet equipped to compete on.

As AI systems become the first interaction point for more users, the game shifts from being present to being preferred.

Users don’t just search. They ask. They describe a problem. They want the best CRM for a mid-market SaaS company, the best estate agent in their area, the best sandwich shop near the office. And the system responds with recommendations.

If classic SEO was about rankings, the next phase is about recommendations. If classic SEO was about digital placement, the next phase is about shaping preference. And recommendation, in practice, is advertising.

Not a display banner. Not a 30-second TV spot. But advertising in the oldest and most commercially powerful sense: influencing the choice someone makes before they’ve even consciously made it.

An AI-generated recommendation is an invisible ad unit. It doesn’t bill by impression.

Why AI recommendations hit differently

When an LLM recommends a brand, it can’t know with certainty what will work best. So it infers. It weighs signals: past success, prominence, reviews, case studies, corroborating sources, and repeated associations between a brand and a specific type of problem.

Humans do something almost identical. 

Where performance is clearly bounded, we can identify a winner. We know who won the Oscar. We know which film topped the box office.

But when performance isn’t obvious in advance, we rely on proxies. We ask friends, read reviews, and scan for authority. We use familiarity, logic, and social proof to estimate what is likely to be right.

That’s exactly the territory AI recommendation is now entering — the consideration set problem. If I ask an LLM to find me a reliable accountant for a small business, I’m not asking it to retrieve a blog post. I’m asking it to build me a shortlist. 

Unlike traditional search, the recommendation layer is invisible to brands unless they test for it actively. You don’t see the prompt or the source chain. You don’t even know why one brand made the cut and another didn’t.

But the commercial effect is real, possibly stronger than anything traditional search produced. If you’re in the recommendation set, you’re in the running. If you’re absent, you’ve lost the sale before the conversation started.

Dig deeper: Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Your website is now an argument for preference

The first practical consequence: your website can no longer function like a polite digital brochure. Despite being optimized for search, many commercial web pages simply:

  • Introduce the company.
  • Gesture vaguely at services.
  • Bury differentiation under generic corporate language.
  • Treat the page as an endpoint for a ranking rather than a persuasive asset.

In short, they’re weak where it matters most: actual selling.

In the Mad Men era of SEO, your landing pages and service pages need to function like sales pages, not in a cheesy direct-response way, but in the strategic sense that they must clearly answer four things:

  • Who is this for?
  • What problem does it solve?
  • Why is it different?
  • Why choose it over the alternatives?

This comes down to positioning, which is key to GEO. If seven brands do broadly the same thing, the model needs distinctions. It needs enough clarity to say: this brand is best for X kind of buyer with Y kind of problem because it does Z better than everyone else.

Your website copy must surface real performance attributes: the specific things you genuinely do better or more distinctively than competitors. Your pages must become machine-readable arguments for preference.

Copywriting is back

Actual commercial copywriting — not fluffy brand storytelling or word count for its own sake — identifies a target customer, sharpens the problem, articulates the value, and makes the offer easy to recommend.

Good copy isn’t optional.

Take a local sandwich shop. The old SEO conversation runs to “best sandwich near me,” local pack, and review acquisition. It’s useful, but limited. 

The GEO version starts with the shop’s actual performance attributes. 

  • Is it the speed? 
  • The handmade bread? 
  • The office catering? 
  • The locally sourced produce?

Those claims must be clear on the website first. Then they need corroboration everywhere else:

  • Reviews that mention the sourdough specifically.
  • A local food blogger’s write-up.
  • Inclusion in “best lunch spots” roundups.

They’re specific, repeated, retrievable evidence of why this shop is the right recommendation for a particular type of customer.

Scale that logic to a B2B software company, and the principle holds. Build pages that clearly explain who the product is for, which problems it solves, and why it outperforms rivals. Then earn mentions, customer reviews, and trade-press coverage — the body of evidence that supports recommending you to buyers — and let the AI find it.

That’s pretty much GEO in a nutshell.

Keywords don’t disappear, but they lose their throne

Keywords are a human workaround. Approximations of intent, built for a retrieval system that needed exact string matching. LLMs process fuller context, layered needs, and comparative requirements. They move from keyword matching toward problem understanding.

Keyword research still matters for classic search, paid search, and buyer-intent pages. But the center of gravity shifts.

Instead of asking only “what terms should we rank for?”, the better question is: what attributes make us the right recommendation for the buyer we actually want, and what evidence exists across the web to support that claim?

The future of SEO is starting to look like the old agency model, as the work is increasingly promotional. Once your website clearly expresses your positioning, the challenge becomes promoting that position across the wider web through credible, repeated, relevant signals.

  • Digital PR. 
  • Traditional PR. 
  • Expert commentary. 
  • Case studies. 
  • Reviews. 
  • Listicles.
  • Awards. 
  • Trade press.
  • Brand mentions. 
  • Conference speaking. 
  • Events. 
  • Creator coverage. 
  • Product comparisons. 
  • Original data studies that other people actually cite. 

These are the things you go after, create, and encourage. Sadly, many “AI visibility” conversations flatten this into nonsense.

The goal isn’t merely to have content cited by AI. It’s to gather enough market evidence that AI systems repeatedly encounter your brand in the right contexts, with the right associations.

The work stops being optimization and becomes maximization: building the largest possible volume of persuasive, corroborated, retrievable evidence that your brand is a sensible recommendation for a specific kind of buyer.

That’s a fundamentally different model from anything the SEO industry has been selling. It’s promotional and strategic brand marketing.

Dig deeper: How to design content that AI systems prefer and promote

Where SEO still fits

SEOs need to grow up. There’s still significant value in buyer-intent search, technical site architecture, entity clarity, internal linking, and structured data. SEOs are well placed to monitor recommendation environments, test prompts, and identify where visibility is being won or lost.

But the identity crisis is real. Many agencies were built for a world of rankings, informational blogs, and monthly traffic graphs. They aren’t equipped to lead a world defined by positioning, copy, PR, brand evidence, and recommendation science.

Tracking brand citations inside AI outputs isn’t a complete strategy. It’s a temporary metric. 

The new agency model

Winning agencies look like hybrid commercial strategy firms: part SEO, part copywriting, part PR, part brand strategy, part technical infrastructure. They know how to protect buyer-intent search revenue today while building the fame, clarity, and corroborated authority that earns recommendation tomorrow.

This is the Mad Men model of SEO. Persuasion, positioning, and clear claims backed by public proof matter again. And the job is to become recommended by AI.

Google, Meta, and the long history of misaligned incentives in paid media

7 April 2026 at 16:00

I’m getting a mid-career executive MBA. Last week, in class, we discussed the interaction between automation and advertising. The lecture covered why A/B testing in Meta is less valuable now, since Facebook can auto-optimize faster and better than marketers can on their own.

A classmate took the logical leap and asked the professor, “If digital channels have more data and more processing power, why don’t advertisers just give them a URL and a credit card and let them go wild?”

The argument has real merit. Google, Meta, and LinkedIn have access to more data than any agency ever will. Their optimization engines are improving fast. Handing them a budget and a URL and walking away isn’t entirely crazy.

But that means we’d need to have faith in the channels to optimize media in a business’s best interests, and there’s a long, proud history of that not being the case.

1. The opt-in that wasn’t

About six years ago, we met with a Google rep who pitched a product that introduced broader, more aggressive targeting and bidding. We listened to the pitch and said no. We didn’t want to try it. The rep turned it on anyway.

What happened next was what we predicted. The campaigns spent significantly more money and didn’t generate any additional conversions.

We had to comp the client for the wasted spend, which was bad enough. But what made it worse was the principle of the thing: we hadn’t agreed to this. Google made unauthorized changes to our account.

When I tried to get the money back, Google’s position was that we’d set our campaign budgets at a certain level, and they were within their rights to spend up to that amount. That framing ignores that a budget cap is a ceiling, not an invitation. 

Our agency methodology is to never hit a budget cap. We set those numbers based on the strategy we’d approved, not the one they decided to test. I hounded them for weeks, but never got any resolution. It still makes me angry.

The reps were clearly incentivized to get adoption of the new feature. When it didn’t work, there was no accountability and no recourse. We were left covering the cost of a decision we explicitly declined.

What’s being misrepresented

Budget caps were treated as implicit consent to spend. A product we declined was activated without authorization, and when it failed, the platform pointed to our own settings as justification.

The incentive structure rewarded the reps for turning it on. There was no corresponding mechanism to make the advertiser whole when it didn’t work.

Dig deeper: Google rep’s unauthorized ad changes spark advertiser concerns

2. The profit maximization pitch

This was years ago, on a successful retainer account. A pair of senior Google reps sat across from us and asked what our client’s gross margin was. Around 50%, we said. They went to the whiteboard and wrote out: if overall revenue/2 – overall media cost >= 0, then we should keep spending money on ads.

On the surface, the math sounds right. In practice, it has two problems.

  • It assumes the reported conversions are incremental, meaning they wouldn’t have happened without the paid ad. A substantial portion of any Google campaign’s reported conversions, particularly in brand and retargeting, come from users who were already going to convert.
  • The model assumes a flat cost curve, where the 500th conversion costs the same as the 50th. It does not. Marginal returns fall as you scale. The last dollars of spend are always the least efficient, but they’re exactly what this pitch is designed to help Google access. (They should have said marginal revenue/2 – marginal cost = 0 is profit maximization; see the sketch after this list.)
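
To see the gap between the two rules, here is a minimal sketch in Python under an invented, concave response curve (the curve, margin, and spend levels are hypothetical, not data from any real account):

```python
# Why "overall revenue/2 - overall media cost >= 0" justifies overspend.
# Hypothetical numbers throughout; the point is the shape, not the values.

GROSS_MARGIN = 0.5   # the 50% margin from the whiteboard pitch
STEP = 1_000         # spend increment used to estimate marginal revenue

def revenue(spend: float) -> float:
    # Concave response curve: each extra dollar buys less than the last.
    return 2_000 * spend ** 0.5

for spend in (100_000, 250_000, 500_000, 1_000_000):
    rep_test = revenue(spend) * GROSS_MARGIN - spend  # "keep spending" while >= 0
    next_1k = (revenue(spend + STEP) - revenue(spend)) * GROSS_MARGIN - STEP
    print(f"${spend:>9,}: rep's test = {rep_test:>9,.0f}, profit on next $1k = {next_1k:>6,.0f}")

# On this curve, the rep's test stays non-negative all the way to $1,000,000,
# while the marginal test turns negative just past $250,000.
```

The whiteboard rule keeps you spending as long as the average looks acceptable; the marginal rule stops you when the next dollar stops paying for itself.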

What’s being misrepresented

The model treats all reported conversions as incremental and assumes cost per conversion is constant across spend levels. Both assumptions are wrong, and together they can justify significant overspend.

3. The ‘higher CPCs buy better clicks’ pitch

This one still happens all the time. The pitch is that if you raise your CPCs, you’ll get access to higher-quality traffic. The implied logic is that conversion rate is influenced by CPC, and that if your investment isn’t high enough, you’re missing the best clicks.

There’s a version of this that has some truth to it. Higher CPCs can mean higher ad positions, which can mean higher impression frequency against the same users. More frequency can drive higher aggregate conversion rates, because repeated exposure matters.

But the argument glosses over the other side of that equation. 

  • Higher frequency has diminishing marginal returns. 
  • The third impression is worth less than the first. The tenth is worth a lot less.
  • The cost curve isn’t flat. You’re paying more per click at every step.

In practice, raising CPCs to chase quality traffic is almost always correlated with substantially worse overall return on ad spend.

This is a variant of the marginal return problem seen across these cases. The pitch frames the upside without acknowledging the cost curve. More spend gets positioned as access to better outcomes, when it often delivers the same outcomes at a higher price.

What’s being misrepresented

CPC and conversion rate are presented as if higher bids unlock better traffic. In most cases, the incremental cost outpaces the incremental return. The pitch frames diminishing returns as an opportunity, rather than a constraint.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

4. The learning phase as a get-out-of-jail card

“If your Meta campaigns are underperforming, it’s because the algorithm just needs more time to learn.”

“Don’t make changes, and don’t reduce budget, just give the platform more data.” 

This is sometimes true. Machine learning systems need volume to optimize effectively, and premature intervention can reset progress.

But “it needs to learn” has become a catch-all explanation that’s almost impossible to disprove in the short run. It explains away poor CPAs, delays accountability, and keeps spend flowing when a reasonable advertiser might otherwise pull back and reassess.

There’s rarely a clear definition of when the learning phase ends, which makes it a moving target. The learning phase ends when performance improves. If performance doesn’t improve, more learning is prescribed.

What’s being misrepresented

A real technical concept is being used in ways that resist falsification. When there’s no defined endpoint and no stated criteria for success, “it needs to learn” serves as a blank check for budgetary continuity.

5. The metric pivot: When conversions fail, sell sentiment

In many cases, YouTube or display campaigns aren’t driving measurable conversions. The rep’s suggestion: let’s look at brand measurement. We can measure recall rates, positive sentiment, and intent to purchase. These are real signals of brand health, and they matter in the long run.

But the shift from conversion to sentiment metrics tends to occur when conversion metrics are poor, not as a principled measurement strategy. Brand lift surveys measure awareness under controlled conditions, but they rely on self-reported intent and don’t connect to downstream revenue.

Recall is almost never translated into a cost per point of lift that can be compared across the media plan. You end up with a number that’s positive and presented as evidence of success, with no agreed-upon framework for what sufficient lift would look like.

What’s being misrepresented

A softer metric is substituted for a harder one after the harder one fails. Brand lift is a legitimate measurement tool when defined upfront as a success criterion. Introduced afterward, it functions as a consolation prize.

Dig deeper: PPC mistakes that humble even experienced marketers

6. Upper funnel combined with lower funnel for a blended average

Upper-funnel and lower-funnel campaigns serve different purposes and perform differently on a cost-per-acquisition basis. When a channel reports blended CPA across all campaign types, an average that looks acceptable can hide the fact that some portion of the media plan is wildly inefficient at the margin.

The argument for blending is that upper-funnel spend creates the conditions for lower-funnel performance. That is plausible, but plausibility isn’t the same as demonstrated causality. 

Often, it’s assumed the upper funnel is directly contributing and that, in aggregate, the system is profitable and fully incremental. This is never the case.
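
To make the blending arithmetic concrete, here is a minimal sketch with invented numbers (the campaign names, spend, and conversion counts are hypothetical):

```python
# How a blended CPA can hide a wildly inefficient segment.
# All figures are invented for illustration.

campaigns = {
    "lower_funnel_search": {"spend": 40_000, "conversions": 800},  # $50 CPA
    "upper_funnel_video":  {"spend": 40_000, "conversions": 50},   # $800 CPA
}

total_spend = sum(c["spend"] for c in campaigns.values())
total_conversions = sum(c["conversions"] for c in campaigns.values())
print(f"Blended CPA: ${total_spend / total_conversions:,.2f}")  # ~$94: looks fine

for name, c in campaigns.items():
    print(f"{name}: ${c['spend'] / c['conversions']:,.2f} per conversion")
```

A blended CPA of roughly $94 can pass a target that half the budget, viewed on its own, would miss by an order of magnitude.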

What’s being misrepresented

Aggregate CPA can look fine while specific segments of spend have no measurable return. Blending is a reporting choice, and it can obscure where money is and isn’t working.

7. View-through conversions: The numbers that shouldn’t count

A view-through conversion is counted when a user sees an ad, doesn’t click it, and then converts within some attribution window, often 24 hours or more. Platforms report these alongside click-through conversions by default. 

For retargeting campaigns, which by definition serve ads to people who have already visited your site, view-through attribution is particularly problematic. These users were likely going to return and convert regardless. The ad may have had nothing to do with it.

The issue isn’t that view-throughs aren’t meaningful. For a cold audience, some brand-influenced conversions happen without clicks.

The issue is that those conversions are almost never broken out proactively (you have to ask). And when you remove view-throughs from retargeting campaigns, the ROAS numbers can change dramatically. 
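
The arithmetic is easy to sketch. In this invented example, the same campaign reads as profitable or unprofitable depending on whether view-throughs are counted:

```python
# View-through attribution (VTA) and reported ROAS, with invented numbers.

spend = 10_000
click_conversions = 80    # clicked the ad, then converted
view_conversions = 120    # saw the ad, never clicked, converted anyway
avg_order_value = 75

def roas(conversions: int) -> float:
    return conversions * avg_order_value / spend

print(f"ROAS with VTAs included: {roas(click_conversions + view_conversions):.2f}")  # 1.50
print(f"ROAS on clicks only:     {roas(click_conversions):.2f}")                     # 0.60
```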

We’ve seen cases where removing VTAs cuts reported conversions by more than half. I would note that by moving to incremental measurement options, Meta has become substantially more transparent.

What’s being misrepresented

View-through conversions inflate reported performance, particularly in retargeting, where incrementality is already low. Default reporting includes them without flagging the methodological problem.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

8. The competitor benchmark as a spending lever

This one is a pattern. A channel rep brings industry benchmark data to a meeting showing that your competitors are spending at a level above your current budget. The implication is clear: you’re being outspent, and you should close the gap.

Industry benchmarks are among the most valuable inputs a channel can provide. Knowing where you sit relative to the market is useful context for planning. The problem is how they get deployed. More often than not, benchmark data shows up as a tool to expand media spend, not as a neutral input into strategy.

And it works. CEOs and CMOs are particularly susceptible to this framing. Nobody wants to hear that a competitor is outspending them.

The emotional pull of “they’re investing more than you” is hard to counter with a measured conversation about marginal returns or strategic fit. The benchmark becomes the argument, and the argument is almost always “spend more.”

What gets lost is any discussion of whether:

  • The competitor’s spend is actually working for them.
  • Your business model and margins support the same level of investment.
  • The benchmark even reflects an apples-to-apples comparison.

Competitive spend data without context is just a number that makes your budget feel inadequate.

What’s being misrepresented

Benchmark data is real, but it’s selectively introduced to justify budget increases rather than treated as one input among many. The framing skips over whether the comparison is meaningful and relies on competitive anxiety to sell.

9. The default settings trap

This one is hard to frame as a single incident because it’s everywhere. I’ve talked to so many people trying to break into the industry, or launch their first campaigns, and the story is almost always the same. 

They follow the platform’s setup guide, accept the default settings, and end up opted into programs that have close to zero chance of being successful.

This is true across pretty much every major channel. 

  • LinkedIn defaults you into audience network inventory that runs outside the LinkedIn feed. 
  • Google opts you into display inventory when you’re trying to run search. Broad match is the default match type out of the box. Suggested CPCs are astronomical. 
  • Google’s geographic targeting defaults to “presence or interest” rather than actual location. 

Each of these defaults, taken individually, could be defended as a reasonable starting point. Taken together, they create a setup that maximizes the platform’s revenue from day one, before the advertiser knows what’s happening.

A new advertiser following the guided setup is accepting a configuration that the platform designed, and the platform’s incentives aren’t aligned with efficient spend.

This one is genuinely difficult to solve. Platforms need to provide default settings, and they can’t expect every new advertiser to understand every option. 

But there’s something predatory about the gap between what people think they’re signing up for and what they’re getting. The defaults are revenue-optimized for the channel, not performance-optimized for the advertiser.

What’s being misrepresented

Setup guides and default settings are presented as best practices when they’re actually configurations that favor the platform’s revenue. New advertisers trust the guided experience, and have no reason to suspect the defaults are working against them.

Dig deeper: Are you being manipulated by Google Ads?

10. The tracking gap as a faith exercise

Privacy regulations and platform changes have created real limitations in conversion tracking. GDPR and Apple’s App Tracking Transparency aren’t invented problems. 

We have less visibility than we used to, and the platforms have responded by layering probabilistic modeling and modeled conversions on top of deterministic tracking.

But the tracking gap has also become a convenient shelter for underperformance. The argument goes like this:

  • “The conversions are happening, we just can’t see them all yet. There’s latency in the data.”
  • “There are limits to what can be tracked. We need a longer attribution window.”
  • “We need more time for the modeled data to populate. And in the meantime, here are some proxy metrics that we think are directionally valid, so let’s keep pushing.”

Each of those can be true in isolation. Modeled conversions take time to appear. Attribution is harder than it was five years ago. Proxy metrics can be useful when direct measurement breaks down. 

The problem is when all of these caveats get stacked together and used to justify sustained spend in the absence of any measurable result. At some point, “the data will come in” stops being a reasonable expectation and becomes an article of faith.

The tracking gap is real, but it cuts both ways. If you can’t measure the result, you also can’t prove the spend is working. The platform’s default position is to assume it is, and keep going. The advertiser’s job is to ask what happens if the modeled conversions never materialize, and what the fallback plan looks like if they don’t.

What’s being misrepresented

Legitimate tracking limitations are used to defer accountability indefinitely. When measurement is hard, the platform’s recommendation is always to maintain or increase spend, never to reduce it. The uncertainty gets resolved in the channel’s favor by default.

What does this mean for AI-run campaigns?

None of this is an argument that agencies are irreplaceable in their current form. We used to question tCPA, and now it’s a preferred bidding strategy. Automation handles execution-level work that used to require skilled practitioners. In-house teams are viable for more companies than they used to be.

But the argument for fully autonomous, channel-run advertising assumes the channel will optimize for your outcomes rather than revenue. Even if we imagine new profit-sharing contracts, this assumption carries real risk.

And I’m not blaming reps or the channels. They believe in their products, but they’re also measured on metrics that create a predictable drift in how they frame data. I should note that agencies struggle with misaligned incentives as well.

The advertiser’s job, with or without an agency, is to keep asking the inconvenient questions.

  • What is the marginal return at this spend level?
  • What percentage of conversions are view-throughs?
  • What does performance look like if we exclude brand search?
  • Are we measuring incrementality, or are we measuring correlation, and calling it causation?

Maybe the answer to everything is eventually full automation. But the entity building the machine shouldn’t be the one telling you when it’s ready.

How marketing leaders are getting unstuck from Salesforce by Stitch

7 April 2026 at 15:00

For years, Salesforce Marketing Cloud was the safe choice.

Powerful. Enterprise. Trusted.

But lately, we’re hearing something different:

  • “Our data is too tangled to activate.”
  • “We’re locked into contracts.”
  • “We’re stuck sending the same emails on repeat.”
  • “Everything is Band-Aids and duct tape — I don’t know how we can move without breaking everything.”
  • “We feel stuck.”

Sound familiar? If so, this fireside chat is for you.

We’ve helped dozens of brands migrate off Salesforce and into modern, composable engagement architectures built for real CRM performance. Not because it’s trendy — but because marketers needed more speed, flexibility, and innovation.

In this April 14 session, we’ll cover:

  • Why brands feel stuck (and why it’s more common than you think).
  • What’s happening inside the Salesforce ecosystem.
  • The biggest misconceptions about migrating.
  • Understanding the martech landscape.
  • What life actually looks like after moving to a modern platform like Braze.
  • How CMOs and martech leaders should think about platform decisions over the next 3 to 5 years.
  • How to get the rest of your org on board with making a move.
  • The steps to take now to set yourself up for migration success.

To be clear: this isn’t a Salesforce-bashing session.

It’s a candid conversation about innovation velocity, marketing ownership, and what the next era of marketing actually requires.

Join us

Disclaimer: To ensure a candid and open conversation, the live session is open only to brand-side marketing leaders. Registrants who are not verified brand-side marketing leaders will not be permitted to attend the live session. However, the recorded session will be made available to all registrants upon completion of the event.

Are low-quality listicles about to lose their edge in Google Search?

6 April 2026 at 22:40
Google hammers listicles

If you rank your own product #1 in “best of” listicles, it’s not just a search-quality issue — it may violate FTC rules that took effect in October 2024.

Driving the news. As Lily Ray noted on LinkedIn, the FTC’s Consumer Review Rule (16 CFR Part 465) prohibits several deceptive practices tied to reviews and testimonials, including:

  • Presenting company-controlled content as independent reviews.
  • Publishing reviews of products or services never actually used.
  • Attributing reviews to people who didn’t write them.

Penalties can reach up to $53,088 per violation, and each page may count separately. Ray also shared a reference table, which she generated with the help of Claude.

Why now. “Best X” and “Top 10 Y” listicles have surged as a GEO tactic over the past couple of years. These pages often perform well in search and increasingly influence AI-generated answers.

The backstory. Before the rule was formalized, Ray said at least one company faced legal action for publishing hundreds of “best of” pages that:

  • Ranked its own services #1.
  • Included fabricated competitor reviews.
  • Used fake reviews on third-party platforms.

The Better Business Bureau later censured the company for unsubstantiated claims.

What’s happening. Many modern listicles follow a similar pattern:

  • A brand publishes a “best tools” list.
  • Includes competitors it hasn’t tested.
  • Uses subjective or invented scoring systems.
  • Ranks itself #1.

These listicles may imply independence or firsthand evaluation when neither exists.

The nuance. You can publish comparison content that includes your own product. However, based on FTC guidance, risk increases when:

  • You imply objectivity, but promote your own product.
  • You present reviews not based on real experience.
  • You fail to clearly disclose material relationships.

What Google is saying. Google is aware of the low-quality listicle trend. In a statement to The Verge, a Google spokesperson said the company applies protections against manipulation in Search and Gemini, and reiterated its guidance: create content for people and ensure it’s understandable to search systems.

Why we care. What has worked as a visibility tactic may carry risk on two fronts — regulators and a potential Google Search algorithm change. That means this popular GEO tactic could decline quickly as its effectiveness drops.

Caveat. I’m not a lawyer. Consult your own legal counsel if you’re concerned about using this tactic.

Before yesterday — Search Engine Land

Human content is 8x more likely than AI to rank #1 on Google: Study

6 April 2026 at 21:54
Human vs AI content Google Search

Human-written content dominates Google’s top rankings, appearing in the No. 1 position 80% of the time versus just 9% for purely AI-generated pages, based on a Semrush analysis of 42,000 blog posts.

The details. Semrush analyzed 20,000 keywords and their top 10 results, classifying content with an AI detector.

  • Human-written pages outperformed AI and mixed content across all top 10 positions.
  • The gap was widest at Position 1, where human content was 8x more likely to rank.
  • AI content appeared more often lower on Page 1, nearly doubling from Positions 1 to 4.

Yes, but. AI detection tools are widely known to be inconsistent and can misclassify human and AI-written content, creating some possible “fuzziness” in these classifications.

Why we care. AI-generated content works, until it doesn’t. Yes, AI can help you rank, but this data suggests human insight still drives the best performance. For competitive queries, originality, expertise, and editorial judgment remain your unfair advantages.

Perception vs. data. 72% of SEOs said AI content performs as well as or better than human content, yet ranking data showed a clear human advantage at the top.

How teams use AI. No surprise, AI is widely adopted and often used in a hybrid approach:

  • 87% of teams keep humans heavily involved in content creation.
  • 64% use a human-led, AI-assisted workflow.
  • AI is most common in research, drafting, and optimization.
  • Use drops sharply for multimedia, localization, and higher-judgment tasks.

What’s driving adoption. AI accelerates output, but doesn’t reliably improve it.

  • 70% cite faster production as AI’s top benefit.
  • Only 19% say it improves content quality.

About the data: The analysis examined 42,000 blog pages from 200,000 URLs tied to 20,000 keywords, using GPTZero to classify content. It also includes a survey of 224 SEO professionals working in content and search.

The study. Does AI content rank well in search? [Survey + Data study]

Bing, not Google, shapes which brands ChatGPT recommends

6 April 2026 at 20:03
Bing fanout queries ChatGPT

In this case study, we went deep instead of broad. We focused on one question: why was one brand almost never present in ChatGPT’s answers to a single prompt across ~70 iterations?

We chose one prompt: “What are the best hotels in New York City?” 

We analyzed mentions, citations, fanouts, and SERPs in Google and Bing. We also planned to analyze GPT memory, but it made no discernible difference to mentions, citations, or fanouts.

What we did and what we found

We chose NYC hotels because it’s a crowded, mature market with juggernauts and up-and-comers. We also have no connection to the NYC luxury hotel space — we intentionally picked an area where we could stay objective and learn from scratch.

After running the prompt “what are the best hotels in New York City” 68 times, we identified which hotels appeared most consistently and which were nearly invisible.

We chose the Baccarat Hotel as our “client” because it appeared only once (1.5% of the time), despite strong reviews and clear alignment with the prompt’s intent. We wanted to know why — and whether the hotel could change that.

Key findings:

  • You can dominate query fanouts on Google SERPs and still underperform in ChatGPT brand mentions.
  • Bing matters most. Ranking in Bing articles for fanouts aligns more directly with ChatGPT mentions — not just citations.
  • In verticals dominated by third-party content, you face complex digital PR paths to increase visibility.

Note: A full methodology breakdown appears in the appendix.

Mentions of the Baccarat vs. the Fifth Avenue Hotel show just how wide the disparity in ChatGPT visibility can be

The Baccarat Hotel appeared once in 68 trials (1.5%).

Top performers were large luxury hotels like the Four Seasons Hotel New York Downtown.

ChatGPT also identified boutique hotels as a subcategory, generating a secondary list in its answers. Boutique hotels like the Baccarat are typically smaller and not part of large chains.

Within this boutique subcategory, the Baccarat still underperformed. The Fifth Avenue Hotel, the top-performing boutique property, appeared 13 times (roughly 20% of trials), versus the Baccarat’s 1.5%.
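
The tally behind these mention rates is straightforward. Here is a hedged sketch of the counting (the sample outputs are placeholders, not the study’s actual transcripts):

```python
# Count how often each brand appears across repeated runs of one prompt.
# The outputs below are placeholders; the study collected 68 real answers.

from collections import Counter

brands = ["Baccarat", "The Fifth Avenue Hotel", "Four Seasons"]
outputs = [
    "For a luxury boutique feel, The Fifth Avenue Hotel stands out...",
    "Top picks: the Four Seasons and The Fifth Avenue Hotel...",
    # ...one string per trial
]

mentions = Counter()
for answer in outputs:
    for brand in brands:
        if brand.lower() in answer.lower():
            mentions[brand] += 1

for brand in brands:
    print(f"{brand}: {mentions[brand]}/{len(outputs)} trials ({mentions[brand] / len(outputs):.1%})")
```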

Reputation can’t explain visibility disparities

We first checked whether anything in the hotel’s history or reputation could explain the gap. As the table below shows, nothing significant did:

| | The Baccarat | The Fifth Avenue |
| --- | --- | --- |
| Year founded | 2015 | 2023 |
| Current price | $930 | $563 |
| Number of Google reviews | 1.3k | 213 |
| Google reviews rating | 4.6 | 4.6 |
| Number of Expedia reviews | 531 | 201 |
| Expedia reviews rating | 9.4 | 9.6 |

Overall, the Baccarat has been around longer and has more reviews. On quality, the Fifth Avenue Hotel has no edge in Google reviews and only a slight edge in Expedia reviews. The only area where the Baccarat lags is price — but that’s unlikely the issue when The Ritz-Carlton, a consistent non-boutique winner, is listed at $1,100.

Further reinforcing the Fifth Avenue’s underdog status: one of its most prominent Google results (rank 2) was a Wikipedia page for a different Fifth Avenue Hotel that closed in 1908, creating potential entity confusion similar to the two Danny Goodwins.

If the Fifth Avenue Hotel had been the one missing, it would suggest a less established brand with entity confusion. But the opposite happened — it prevailed in ChatGPT.

So what was the problem for the Baccarat Hotel?

Winning Google SERPs for query fanouts doesn’t help, but winning Bing SERPs does 

When ChatGPT performs a web search, it sends a series of queries you can extract via Chrome DevTools. In this case study, examples included:

  • [Best hotels in new york city]
  • [Top rated luxury hotels in new york city recommendations]
  • [Best hotels in nyc top luxury and boutique hotels new york]
  • [Best luxury and boutique hotels in new york city recommendations reviews]
  • [Best hotels in new york city nyc top hotels]
  • [Top hotels in nyc luxury boutique best places to stay new york city]

In total, we extracted 25 unique query fanouts.

What we saw in the Google SERPs

If we only looked at the articles dominating fanout SERPs in Google, we’d expect the Baccarat to narrowly outperform the Fifth Avenue in ChatGPT. That didn’t happen.

In the table below, the Baccarat “wins” three of the top 10 most frequently appearing pages, while the Fifth Avenue Hotel “wins” two. The other five feature neither. A “win” means one of the following:

  • Appearing when the other does not.
  • Appearing higher on the page.
  • Having more positive sentiment.

The data:

| URL | Who wins? | Notes |
| --- | --- | --- |
| https://www.forbestravelguide.com/destinations/new-york-city-new-york | The Baccarat | The Baccarat Hotel is #4 on the list; the Fifth Avenue Hotel is #13 and sits far below the fold |
| https://www.mrandmrssmith.com/destinations/new-york-state/new-york/hotels | Neither | Neither hotel appears on this list |
| https://guide.michelin.com/us/en/article/travel/the-best-hotels-in-new-york-all-the-michelin-key-hotels-in-the-city | The Fifth Avenue | The Baccarat is listed as a “one key” hotel, placing it at the bottom of the list; the Fifth Avenue Hotel is listed as a “two key” hotel, placing it in the middle |
| https://youshouldgohere.com/2025/01/best-boutique-hotels-new-york-city/ | Neither | Neither hotel appears on this list |
| https://travel.usnews.com/hotels/new_york_ny/ | The Baccarat | The Baccarat is #11 on the list, the Fifth Avenue Hotel is #16 |
| https://luxlifelondon.com/best-hotels-manhattan-new-york-city/ | Neither | Neither appears on this list |
| https://www.tripadvisor.com/Hotels-g60763-New_York_City_New_York-Hotels.html | Neither | Neither hotel appears on this list |
| https://www.lartisien.com/hotels/united-states/new-york | The Baccarat | The Baccarat is #5, the Fifth Avenue is #15 |
| https://www.cntraveler.com/gallery/readers-choice-awards-new-york-city-hotels | Neither | Neither hotel appears on this list |
| https://www.reddit.com/r/chubbytravel/comments/1n7jro1/which_luxe_hotels_are_people_loving_in_new_york/ | The Fifth Avenue | Both are mentioned, but the Fifth Avenue much more positively |

What we saw in the Bing SERPs

By contrast, looking only at the articles dominating fanout SERPs in Bing, we’d expect the Fifth Avenue to outperform the Baccarat in ChatGPT — and it did.

In the table below, the Fifth Avenue “wins” five of the eight most frequently appearing URLs.

Note: The table includes two fewer URLs because Bing SERPs were slightly less diverse for these fanouts.

The data:

| URL | Who wins? | Notes |
| --- | --- | --- |
| https://www.forbes.com/sites/forbes-personal-shopper/article/best-hotels-in-new-york-city/ | Neither | Neither appears on this list |
| https://www.timeout.com/newyork/hotels/best-luxury-hotels-in-nyc | The Fifth Avenue | The Fifth Avenue is #1, the Baccarat is #16 |
| https://robbreport.com/travel/hotels/lists/best-luxury-hotels-new-york-city-1237348563/ | The Fifth Avenue | The Fifth Avenue is #5 (and also wins the hero image/caption), the Baccarat is #11 |
| https://www.cntraveler.com/story/best-boutique-hotels-nyc | The Fifth Avenue | The Fifth Avenue appears, the Baccarat does not |
| https://www.travelandleisure.com/best-hotels-in-new-york-city-8612778 | The Baccarat | The Baccarat appears, the Fifth Avenue does not |
| https://www.tripadvisor.com/Hotels-g60763-zff12-New_York_City_New_York-Hotels.html | The Fifth Avenue | The Fifth Avenue appears, the Baccarat does not |
| https://www.cntraveler.com/gallery/best-hotels-in-new-york-city | The Fifth Avenue | Both are listed, but the Fifth Avenue appears under “Our Top Picks” |
| https://travel.usnews.com/hotels/new_york_ny/ | The Baccarat | The Baccarat is #11 on the list, the Fifth Avenue is #16 |

The connection between Bing visibility and brand mentions

Bing rank strongly predicts ChatGPT citations — 87% align with Bing’s top results, Seer Interactive found. Our case study supports this and extends it.

We examined the relationship between fanouts (Seer focused on prompts) and brand mentions.

Example mention: “For a luxury boutique feel: listings like The Fifth Avenue Hotel or Crosby Street Hotel consistently make ‘top NYC’ lists from travel editors.”

Mentions are often more valuable than citations. Most people won’t follow citations but will remember the top recommendation.

There’s ongoing debate about whether fanouts shape ChatGPT’s answers and mentions, or simply support answers generated from training data. For example, Leigh McKenzie argued on LinkedIn:

  • “The citations you see at the bottom? Those are surfaced after the answer is generated, not before. It’s post-hoc rationalization. The model didn’t choose your brand because it found your URL. It generated an answer based on what it already knows, then pointed to sources that support it.”

By contrast, our data aligns with Beehiiv’s research, which suggests citations do shape mentions.

Training data doesn’t appear to be the issue for the Baccarat. Compared to the Fifth Avenue, it’s older, has more reviews, and holds similarly high ratings across major platforms. What it lacks is strong presence in Bing results for fanouts and citations, which appears to lead to fewer mentions.

A simple flow might look like this:

  • Brand ranks in Bing → ChatGPT fanouts pull in Bing pages → ChatGPT synthesizes training and Bing data to generate mentions

Coda: A tale of two Forbes articles, or why the details matter

In this vertical, third parties like Forbes and Condé Nast control the space. Visibility depends on who mentions you, so you need a strong outreach strategy — not just updates to your own content.

Our data shows that “targeting Forbes” isn’t specific enough.

The top result surfaced in both Bing and ChatGPT was the same Forbes article. In Google, the most frequent fanout result was also a Forbes article — but a different one.

As we’ve seen, getting into Google’s Forbes article likely wouldn’t provide a meaningful boost. The Baccarat “won” in that piece.

Getting into Bing’s Forbes article, where the Baccarat wasn’t mentioned, could make all the difference. This requires a highly surgical approach grounded in Bing data.

Generalities won’t work; detail reigns supreme.

Appendix: Methodology

Model: We prompted GPT-5.2 Instant in the ChatGPT interface and manually extracted results; we didn’t use the API.

Number of iterations: We ran the same prompt 68 times.

Prompt: “What are the best hotels in New York City?”

Settings: We tested three memory states:

  • Saved memories off
  • Saved memories on, using unrelated real user memories
  • Saved memories on, with one memory about needing gluten-free travel accommodations

For all trials, we turned off “reference chat history” to avoid interference across iterations.

We expected differences based on memory settings but found none, so we treated all trials as a single dataset.

What we extracted:

  • All query fanouts.
  • Full ChatGPT text output.
  • Citations.
  • Google SERPs for all fanouts.
  • Bing SERPs for all fanouts.

SEO in 2026: Higher standards, AI influence, and a web still catching up

6 April 2026 at 19:00
SEO in 2026: Higher standards, growing AI influence, and a web still catching up

Is it possible to get an accurate view of the current state of SEO?

There have been multiple attempts to reach consensus on what works, predict what might be coming, and identify the factors that may play a role in “good” (or “bad”) SEO.

As useful and productive as some of this may be, none of it offers the same grounded data as the Web Almanac, a project I was honored to be a part of. With the publication of the 2025 SEO chapter, we can now review the data and spot the emerging trends from 2025 and what that could mean for SEO in 2026.

SEO standards on the rise

2025 has been another year of rising SEO standards — which can only be a good thing:

  • Near-universal adoption of HTTPS (now up to 91%+).
  • Increased use of title tags at nearly 99% adoption, and even viewport meta tags at over 93% adoption.
  • Canonical adoption rose from 65% in 2024 to 67%+ in 2025.
  • HTML validity is slowly improving. For example, invalid <head> elements dropped to 10.1% on desktop and 10.3% on mobile from 10.6% and 10.9%, respectively, in the previous year.
  • Robots.txt error rates fell: 404s declined to 13% from 14% the previous year, and 5xx responses fell to ~0.1%.
  • Meta robots usage has crept up to 46.2% in 2025 from 45.5% the prior year.

Not all of these statistics represent rapid change, but they do show steady and consistent change, at the very least. The 2025 Web Almanac data presents the web as a more secure and easier-to-crawl place, which is certainly a positive. 

So, can SEOs take a victory lap right now? No, as there is more to do in 2026, even if the basics do feel like they’re stable or steadily improving.

The cementing of SEO ‘defaults’

Content management systems (CMSs) and SEO plugins play a huge role in developing SEO best practices and cementing the “default” or de facto standards.

As the CMS chapter in the 2025 Web Almanac shows, more and more websites are now powered by a CMS.

Of these, the top five most popular systems over the last four years likely aren’t surprising.

Frequently underpinning many SEO defaults are the SEO plugins typically used on WordPress sites.

That’s not to say that using these platforms or tools ensures a perfect website setup. That said, key elements or functions of these tools can become industry standard due to their ubiquity:

  • Robots.txt.
  • Sitemap.xml.
  • Canonical tags.
  • Semantic HTML.
  • Structured data.

Not all of these are on by default. Sometimes they require inputting basic details or simple implementation. Regardless, their ease of access increases the likelihood that they will become an SEO best practice.
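
For reference, this is roughly what those defaults look like in a rendered page’s <head>. This is a representative, hypothetical snippet rather than any specific plugin’s exact output:

```html
<!-- Representative plugin defaults; URLs and values are hypothetical. -->
<link rel="canonical" href="https://example.com/blog/post-slug/">
<meta name="robots" content="index, follow, max-image-preview:large">
<meta name="viewport" content="width=device-width, initial-scale=1">
<script type="application/ld+json">
  {"@context": "https://schema.org", "@type": "Article", "headline": "Post title"}
</script>
```

Add to that a robots.txt and sitemap.xml generated at the site root, typically without the site owner ever touching either file.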

This is happening, and it’s proving effective. What this means for 2026 and beyond is that:

  • Working with or lobbying major platform and tool makers is one of the key ways to shape SEO’s future direction.
  • SEO tools and platforms will continue to enforce best practices on the front end, but they could also benefit from AI and assistive features behind the scenes. While it may be less visible in the data itself, these tools offer the opportunity to move quickly and gain deeper insight.
  • Structured data usage was previously driven by what Google rewarded in the search engine results pages (SERPs). SEOs and plugin developers alike could be inspired to move beyond what’s beneficial for the SERPs and onto what contributes to a more predictable, structured, and retrievable data set.

Deprecated, but not forgotten

Defaults and best practices help, but they don’t finish the job. While attention often shifts to new features, old or forgotten standards still see widespread use.

There have been many different cases where deprecated settings or standards have prominently appeared in the data.

  • For example, in meta robots bot declarations, “msnbot” is still in the top 5, even though it was replaced over 16 years ago.
  • AMP use has plummeted over the years, but it’s still found on over 38,000 homepages. While technically not deprecated, amp.dev has seen no recent activity for nearly four years now.
  • The most common meta robots attributes are “index” and “follow,” which are implicit and largely ignored.

Web changes — no matter how small — are often neither quick nor easy to get done, and we’ll likely see traces of deprecated features and settings in the data for years to come.

More work is needed

The improvement in SEO standards doesn’t apply to all features and sites. There are some that aren’t moving in the same direction:

  • The mobile performance gap stubbornly lingers — even as it continues to improve.
  • Duplicate content management is still lagging, with nearly 33% of pages missing canonical implementation.
  • Advanced configurations have barely moved from the previous year — nearly 67% of images and over 91% of iframes still don’t have loading attributes set.
  • Many deprecated standards refuse to go away.

While CMS default settings or configurations can take credit for some of the larger changes, they also bear some of the responsibility for the issues above. For example, median Lighthouse scores for some of the major CMS platforms are still lagging, especially on mobile (while seeing increases over last year).

The long tail of the web is still messy, and this will probably always be the case. The Web Almanac dataset doesn’t exclude websites that are no longer relevant or abandoned.

Site metrics that meet the “top” standards from an SEO best practices point of view can likely be achieved with an out-of-the-box site built on any major CMS with a modern theme and 30 mins of carefully considered configuration. This is one of the most significant opportunities in technical SEO.

In 2026, we’ll likely:

  • Continue to see performance gaps converge between desktop and mobile experiences — but slowly.
  • Still be able to see echoes of past markup and decisions. Even if the collective focus is pulled to the “new world” of AI search, many SEOs won’t abandon proven tactics and approaches from past years. This dataset develops slowly.
  • Observe something that’s mostly “business as usual.”

Charting the impacts of AI

One of the more eagerly awaited elements of the Web Almanac data was whether we can chart the increasing presence and impact of AI search and crawlers in the decisions of SEOs and developers.

Within the data, we observed two major developments:

  • Robots.txt is increasingly used as a policy document rather than a crawler-control file.
  • Creation and adoption of llms.txt is one of the few signs of LLM-first decision-making.

Commenting on the state of SEO is challenging because the definition isn’t fixed. What’s good or bad practice is often hotly debated, and in the world of AI search, another (painful) metamorphosis is now taking place.

In the HTTP Archive data we can observe the influences working on SEO from a “nuts and bolts” point of view, report on what we see, and enable people to make up their own minds.

Specifically, one of the elements we added this year was the analysis of the llms.txt file. 

This is a highly controversial text file, but our inclusion was not an endorsement. It’s a recognition that changing trends may (or may not) shape the web. Whether it’s effective or accepted, its adoption says something, and we felt it was important to review that.

Robots.txt as a bouncer

It’s clear that robots.txt has a more important job now than ever. Until relatively recently, it was largely used for targeted control of crawlers, particularly Googlebot and Bingbot. 

For most SEOs, however, robots.txt was mostly an exercise in both ensuring we weren’t blocking anything by accident and resolving problem areas with Disallow rules. This has changed:

  • Gptbot: 4.5% on desktop and 4.2% on mobile in 2025 is up from 2.9% on desktop and 2.7% on mobile in 2024, representing a ~55% increase.
  • Ccbot: 3.5% on desktop and 3.2% on mobile in 2025 is up from 2.7% on desktop and 2.4% on mobile in 2024.
  • Petalbot: 4.0% on desktop and 4.4% on mobile in 2025 (not separately tracked in 2024).
  • Claudebot: 3.6% on desktop and 3.4% on mobile in 2025 is up from 1.9% on desktop and 1.6% on mobile in 2024, nearly doubling.

Robots.txt isn’t the only way to manage bots — and arguably isn’t the best — but it introduces a new decision that must be made: How should websites handle LLM crawlbots?
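
In robots.txt terms, that decision looks something like the following. This is a hypothetical policy, not a recommendation; the user-agent tokens are the ones these crawlers publish:

```
# Hypothetical robots.txt expressing a per-bot LLM policy.
# Which bots to allow is a business decision; this only shows the mechanics.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Disallow: /admin/
```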

This will be one of the biggest areas we’ll see change in on the technical side of 2026:

  • Businesses with existing bot strategies will need to evolve them.
  • Businesses that don’t meaningfully manage crawlers will start feeling the pressure to do so.
  • Robots.txt will still be the clearest and easiest way to handle crawlers. We will almost certainly see more good and bad bots alike.

In 2026, SEOs will be drawn into bot management conversations spanning marketing, technology, and security. “Which bots should we allow?” is a question with downstream effects on budgets, revenue, and users, and we’ll need to closely monitor what develops.

LLMs.txt

LLMs.txt is an aspiring web standard that aims to guide LLM crawlbot behavior and make it easier for them to retrieve content before generating an answer. It’s a highly controversial .txt file, and there’s a vigorous debate on whether it actually benefits LLMs, will gain widespread use, and is a possible vector for manipulation.

The rationale or efficacy of this file isn’t something we need to cover here. For this article, the true point of interest with llms.txt is the adoption of this file as a statement of intent. 
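
For context, the proposed format is a small markdown file served at the site root. Here is a minimal, hypothetical example following the draft spec (the site and links are invented):

```markdown
# Example Co

> Example Co makes scheduling software for small teams.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): setup in five minutes
- [API reference](https://example.com/docs/api.md): endpoints and authentication

## Optional

- [Company history](https://example.com/about.md)
```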

At the start of 2025, I crawled the Majestic Million, a regularly updated list of the top 1 million websites ranked by backlink authority, in search of llms.txt and found that adoption was extremely low (just 15 sites, or 0.0015%). 

While searching one million sites versus 16 million presents some logistical differences, I was expecting a very low level of adoption based on prior experience. I was surprised at how wrong I was.

According to the 2025 data, just over 2% of sites had a valid llms.txt file, and:

  • 39.6% of llms.txt files are related to All in One SEO (AIOSEO).
  • 3.6% of llms.txt files are related to Yoast SEO.

This number is still relatively low, but it’s much higher than I thought it would be and potentially represents a huge acceleration.

The primary reason fueling adoption of llms.txt is SEO plugins that make it easy to enable.

We can see that llms.txt adoption has continued to rise ever since we started collecting data from across the web.

If, however, the implementation of this file is actually a default feature in some scenarios, it could be easy to overvalue its significance.

LLMs.txt will still be a barometer of AI search decision-making in 2026:

  • More tools and plugins will offer this functionality if they don’t already.
  • Yoast and Rank Math (which don’t default llms.txt to “on”) represent more growth opportunities for this file. Many SEOs may decide to switch it on even if there isn’t strong evidence of its efficacy.
  • The rate of adoption will continue to climb, but whether it’ll reach a point where it becomes an accepted best practice is harder to forecast.

FAQ growth

Another interesting trend worth discussing is the increase in the use of the FAQPage schema. 

While this isn’t as explicit a trend as robots.txt or llms.txt usage, the increased adoption of this schema type is particularly interesting.

Since Google said it was limiting the appearance of FAQ snippets in search results, you’d be forgiven for thinking the implementation of this schema type might plateau — or even fall.

However, you can see from the last three publications of the Web Almanac that this isn’t the case.

The use of FAQPage schema is now an emerging trend as AI search heavily cites FAQ content in its outputs.

This could be correlation rather than causation, but the steady increase in FAQPage schema is a strong sign of AI search strategies changing the shape of the web.
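
For anyone who hasn’t implemented it, FAQPage markup is a small JSON-LD block. The question and answer below are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How long does delivery take?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Most orders arrive within three to five business days."
    }
  }]
}
</script>
```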

To echo another conclusion from earlier, 2026 may well see continued growth of structured data types even if they don’t result in an obvious improvement. While the growth is unlikely to be explosive, making a case for their implementation is easier when we don’t just optimize for Google.

Not a rewrite: A new layer on top of SEO

Will AI search reshape the web in 2026? Unlikely. Will we continue to see signs of its importance? Almost certainly, but let’s not get carried away. 

SEO has a reputation for changing quickly. Sometimes that’s true. More often, it’s the conversation that moves quickly, while the web itself changes at a steadier pace.

The 2025 Web Almanac data clearly reflects that tension. Core SEO hygiene continues to improve year over year, but largely through default features and settings, tools, and platform behavior rather than deliberate optimization.

At the same time, long-deprecated standards linger, advanced configurations remain uneven, and the long tail of the web remains untidy. Progress is real, but it’s incremental — and sometimes accidental.

What has shifted meaningfully is intent.

  • Robots.txt is no longer just crawl housekeeping. It’s becoming a policy surface.
  • LLMs.txt, regardless of whether it proves useful, represents a new class of decision-making entirely.
  • FAQ patterns are on the rise again, and not because of SERP features, but because structured, extractable answers have immense value elsewhere. 

2026 will not be remembered as the year SEO ended or was reborn. It may, however, be considered the year the AI search layer became more defined. A new patch applied — not a fundamental rewrite.

For a deeper dive into the data behind these trends, explore the 2025 Web Almanac SEO chapter.

How to design content that AI systems prefer and promote

6 April 2026 at 18:00

Most guidance on optimizing for AI still focuses on how content is written. But AI systems don’t read content the way humans do. These systems extract information, break it into parts, and reuse it in new contexts. What matters is whether your content can be pulled into an AI-sourced answer cleanly.

Where traditional SEO has centered on ranking pages, AI systems prioritize retrievable units of meaning. That changes how content needs to be built:

  • From pages → passages
  • From narratives → modular blocks
  • From keywords → structured intent 

The shift is structural: Content that performs well in this environment is designed to be extracted, recombined, and attributed.

How AI systems actually use your content

To design for AI usefulness and visibility, you need a basic model of how content is selected and used.

Retrieval favors structure

AI systems segment content into passages and retrieve those independently. That has a few implications:

  • A single section can be selected without the rest of a page.
  • Sections within the same article compete with each other.
  • Clear boundaries (headings, sections) improve AI retrieval.

When structure is unclear, the signal becomes less reliable, even when the topic is relevant.
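To make the segmentation idea concrete, here's a minimal sketch of heading-based chunking, assuming markdown-style content. The splitting rules are illustrative, not any specific AI system's implementation:

```python
import re

def split_into_passages(text: str) -> list[dict]:
    """Split markdown-style text into passages, one per heading-bounded section."""
    passages = []
    heading = "Untitled"
    body: list[str] = []

    def flush():
        joined = " ".join(body).strip()
        if joined:
            passages.append({"heading": heading, "text": joined})

    for line in text.splitlines():
        match = re.match(r"^(#{1,6})\s+(.*)", line)
        if match:
            flush()                      # close out the previous section
            heading = match.group(2).strip()
            body = []
        elif line.strip():
            body.append(line.strip())
    flush()
    return passages

doc = """
## What is timeboxing?
Timeboxing is a time management technique that allocates a fixed period to a task.

## How to timebox
Pick a task, set a timer, and stop working when the timer ends.
"""

for p in split_into_passages(doc):
    print(f"{p['heading']}: {p['text'][:60]}")
```

Notice that each chunk carries only its own heading and body. Anything a section borrows from earlier context simply isn't there when the passage is retrieved on its own.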

Generation favors clarity and completeness

After retrieval, content is used to generate an answer. AI systems tend to favor passages that:

  • Answer the query directly.
  • Require minimal rewriting.
  • Can stand on their own.

This is where “low edit distance” shows up in practice. Content that can be used as-is has an advantage.
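Edit distance is also measurable. A quick sketch using Python's standard difflib scores how much rewriting separates a source passage from a candidate answer; both passages here are hypothetical:

```python
import difflib

def edit_similarity(source: str, answer: str) -> float:
    """Return a 0-1 similarity score; higher means less rewriting was needed."""
    return difflib.SequenceMatcher(None, source.lower(), answer.lower()).ratio()

passage = "Timeboxing is a time management technique that allocates a fixed period to a task."
generated = "Timeboxing is a time management technique that gives each task a fixed period."

score = edit_similarity(passage, generated)
print(f"similarity: {score:.2f}")  # close to 1.0 means low edit distance
```

Running this against your own passages and the answers AI systems actually produce is a rough but useful way to see how "quotable" your copy is.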

Attribution favors distinct, ownable framing

AI systems also decide what to cite. Content is more likely to be attributed when it includes:

  • Defined concepts.
  • Clear frameworks.
  • Language that isn’t interchangeable.

If a section reads like a generic summary, it’s easier to replace with another source.

The 5 core principles of AI-preferred content design

When content is retrieved in pieces, used in generated answers, and selectively attributed, structure becomes the lever. These principles show up consistently in content that gets surfaced by AI systems:

1. Modular by design

Content is more useful when it’s built in discrete units. Each section should:

  • Address a specific question or subtopic.
  • Be understandable without relying on surrounding text.

Long sections that depend on earlier context are harder to reuse in isolation. Modular structure also makes content easier to update, test, and repurpose across surfaces — without rewriting the entire page.

2. Hierarchically structured

A clear hierarchy helps systems understand what each section contains and how it relates to the rest of the page. H2 → H3 → H4 structure should signal:

  • Topic: What the section is about.
  • Intent: What question it answers.
  • Scope: How narrow or specific it is.

Headings should make each section’s purpose immediately clear. When that signal is weak, it becomes harder to match the right section to the right query.

3. Explicit over implied

AI systems rely on what’s stated directly. Make relationships and conclusions clear by:

  • Defining terms when they’re introduced.
  • Stating outcomes or takeaways directly.
  • Clarifying cause-and-effect or comparisons, rather than implying them.

If something is important, it should be written plainly. Copy that requires inference is harder to interpret and more likely to be skipped in favor of clearer alternatives.

4. Answer-first formatting

Place the direct answer to the section’s core question at the top, then expand. 

AI systems prioritize passages that resolve a query immediately. When the answer is delayed or embedded within a longer explanation, the relevance of that passage becomes less obvious.

Answer-first formatting requires that the opening lines:

  • Resolve the core question directly
  • Use language that clearly maps to the query
  • Avoid unnecessary setup or context

The rest of the section can then add deeper nuance, examples, or other details that further understanding without changing the core response.

5. Designed for passage-level extraction

Passages compete for selection, both within the same article and across the web.

When multiple sections address the same question in similar ways, they dilute each other. Clear, specific, and well-scoped content “chunks” are more likely to be selected.

You can audit a passage’s usefulness by asking:

  • Is it understandable without additional context?
  • Does it fully answer a single question?
  • Can it be quoted as an answer without any editing?

If the passage needs context or cleanup, it’s less competitive.

Common content patterns that improve AI retrieval and use

These patterns show how structured, answer-first content is applied in practice — making it easier for AI systems to match, extract, and use.

The ‘definition + expansion’ block pattern

Start with a clear definition. Then add detail. This works best for:

  • Concepts.
  • Terminology.
  • Processes.

The definition should establish what something is in a way that can be quoted independently. The expansion then adds context, nuance, or examples.

This pattern helps position your content as a reference point for core concepts — especially when AI systems need a clean, authoritative definition.

The ‘question → direct answer → context’ pattern

AI systems are designed to respond to queries. This pattern aligns your content to that structure.

Order your content as:

  • Question.
  • Immediate answer.
  • Supporting detail.

The answer should resolve the query in one to two sentences, using the same language or phrasing as the question where possible. 

Remaining content can add depth through nuance and edge cases that extend beyond the core answer.

The ‘framed list’ pattern

Lists work best when they’re introduced by a clear framing sentence that tells the reader — and the retrieval system — what the items represent. The items themselves should:

  • Follow a consistent structure (e.g., all actions, all criteria, all features)
  • Stay at the same level of detail
  • Clearly map back to the framing sentence

This pattern works especially well for steps, criteria, features, and takeaways.

Well-structured lists are easier for systems to parse and reuse, especially when each item is clearly defined within the context of the list.

The ‘comparison’ pattern

Structure content to make differences explicit. This works well for alternatives (“X vs Y”), tradeoffs, and decision-making criteria. You can use:

  • Side-by-side comparisons.
  • Clear evaluation criteria (price, features, use case, limitations).
  • Direct statements of when to choose each option.

Content that clearly outlines differences is easier for AI systems to extract and reuse in answers that involve evaluation or recommendations.

Top content design mistakes that limit AI visibility

Most AI surfacing issues come back to content structure. When structure is weak, answers are harder to identify and extract. That tends to show up in the form of:

Overly narrative, under-structured content

Long paragraphs with key points buried inside make it harder to isolate a clear answer. Without strong subheadings to define what each section covers, systems have fewer signals to identify where that answer lives.

Ask:

  • Does this section answer a clear question, or just explore a topic?
  • Is the main point easy to identify in the first few lines?
  • Do the subheadings clearly signal what each section contains?

Vague or non-descriptive headers

Headers like “Overview,” “Introduction,” or “Key Takeaways” don’t provide enough signal about what the section actually contains.

Headings help systems understand what a section covers and how it relates to a query. When they’re vague, the relationship between section and query becomes less explicit.

Ask:

  • Would this header make sense out of context?
  • Does it clearly reflect the question or topic being answered?
  • Could multiple sections on the page use the same header? 

Answers buried mid-paragraph

When the answer appears halfway through a paragraph, it’s harder to isolate as a clean, reusable unit.

AI systems look for segments that clearly resolve a query. When the answer is embedded within surrounding context, it becomes less distinct and more likely to be overlooked or reassembled.

Ask:

  • Is the answer clearly distinguishable from the neighboring text?
  • Does contextual copy clarify or dilute the answer’s main point?

Redundant or repetitive sections

When sections overlap, they compete for the same query and weaken the overall signal. Instead of reinforcing the topic, similar sections can fragment it across multiple passages, making it less clear which one should be selected.

Ask:

  • Do multiple sections answer the same question in slightly different ways?
  • Is each section clearly scoped to a distinct angle or subtopic?

Clear separation improves both retrieval and selection.

How to evolve existing content for AI without starting over

Most teams don’t need to rebuild content from scratch. Updating existing content for today’s landscape just requires a few structural changes.

Break content into logical units

  • Identify where natural sections exist and what question each one answers.
  • Split broad or mixed sections so each one resolves a single idea or query.
  • If a section covers multiple points, separate them into distinct sections.

Rewrite for answer-first clarity

  • Move the clearest version of the answer to the top of each section.
  • Remove lead-in language, qualifiers, or examples that appear before the answer.
  • Ensure the opening lines can be understood without relying on the rest of the page.

Strengthen structural signals

  • Make headings specific enough to reflect both the topic and the question being answered.
  • Use formatting (lists, short paragraphs, summaries) to make key points easier to scan and isolate.
  • Check that each section’s purpose is immediately clear from its heading and first sentence.

Introduce distinct framing

Turn generic sections into clearly defined units. For example, reframe an “Overview” heading as the specific question the section answers.

Ensure each section covers a distinct angle and does not repeat or overlap with others. This helps consolidate signal and makes it easier for systems to select and attribute the right passage.

The future of content design in AI-mediated search

AI systems are already reshaping how content is surfaced, and that shift will continue as answers become more personalized and draw from multiple sources.

As a result, page-level ranking matters less on its own. Content value is shifting toward contribution — how clearly a piece of content can inform, support, or shape an answer.

The content that performs best will be:

  • Structurally clear, with sections that are easy to identify and extract.
  • Modular, so individual passages can be selected and reused independently.
  • Distinct, with clearly defined ideas that don’t overlap or compete internally.
  • Designed to be selected and used, not just indexed or ranked.

Content that meets these criteria is more likely to be surfaced, reused, and attributed as AI-mediated search continues to evolve.

How to produce content that naturally builds AEO clout

6 April 2026 at 17:00
How to produce content that naturally builds AEO clout

For a long time, links were the primary signal of authority in search. If you wanted visibility, you built backlinks. If you wanted credibility, you earned placements. That still matters — but it’s no longer enough.

In AI-driven search, authority is shaped by how often your brand is mentioned, cited, and clearly associated with a topic. Visibility comes from being referenced in AI-generated answers.

With that shift in mind, the goal is to create content that earns consistent brand mentions and citations — the signals that now drive AEO visibility.

The philosophy driving content that fuels AEO growth

In 2026’s organic discovery landscape, authority incorporates entity recognition.

Across Google and LLM-driven experiences like ChatGPT and AI Overviews, authority is reinforced through:

  • High-quality backlinks.
  • Brand mentions (linked or unlinked).
  • Consistent citations across trusted publications.
  • Clear entity associations (who you are, what you’re known for, and what topics you “own”).

Since LLMs synthesize information instead of ranking pages, you need repeatable, credible mentions across the web to strengthen your brand’s likelihood of being cited or referenced in AI answers. Importantly, you also need to use your owned media to define your brand entity very clearly.

That makes building authority even more critical. Your content is now battling even more competition in the form of AI results in the SERP and AI-produced content from other publishers.

The TL;DR is that you need to establish a clear brand and, underneath that brand, create content that’s so valuable that other experts, journalists, creators, and AI systems repeatedly reference your brand when they’re discussing a topic core to your business.

Dig deeper: How to build an effective content strategy for 2026

The principles and formatting of AEO-friendly content

You’ll use many of the same SEO principles as a base for AEO-friendly content. Content aligned with Google’s helpful content guidelines — focused on value and user experience — appeals to the people (and LLMs) discussing these concepts and sourcing experts to validate their positions.

That said, to produce truly AEO-friendly content, you need to incorporate formatting that supports LLM extraction.

Key formatting principles include:

  • Clear definitions: Have short, clean definitions high on the page:
    • “X is…”
    • “Y refers to…”
  • Structured formatting:
    • Use descriptive H2s and H3s.
    • Employ bullet points.
    • Keep paragraphs short.
    • Include direct answers under question-based headers.
  • Explicit context:
    • Avoid vague pronouns and implied references.
    • Remember that LLMs perform better when context is explicit and self-contained.
  • Summary sections: 
    • TL;DR blocks.
    • Key takeaways.
    • FAQs.
  • Entity reinforcement (see the sketch after this list):
    • Brand name.
    • Author expertise and authority.
    • Brand and author credentials.

The specific objectives for your AEO content to address

If you’re solely focused on AEO, I’d approach your content with these objectives in mind:

  • Be highly citable: Include original data or perspectives a journalist or influencer would use in media like podcasts, expert roundups, contributor columns, or co-marketing content.
  • Be highly quotable: Provide at least one clean, quotable insight.
  • Be specific: Answer specific questions an AI system would try to answer. You should be able to clearly articulate a question your content answers — and answer it verbatim with a section or paragraph in your content.
  • Be clear: Define a topic in an easily extracted manner. 

To address these objectives, it can be helpful to think beyond blog posts to ideate “reference-grade” assets, including:

  • Original research.
  • Data studies.
  • Industry benchmarks.
  • Visual explainers.
  • Definitive guides.
  • Glossaries.

Dig deeper: How to create answer-first content that AI models actually cite

Practical steps to build AEO authority with content

Here’s how to turn those principles into a repeatable process for building AEO authority:

  • Research keywords where bloggers and journalists are searching for references (these keywords often include “statistics” or “reports”). Use Reddit, Quora, X, Ahrefs (Matching terms report), and Exploding Topics as research sources.
  • From those keywords, build a list of topics around which your team has the expertise to share valuable insights and perspectives.
  • Research a list of writers and journalists who cover those topics.
  • Find expert resources (either internal or closely connected) and interview them to build a cache of content.
  • Refine and develop that content into timely insights using Google Trends and social listening, applying timing and audience modifiers to heighten relevance.
    • Example: Get a list of tips from an expert targeted to help hay fever sufferers (niche audience/modifier) get a better night’s sleep (core topic/target) during a particularly bad high pollen count period (relevance).
  • Pitch a group of writers and journalists who cover your theme and/or sub-theme on why this matters right now, and how it’s different from other content they might find to reference.
  • If (or even before) those writers and journalists link to your content, follow them on their social channels to deepen your connection for future opportunities.

Dig deeper: Organizing content for AI search: A 3-level framework

Create content worth referencing

Writing for AEO isn’t at odds with writing for humans. Even from its early days, AEO has shared many of the SEO fundamentals that come from appealing to actual users.

That said, there are enough differences with the way LLMs extract and digest content (and the way users ask LLMs for information) that you need to keep specific nuances in mind in your content approach. 

With a clearly defined brand on your owned media, and an understanding of the tenets of AEO and how to address them, you should have a good idea of how to leverage your team’s expertise for greater visibility across the AI search landscape.

Guest post outreach in 2026: A proven, scalable process

6 April 2026 at 16:00
Guest post outreach in 2026: A proven, scalable process

Since 2021, I’ve worked on more than 350 published guest posts. In that time, I’ve refined a repeatable guest posting outreach process that consistently drives approvals without ever paying for a placement.

Although guest blogging is becoming more difficult, the basics of personalized guest posting outreach remain the same. If your mindset is to create mutual value, this process will work for you in 2026 and beyond.

Step 1: Build your outreach list

Your outreach list is a collection of the websites you’ll email to offer guest-written content. You can build your list in several ways.

The easiest way to find potential websites is by googling your niche alongside “write for us.”

Plenty of reputable websites openly accept guest posts and have an established approval process you can find online. That’s the exact approach I used to publish an article on G2’s Learning Hub.

Alternatively, search the name of a prominent person in your niche and add keywords such as “guest post,” “guest author,” or similar. Chances are that if a website has published guest posts from someone in your industry, they’ll be receptive to accepting guest posts from you as well.

Browse your competitors’ backlink profiles with an SEO tool. In Semrush, Backlinks is one of the SEO tools under Link Building.

To refine your list, verify which websites have previously published content from guest authors. If, however, all articles on a blog are written in-house and you’re not the Beyoncé of your industry, chances are your guest posting pitch will go unnoticed.

Once you’ve gathered a list of sites that potentially accept guest posts, run them by your website quality criteria.

Consider the website niche, top pages, organic traffic over time, countries where the traffic is coming from, authority score, and outgoing backlinks. You can also automate this step with the API of your favorite SEO tool.
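As a sketch of what that automation could look like, the snippet below filters prospects against quality thresholds. The endpoint, response fields, and thresholds are hypothetical stand-ins; substitute whatever your SEO tool's API actually exposes:

```python
import requests

# Hypothetical endpoint and fields; substitute your SEO tool's real API.
API_URL = "https://api.example-seo-tool.com/v1/domain-overview"
API_KEY = "your-api-key"

# Example quality thresholds; tune these to your own criteria.
MIN_AUTHORITY = 30
MIN_MONTHLY_TRAFFIC = 5_000

def passes_quality_bar(domain: str) -> bool:
    response = requests.get(
        API_URL,
        params={"domain": domain},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    return (
        data["authority_score"] >= MIN_AUTHORITY
        and data["organic_traffic"] >= MIN_MONTHLY_TRAFFIC
    )

prospects = ["blog-one.com", "blog-two.com"]
shortlist = [d for d in prospects if passes_quality_bar(d)]
print(shortlist)
```

Even a rough script like this turns a day of manual spot-checking into a shortlist you can review in minutes.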

Step 2: Find the right contacts

Even the best guest post outreach will fail if you’re writing to the wrong person.

Most people ignore emails that aren’t relevant to them, and they rarely forward them to the right colleague.

That’s why you need to do your homework. There’s likely a specific department or person you should be addressing.

Here’s how to find the right person through LinkedIn:

  • Open the company LinkedIn profile and select the People tab.
  • Type relevant keywords into the search bar to filter out profiles. You’re looking for a person who decides what content goes on the blog.

To do this, you can type “content” and browse the results for a content manager, content editor, or similar.

In smaller companies, you can search for “marketing” or “growth” to find who’s the one-person marketing team.

For micro companies, your best contact person might be the founder or co-founder.

  • Use Apollo or Hunter to find the work email of the best contact you find.

Sometimes, you’ll come across companies that have no listed employees on LinkedIn, or their emails are not available. In this case, your only option might be a generic email such as contact@ or support@. For micro companies or in certain niches (typically B2C websites), these emails can still work.

  • Verify all email addresses. Many outreach tools have built-in email verification features.

This step helps you protect your sender reputation and ensures your emails end up in the inbox, not the spam folder.

Step 3: Choose your outreach approach

There are two distinct ways to approach guest posting outreach.

Send out a generic email template with basic personalization

Ask whether the website accepts guest-written content. This way, you don’t invest a lot of time upfront into every pitch and your only focus is on building an outreach list.

As the emails aren’t highly personalized (they usually just include the names of the person and the company), they generate a moderate reply rate. 

To drive results with this approach, you need a large outreach list so you’ll still get enough opportunities to work with at a 3% to 5% reply rate.

Hyper-personalize your emails

The email you send to company A offers something completely different than the email you’re sending to company B. It takes a lot of time to research and tailor your pitch, but it also enjoys a higher reply rate (around 19%, from my experience).

This approach works best when you have a small outreach list or when you’re pitching to prominent websites.

Step 4: Research the right topics

No matter your outreach approach, you usually need to pitch guest post topics. With basic personalization, you suggest topics only to the websites that reply to you. But with the hyper-personalized email approach, you propose topics in the first email you send.

Top-tier websites typically only accept specific types of guest articles. Find the website’s editorial guidelines by googling “[company name] + guest post” and see their requirements.

Let’s look at HubSpot as an example. They only publish marketing experiments, original data analyses, or super detailed tactical guides.

Similarly, writing a guest article for Zapier’s blog requires specific experience. Generic topics won’t make the cut.

Buffer takes things a step further by opening rounds for guest posting under specific themes.

Following each website’s requirements increases your chances of landing a successful pitch. But most websites are open to a broader range of suggestions.

Some editors have a list of keywords or topics they want to target. They may share it with you so you can choose a topic to write on based on your expertise.

Alternatively, you can bring your own guest post ideas. When that’s the case, you can use a keyword gap analysis to uncover relevant topic ideas.

How to do a keyword gap analysis with Semrush

Let’s say you want to pitch a guest article to monday.com. Here’s how to go about it:

  • Go to Semrush’s SEO tools and select Keyword Gap. Add the URL of monday.com’s blog along with the URLs of leading competitors’ blogs, then click Compare.
  • Next, filter out the keywords.

Look only at keywords where competitors are ranking in the top 100 results.

Limit keyword search volume to a maximum of 2,000. This filters out broad, highly competitive terms that typically require long-form, comprehensive guides to rank. (Both filters can also be applied to an exported report; see the sketch at the end of this step.)

  • In the keywords report, choose Missing to see keywords that competitors are ranking for but monday.com isn’t. This is their keyword gap.
  • Look deeper into individual keywords that seem interesting and match your expertise. 

For example, “what is time boxing” has 49% keyword difficulty.

  • In the search bar, add the domain URL to get a personalized keyword difficulty calculation. The goal is to find keywords for which your article has real potential to rank.

After selecting “monday.com,” you see the site has low topical authority for “what is time boxing,” and ranking for it would be very hard.

Looking at “cost management in project management,” the Personal Keyword Difficulty is 60%. While that’s still high, there’s more to consider.

  • Check how your target domain compares against other websites ranking for this keyword. 

Monday.com’s Authority Score (AS) is 67, while the average in the top 10 is AS 52. Despite this being a competitive keyword, with the right content, monday.com has real ranking potential.

  • Double-check the website isn’t targeting this keyword already. Sometimes, the website already has content on a similar topic — they’re just targeting a variation of your keyword.

To do this, use the “site:” search operator and add your keyword into Google search.

In this case, “task priority” came up in the keyword gap analysis. While monday.com doesn’t have an article with this keyword in the H1, it does have very similar content on how to create a priority list or prioritize tasks.

  • Select three to four keywords that would make sense for the website to target. This ensures that the website editors will have enough options to choose from. If you put all of your eggs into one topic idea, it might not land. But three or four ideas increase your chances of success.
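If you'd rather apply those filters programmatically, here's a minimal sketch that assumes you've exported the keyword gap report to CSV. The column names (keyword, volume, competitor_position) are assumptions; match them to your tool's actual export:

```python
import csv

MAX_VOLUME = 2000   # filter out broad, highly competitive terms
TOP_N = 100         # only keep keywords competitors rank in the top 100 for

candidates = []
with open("keyword_gap_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        volume = int(row["volume"])
        best_position = int(row["competitor_position"])
        if volume <= MAX_VOLUME and best_position <= TOP_N:
            candidates.append((row["keyword"], volume, best_position))

# Sort by volume so the strongest-demand gaps surface first.
for keyword, volume, position in sorted(candidates, key=lambda r: -r[1]):
    print(f"{keyword}: volume {volume}, best competitor position {position}")
```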

Step 5: Create your extra value proposition

Adding extra value is about what else you can bring to the table besides guest content.

  • Are you an established author in the site’s niche?
  • Do you have a social media following that would be interested in this piece?
  • Are you running a relevant newsletter?
  • Or do you participate in a private community that cares about this topic?

Your extra value proposition is unique to your profile, and different value props can appeal to different websites.

For example, I have 11,000 followers on LinkedIn. When reaching out to a project management tool’s blog editor, I can mention that 54% of my followers are founders, executives, or senior-level professionals in small to mid-sized companies — the very people responsible for managing processes and tools within their organizations.

If I’m personalizing this pitch for a lead-generation blog, I can highlight that 35% of my audience works in the marketing or advertising industry.

Step 6: Prepare your emails

When it comes to your emails, you need to consider the subject line, the email body, and follow-ups.

In simple terms: 

  • The subject line is what gets your email opened.
  • The email body gets you replies.
  • The follow-up gets you a second chance.

According to BuzzStream’s analysis of six million emails, the best-performing subject lines:

  • Have 9-13 words and 71+ characters.
  • Have emojis.
  • Mention the website name (but not the person’s name).
  • Use title case (vs. sentence case).

On to the email body: Keep your emails concise and skimmable. Editors rarely have time for long messages.

Finally: follow-ups. Statistically, the more you follow up, the higher your overall campaign reply rate. Some people reply after the first follow-up, others after the third.

My recommendation? Limit follow-ups to two. A third one feels too pushy.

Step 7: Send your outreach emails

You’ve done a lot of preparation work. It’s finally time to send your emails. Here’s what to consider:

Send days 

An analysis of 85,000 personalized emails showed the best day to send a cold email is Monday, closely followed by Tuesday and Wednesday. These are the days with the highest email open and reply rates.

Send times

The same study suggests you should send your emails between 6 and 9 a.m. PT (9 a.m. and 12 p.m. ET). But since most editors are based in different countries, aim to send your email before noon in their local time.

Unsubscribe option

Always give recipients a clear way to opt out of more emails. Without an unsubscribe option, recipients may mark your message as spam. This can damage your sender reputation and reduce future deliverability.

Step 8: Track and adjust

Most outreach tools allow you to track open, reply, and success rates. Let’s break down what each metric tells you.

  • Open rate is the percentage of recipients who open your email. Your subject line, preview text, sender name, and domain reputation directly influence this number.
  • Reply rate is the percentage of recipients who respond to your email. Exclude automatic replies (like out-of-office messages) to avoid inflated performance numbers. Your email body, topic relevance, and positioning drive this metric.
  • Success rate is the percentage of sent emails that result in a published guest post. Your topic selection, communication with the editor, and adherence to editorial guidelines are some of the aspects that influence success rates.

Track these metrics to identify weak points in your outreach campaigns.
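For reference, here's how those three rates are typically calculated, sketched with made-up campaign numbers. Note the auto-reply exclusion on reply rate:

```python
sent = 200           # delivered outreach emails
opened = 120
replies = 34         # all replies received
auto_replies = 6     # out-of-office and similar; excluded from reply rate
published = 18       # pitches that became published guest posts

open_rate = opened / sent
reply_rate = (replies - auto_replies) / sent
success_rate = published / sent

print(f"open rate:    {open_rate:.0%}")     # 60%
print(f"reply rate:   {reply_rate:.0%}")    # 14%
print(f"success rate: {success_rate:.0%}")  # 9%
```

Keeping all three rates on the same denominator (emails sent) makes campaigns directly comparable over time.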

After you establish a baseline, run controlled A/B tests. Send different versions of your campaign to similarly sized groups and compare performance. Change only one variable at a time so you can clearly measure its impact.

Test ideas such as:

  • Subject line with an emoji vs. without.
  • First email with an extra value proposition vs. without.
  • Three suggested topics vs. four.
  • One follow-up vs. two follow-ups.

Small improvements across different elements of your campaign can compound into measurable gains in success rate.

Step 9: Build relationships with editors

I mentioned I’ve worked on more than 350 guest articles. But that doesn’t mean they were all published on different websites. When you provide quality, you’re very likely to build lasting relationships that result in ongoing work.

That’s one reason I use keyword gap analysis to choose topics. I target keywords that the website has real potential to rank for. When an article brings meaningful traffic, it becomes much easier to pitch the next one.

To establish lasting relationships with editors:

  • Provide exceptional content: Structure the article around search intent. Create original value with custom visuals, expert quotes, and practical examples. Support the publisher’s internal linking by adding multiple links to other resources on their website. Ensure perfect grammar and spelling.
  • Support the article after publication: Promote it through your social media, newsletter, or community. When appropriate, link to it from other relevant content you write.
  • Be reliable and easy to work with: Communicate clearly, respect editorial guidelines, and meet every deadline.

My guest posting template with an 18% success rate

Below is the guest post outreach template that has delivered the strongest results in my campaigns.

Between 2023 and 2025, I sent more than 300 pitches using variations of this template, primarily to content managers at B2B SaaS companies in the marketing and HR niches. It generated a 19% reply rate, and 18% of sent emails resulted in a published guest post.

Subject: Fresh content ideas for [Company Name]

Hi [First Name],

My name is [Your Name], and I’m the [Your Job Title] at [Your Company], a [short company description].

I’m reaching out to see if [Company Name] is open to guest contributions. I have extensive experience in [your expertise area], having worked on projects for brands such as [Brand 1] and [Brand 2].

Here are a few topic ideas I’d love to propose:

keyword: [primary keyword 1], US search volume: [search volume]

[Proposed Article Title 1]

keyword: [primary keyword 2], US search volume: [search volume]

[Proposed Article Title 2]

keyword: [primary keyword 3], US search volume: [search volume]

[Proposed Article Title 3]

To learn more about my background, you can view my [LinkedIn profile link] or review articles I’ve written for [Publication 1], [Publication 2], and [Publication 3].

If the article is a fit and gets published, I’d be happy to promote it to my community of [audience description or size].

Looking forward to your thoughts,

[Your Name]

Guest blogging caveat to consider

Your author profile directly influences your approval rate.

If you’re just starting out and don’t have a portfolio of published work, editors will hesitate to approve your topics. Start by reaching out to small or mid-sized industry blogs.

As you build your portfolio, pitching becomes easier. Publishing on recognized industry websites and creating content that drives measurable results strengthens your credibility and improves your success rate over time.

Bottom line: Invest in your author profile. That’s your biggest asset for successful guest blogging.

The latest jobs in search marketing

3 April 2026 at 22:58
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Description Ready to move beyond execution and help shape the strategy behind high-impact link building? Rankings.io helps law firms compete in some of the toughest search markets in the country. Off-site SEO is central to how we do that, and we’re looking for an experienced Link Building Manager to help lead it. This role sits directly […]
  • This role ensures Inflow’s expertise is translated into business results through proactive communication, clear storytelling, strategic growth management, and relentless focus on client satisfaction and retention. About Inflow Inflow is an established agency with a 17 year track record of providing top-tier results to our clients. Our team is our greatest asset and we strive […]
  • About the Role WebFX is seeking an entry-level candidate for our Marketing team! Our ideal candidate has a bachelor’s degree (or will soon have one!), a track record of strong academics, and is excited about all things marketing and client relationship-building. Related experience is awesome to have, but never required – we’ll train you on […]
  • Position Summary: The Senior Content & Growth Strategy Manager plays a critical role at the intersection of Marketing, Digital Engagement, and Commercial Growth. This position is responsible for translating consumer intent, market dynamics, and brand objectives into a coherent content ecosystem that drives measurable business impact. Acting as a strategic bridge between marketing, product, and […]
  • Job Description Ushur delivers the world’s first Customer Experience Automation™ platform purpose-built for regulated industries. Designed to enable seamless self-service, Ushur infuses intelligence into digital experiences to create more delightful and impactful customer interactions. Backed by robust compliance-ready infrastructure and enterprise-grade guardrails, Ushur powers vertical AI Agents tailored for healthcare, financial services, and insurance. With […]
  • Job Description Digital Marketing Specialist Salt Lake City, UT | Hybrid | $70,000 / year + discretionary bonus About the Role We are a fast-growing company looking for a driven, well-rounded, full-time Digital Marketing Specialist to join our expanding team. This is an exciting opportunity for a self-starter who thrives in a dynamic environment, embraces […]
  • Job Description Healthcare is increasingly unaffordable for many Americans. For those who can afford it, they are in a health insurance system that has become more confusing, restrictive, and lower value with each passing year. Here at WeShare our mission is to bring better healthcare to America at a better price. We offer consumers a […]
  • Job Description Salary: 60K – 70K What You’ll Do This is a strategic mid-level digital marketing role focused on driving measurable growth through multi-faceted campaign management. You will own the full lifecycle of multi-platform selfservice digital campaigns, from strategic planning and execution to optimization and performance analysis. This role requires a data-driven professional who can […]
  • Job Description Position: Digital Marketing Specialist Location: Schaumburg, IL Years of Experience: 3-5 Years About RTM: RTM Engineering Consultants is an MEP, Civil and Structural engineering firm that goes beyond the conventional consulting role. We forge deep partnerships with our clients by aligning with the goals, processes and people at each organization. By integrating our […]
  • Job Description Digital Marketing Specialist Location: Remote (United States) Employment Type: Full Time | Exempt Reports To: Director of Marketing Technologies Work Authorization: Must be authorized to work in the U.S. without sponsorship Role Summary The Digital Marketing Specialist supports the execution of digital content across Catalyst Acoustics Group’s portfolio of brands. This role will […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Description: Balance Health, a national leader in podiatry, is seeking a dynamic and analytical Paid Media Manager to orchestrate paid digital marketing. The Paid Media Manager will be responsible for driving new patient volume across paid channels from Google Ads (Search, Pmax, Demand Gen, etc.) to Meta Ads (Facebook, Instagram) to paid marketplaces (ex. ZocDoc) […]
  • Job Description Benefits: 401(k) Bonus based on performance Competitive salary Dental insurance Free food & snacks Health insurance Opportunity for advancement Performance Marketing Specialist Irvine, CA Working Capital Marketplace (WCMP) About WCMP Working Capital Marketplace is a fast-growing financial services company focused on helping small business owners access the capital they need to scale. We […]
  • Description: Paylocity is an award-winning provider of cloud-based HR and payroll software solutions, offering the most complete platform for the modern workforce. The company has become one of the fastest-growing HCM software providers worldwide by offering an intuitive, easy-to-use product suite that helps businesses automate and streamline HR and payroll processes, attract and retain talent, […]
  • Whizz is an innovative company offering rental, lease‑to‑own, and subscription models for electric bicycles for couriers and last‑mile delivery services. The company strives to increase access to mobility for all participants in delivery platforms, optimise transportation time and costs, and create reliable, eco‑friendly solutions for urban logistics. Whizz continues to expand its presence in key […]
  • Your Role in Helping Us Shape the Future U.S. News & World Report is a multifaceted digital media company dedicated to helping citizens, consumers, business leaders and policy officials make important decisions in their lives. We publish independent reporting, rankings, data journalism and advice that has earned the trust of our readers and users for […]

Other roles you may be interested in

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Senior Brand Insights Manager, Derflan Inc (Remote)

  • Salary: $181,400
  • Own and evolve global brand tracking programs across 11+ international markets
  • Lead quarterly brand pulse initiatives across 13+ locales, ensuring rigor, consistency, and actionable insights

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Digital Marketing Manager, 10x Health System (Scottsdale, AZ)

  • Salary: $110,000 – $120,000
  • Measure and report on the performance of all digital marketing campaigns against goals (ROI and KPIs).
  • Document and streamline digital marketing processes to scale the team and improve operations.

Marketing Manager – SEO & GEO, Care.com (Hybrid, Austin, Texas)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Digital Marketplace Manager, Venchi (Hybrid, New York, NY)

  • Salary: $120,000 – $130,000
  • Define and execute channel-specific and cross-marketplace strategies, balancing brand positioning, commercial performance, and operational efficiency.
  • Manage Amazon advertising across Sponsored Products, Brands, and Display campaigns.

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 – $110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyzing advertising performance data with related ROAS & TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Google is fixing a Search Console bug that inflated impression counts

3 April 2026 at 20:56
Google Search Console bug

Google is fixing a long-running Search Console bug that inflated impression counts. As the fix rolls out, reported impressions will decrease.

What happened. A logging error caused Google Search Console to over-report impressions starting May 13, 2025. Google today updated its Data anomalies in Search Console page:

  • “A logging error is preventing Search Console from accurately reporting impressions from May 13, 2025 onward. This issue will be resolved over the next few weeks; as a result, you may notice a decrease in impressions in the Search Console Performance report. Clicks and other metrics were not affected by the error, and this issue affected data logging only.”

A Google spokesperson told Search Engine Land:

  • “We identified a reporting error in Search Console that temporarily led to an over-reporting of impressions from May 13, 2025 onward. Bug fixes are being implemented to ensure accurate reporting.”

What’s changing. Google is deploying fixes that will change how impressions are recorded and reported. As the rollout continues, you’ll likely see a drop in impressions in the Performance report. Clicks and other metrics aren’t affected.

The timeline. The issue began May 13, 2025, and persisted until now. Google said the correction will take several weeks to fully roll out across reporting.

Why we care. If your Google Search Console impressions change in the coming weeks, it will likely be due to this bug fix.

If you can’t say what problem your brand solves, AI won’t either

3 April 2026 at 19:00
The compressed customer journey is exposing your search strategy problem

Customer journeys are collapsing into a single moment of evaluation. David Edelman recently described this shift as the convergence of behaviors that used to happen separately.

As decisions compress, brands need to be clearer about what they are trying to solve for the customer. Many organizations are increasing activity instead, without sharpening the underlying strategy.

The shift behind the compressed journey

Edelman’s argument, outlined in his March 2026 Think with Google essay, is built around a shorthand developed by Boston Consulting Group and Google: streaming, scrolling, searching, and shopping.

His central insight is that generative AI has snapped these four behaviors together so tightly that the old model — awareness, then consideration, then purchase, each in its own tidy lane — no longer describes reality. Consumers bounce between platforms, multitask, and shift fluidly between entertainment and intent.

The data point that stopped me cold: people are now asking AI-enabled search engines much longer, richer, more emotionally descriptive queries. Not keywords. Paragraphs. They share context, constraints, preferences, and urgency. 

The AI then breaks those queries into multiple search streams and synthesizes results in real time. What once required dozens of browser tabs — hours of work — now takes seconds.

Edelman draws two implications from this. 

  • The fundamental unit of competition has changed. Brands are now evaluated as solutions to specific situations, not as products within a category.
  • The familiar demand framework — create demand, capture demand, and convert demand — must be treated as simultaneous, not sequential. You can’t do them in order anymore because the journey doesn’t proceed in order.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

Enter Pogo — and Kelly’s uncomfortable truth

Walt Kelly gave us Pogo, the philosophical possum of Okefenokee Swamp, whose most celebrated utterance was the 1970 Earth Day poster declaration: “We have met the enemy, and he is us.”

Kelly’s most persistent target was not any external villain, but the human tendency to mistake activity for progress. His characters were always busy — scheming, planning, campaigning, reorganizing — and almost never clear on why.

Another line often attributed to him captures it just as well: “Having lost sight of our objectives, we redoubled our efforts.”

Read Edelman’s argument through that lens, and the pattern becomes harder to ignore. He describes brands racing to keep up with compressed customer journeys — more content, more specificity, more “answer audits,” more presence across platforms and formats. The advice is sound. 

But without clarity about what a brand is actually trying to solve for the customer, more content and more channels are just Pogo’s swamp creatures running faster through the same mud.

Dig deeper: Why clarity now decides who survives

The compression trap: When speed substitutes for clarity

Edelman is right that the journey is compressing. But compression can serve two different masters. 

For brands with crystal-clear positioning — brands that genuinely know what problem they solve and for whom — compression is a gift. It helps a consumer build confidence faster. 

Warby Parker, which Edelman cites approvingly, is a clean example: its home try-on program, transparent pricing, and frictionless returns all express a single, coherent answer to a specific question: “Can I trust buying glasses without trying them in a store?” Every element of that brand experience is aimed at one objective.

For brands that lack that clarity — brands that have accumulated messaging layers over years of campaign-by-campaign marketing — compression is a disaster. The consumer’s AI-enabled query now synthesizes everything a brand has ever said across every channel, every format, every platform. 

If those signals are inconsistent, contradictory, or simply incoherent, the synthesized answer will be a muddle. The consumer will move on. In Pogo’s swamp, the creature that runs fastest without knowing where it’s going simply reaches the wrong destination sooner.

Edelman gestures at this when he writes that brand should be understood as “the sum of signals that make a company recognizable as a solution.” 

He’s right. But I’d push harder: the compression of the customer journey isn’t primarily a technological problem. It’s an objectives problem. 

Most brands can’t clearly articulate, in a single sentence, what specific situation they are the best answer to. If you can’t say it plainly, AI certainly can’t infer it.

Dig deeper: Why AI availability is the new battleground for brands

Pogo would recognize the funnel debate immediately

One of Edelman’s shrewder observations is that some of his clients have constructed a “false trade-off between brand and performance.”

Marketing departments argue over budget allocations between brand-building and demand generation as though they are fundamentally separate activities. This is, as Kelly’s characters would say, a very impressive argument that completely misses the point.

Kelly spent years satirizing exactly this kind of internal organizational warfare — committees forming to study committees, campaigns launched to counteract the confusion caused by previous campaigns. 

Organizations are often earnest and busy, and just as often distracted by their own processes. The brand-versus-performance debate is the marketing equivalent of explaining why two teams can’t collaborate because their mandates are structured differently.

In a compressed journey, brand is performance.

  • The clarity of a brand’s positioning determines whether it surfaces as the right answer to a specific query.
  • The quality of its content determines whether it captures demand at the moment of confidence.

These are the same thing viewed from two angles. 

The brands winning in Edelman’s compressed journey world — Nike, Glossier, IKEA, Warby Parker — don’t appear to be having this argument internally. They have simply decided what problem they solve and built everything around that answer.

Dig deeper: Brand perception: How to measure and shape it

The ‘answer audit’ is only half of the solution

Edelman recommends something he calls a “recurring answer audit”: examine what a consumer would actually encounter across social discovery, video search, retail listings, and AI assistants for their most common customer scenarios. Gaps and inconsistencies, he says, quickly become visible.

This is excellent advice. It’s also, if I’m being blunt in the spirit of Kelly, only half the medicine. An audit shows you where your signals are inconsistent. It doesn’t tell you what they should be consistent about. 

You can audit your way to a perfectly coherent set of messages that still fail to answer any real consumer question, because the messages were never designed around actual consumer situations in the first place.

You need to audit your objectives. What, precisely, is your brand the solution to? Not the product category. Not the feature set. The actual situation.

The specific tension in a person’s life that this brand, and not a competitor, is best positioned to resolve. Until that question is answered with unambiguous clarity, the answer audit is tidying the swamp without draining it.

Dig deeper: How to apply ‘They Ask, You Answer’ to SEO and AI visibility

What Edelman gets completely right

None of this is meant to diminish what Edelman has written. On the contrary, his framework for thinking about the compressed journey is the most coherent I’ve seen in years. 

Three of his observations deserve to be tattooed somewhere visible on the forearms, wrists, hands, necks, and behind the ears of every marketing professional.

‘Streaming and scrolling create possibility. Searching structures choice. Shopping happens wherever confidence peaks.’ 

That’s not just a description of a media landscape. It’s a theory of consumer psychology. Confidence is the triggering condition for a purchase. If you’re optimizing for impressions without asking whether those impressions build confidence, then you’re very busy going nowhere.

Brands must shift from ‘product language’ to ‘solution language.’ 

This sounds simple and is, in practice, revolutionary. The default mode of most brand organizations is to lead with what they make. 

Edelman says lead with the situation you resolve. That is a fundamental reorientation of how marketing is conceived and executed.

‘Are you the customer’s solution? Will they know it?’ 

Two questions. The first is a strategy question. The second is an execution question. Most marketing fails by answering the second question without having honestly answered the first.

Dig deeper: The authority era: How AI is reshaping what ranks in search

We have met the enemy

Kelly’s Pogo ran for 25 years, and the swamp never did drain. The characters were charming, the satire was sharp, and the folly continued because the creatures were incapable of distinguishing between effort and progress. Kelly found that funny.

Marketing history, filled with elaborate, energetic, and expensive campaigns from brands that no longer exist, is less amusing.

Edelman has given us a useful map of the compressed customer journey. It’s fast, complex, AI-mediated, and it rewards clarity above all else. What he understates — though it runs beneath the surface of his argument — is that compression is also a reckoning.

Brands built on accumulated momentum, legacy awareness, and category inertia will find that a faster journey exposes their vagueness more brutally than a slower one ever did.

The compressed customer journey demands better thinking. And better thinking, as Pogo understood, begins with recognizing that the problem isn’t out there in the swamp. It’s in here — in the planning meeting, the brand brief, the objectives slide that everyone in the room suspects isn’t quite right, but no one challenges.

With apologies to Pogo, “We have met the enemy of the compressed customer journey. And it’s our inability to clearly say what we are actually for.”

Strategy is the new keyword: What drives paid search performance now

3 April 2026 at 18:00
Strategy is the new keyword: What drives paid search performance now

For most of my three-decade career, the keyword drove paid search. Today, it’s one of many signals. Strategy is what determines performance.

Keywords were what you researched for weeks, then built your strategy around based on what you uncovered or hypothesized. You managed everything from bids to matched search terms to negatives and the audiences you targeted. Your career was built and measured by how well you structured around a keyword.

Paid media has always been deeply tactical, with Google driving the majority of search. You were methodical about placements, audiences, bids, headlines, extensions, and keyword-stuffed URLs.

This model worked. It gave practitioners the control they needed to get results.

You could see which search queries triggered ads and what they cost. If there was value, you expanded or doubled down. You might over-segment ad groups by theme or build campaigns around keyword audiences, then layer in modifiers and match types to drive 1200% ROAS.

What changed across platforms

Advertising has converged on a single structural shift: AI, or more precisely, automation built into the platforms. These systems now handle targeting, bids, and creative assembly that practitioners used to manage manually.

The keyword hasn’t disappeared. It’s moved from the primary optimization lever to one signal among many that platforms use to deliver ads based on user behavior and the auction.

On Google, AI Max for Search is the clearest example. It’s not a new campaign type. It’s an optimization layer, similar to Smart Bidding, that changes how keywords function inside a search campaign. Google’s AI uses your existing keywords, copy, and landing pages, including H1s and H2s, as signals rather than instructions to find and serve ads.

Google reports that advertisers using AI Max see 14% more conversions at a similar CPA or ROAS, with campaigns using exact and phrase match seeing lifts of up to 27%. Pair it with Performance Max across Search, Shopping, YouTube, Display, Discover, Gmail, and Maps, or Demand Gen for upper-funnel awareness, and the system expands further.

Dig deeper: Google Ads no longer runs on keywords. It runs on intent.

The new primary levers

When I say strategy is the new keyword, I’m not speaking in abstractions. I’m saying there are specific inputs that now determine where your ads show up, who sees them, and whether they convert. These inputs have largely replaced the keyword list in paid media as the highest-leverage control.

The distinction matters. Strategy dictates the activity needed to achieve your goal and vision. Tactics are the execution. What’s shifted is that platforms now handle the tactics, and our job is to define the strategy that guides them.

Conversion data quality, including server-side tracking, has become the most important input in any account. Google’s Smart Bidding and other platform optimization systems depend on conversion or event signals to learn and improve.
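One common form of server-side tracking is posting conversion events directly from your server. Here's a minimal sketch using GA4's Measurement Protocol; the measurement ID, API secret, client ID, and event values are placeholders:

```python
import requests

# Placeholder credentials; use your own GA4 measurement ID and API secret.
MEASUREMENT_ID = "G-XXXXXXXXXX"
API_SECRET = "your-api-secret"

ENDPOINT = (
    "https://www.google-analytics.com/mp/collect"
    f"?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}"
)

payload = {
    # client_id ties the server-side event back to the browser session.
    "client_id": "555.1234567890",
    "events": [
        {
            "name": "generate_lead",
            "params": {"value": 150.0, "currency": "USD"},
        }
    ],
}

response = requests.post(ENDPOINT, json=payload, timeout=10)
# The endpoint returns 2xx even for malformed hits; validate payloads
# against the /debug/mp/collect endpoint during development.
print(response.status_code)
```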

You can prioritize which conversions matter more, whether it’s a lead from a high-value market versus a newsletter sign-up, or a new customer versus a returning one. These distinctions used to be handled through keyword segmentation and bid modifiers. Now they’re handled through strategic conversion configuration, where value is assigned or determined up front.

First-party data, customer lists, CRM data, website behavior, and offline imports have become the equivalent of keyword research. The richer and cleaner the data you feed these systems, the better they perform. It’s less about search volume and more about understanding your own customer data, making sure it’s structured properly, and connected to the platforms you advertise in.
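For example, Google’s Customer Match expects email addresses to be normalized (trimmed and lowercased) and SHA-256 hashed before upload. Here’s a minimal sketch of that preparation step; the file and column names are hypothetical:

```python
import csv
import hashlib

def normalize_and_hash(email):
    """Prepare an email for a Customer Match upload: trim, lowercase, SHA-256."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical file and column names; adapt to your CRM export.
with open("crm_export.csv", newline="") as src, \
        open("customer_match.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["Email"])
    for row in csv.DictReader(src):
        writer.writerow([normalize_and_hash(row["email"])])
```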

Creative is a beast. It’s moving from a production deliverable to a strategic signal.

For Demand Gen, Display, and Meta, your creative, functionally speaking, is your targeting. Platforms read your images, video, and copy to determine who sees your ads. Google AI Max generates headline and description variations based on your landing page content, your H1s, H2s, and so on.

The strategic questions now carry the weight the keyword used to: what themes resonate with which segments, what visual approaches drive action at different funnel stages, and what messaging frameworks allow AI to generate variations.

Landing page and website quality have become paid media inputs, not just a thing for UX or CRO. AI Max reads your page to determine what queries to match and which headlines to generate. Final URL expansion in AI Max and Performance Max sends users to the page AI deems most relevant. Poor post-click experiences, thin content, and slow load times can tie back to lower conversion rates.

All of this limits AI’s ability to serve your ads.

Dig deeper: In Google Ads automation, everything is a signal in 2026

What it means for practitioners

Our roles have shifted.

The most valuable work is no longer managing keyword lists or adjusting manual bids. I have strong opinions on that, but I’ll simply ask: what else could you be doing with your time instead of manually adjusting bids for thousands of keywords?

It’s the strategic framework that AI systems operate within: ensuring data quality, defining creative strategy, building measurement into your teams, and knowing when the LLM is wrong and you, as an SME, need to adjust course.

The job of subject-matter experts is to guide the machines. That guidance takes the form of conversion architecture, audience signal quality, creative frameworks, and brand guardrails, rather than keyword lists and bid sheets.

This means investing time in understanding how:

  • These systems work.
  • Platforms learn.
  • LLMs prioritize.

It means weighing the tradeoffs in the signals we choose to prioritize. It means building robust first-party data, developing frameworks across audiences, creative, and UX, and feeding all of that into AI systems. And it means accepting that the keyword era is giving way to something fundamentally different.

The practitioners who treat strategy as their primary lever, who invest their energy in architecture and design rather than lever-pulling, will be best positioned as this shift continues.

The keyword list isn’t gone. It’s no longer the center of the work. Strategy is.

Dig deeper: 4 times PPC automation still needs a human touch

Building high-ROAS ecommerce search campaigns in Google Shopping and Amazon Ads

3 April 2026 at 17:00

Paid search is often the highest-leverage ecommerce growth channel, delivering strong conversion rates and efficient spend when structured effectively.

Google Shopping and Amazon Ads capture high-intent demand while generating the data needed to scale it. These platforms connect search queries directly to revenue, enabling you to identify which terms drive sales and allocate budget accordingly.

The real challenge is organizing campaigns to act on that signal.

Why paid search works so well for ecommerce

Paid search performs differently from other channels because it combines two advantages: intent and data.

  • Intent: Google and Amazon are search-driven environments. When someone searches for a product, they’re signaling exactly what they want. There’s no inference required, no audience modeling, and no interrupting someone mid-scroll. You’re providing the answer to a question the customer is already asking.
  • Data: Both Google Shopping and Amazon Ads provide keyword-level revenue data that most other advertising platforms can’t. You can see which search terms generated sales, at what conversion rate, and at what cost. Amazon goes further, offering clearer and more direct revenue visibility at the product and category level.

Together, these create a powerful feedback loop. Search terms tied to revenue let you shift spend toward higher-converting queries, improving ROAS over time. On Amazon, this loop extends further—stronger conversion rates can improve organic rankings, lowering future acquisition costs.

Success in search campaigns depends on building multi-funnel structures. The concept is consistent across platforms, but implementation varies by campaign types, settings, and bidding strategies.

The architectures outlined below use wide-net, low-cost discovery campaigns to map the full search landscape, then funnel high-intent, proven converters into dedicated performance campaigns with appropriate bids. The result: stronger ROAS, improved rankings, and more scalable growth.

Dig deeper: Ecommerce PPC: 4 takeaways that shape how campaigns perform

Google Shopping: The priority sculpting method

The priority sculpting method is based on Martin Roettgerding’s approach, with adaptations over the years. It uses a three-layer campaign structure to route search queries into different campaigns based on performance.

This lets you control spend on discovery keywords and maximize investment in high-performing, high-intent terms. The key is Google Shopping priority settings — “high-priority” campaigns serve first at lower bids.

Layer 1: Brand

  • The goal is to capture branded search traffic.
  • This layer uses a Performance Max campaign and can also use standard Shopping.
  • It remains assetless to keep it focused on Shopping inventory and prevent bleed into Display and YouTube.
  • It’s set with a high ROAS target, since PMax naturally gravitates toward brand traffic, especially at high targets.
  • Alpha terms are negated in this campaign, as they may also carry high ROAS and would otherwise be absorbed here instead of in the alpha campaign.

Layer 2: Catch-all

  • The goal is to cast a wide net, test search terms cheaply, and generate conversion data.
  • This layer uses standard Shopping with a high-priority setting to catch non-branded traffic.
  • Bids are kept low to control costs.
  • Brand terms and alpha terms are negated using a negative list.
  • Over time, low-performing terms are also negated once they’ve been tested and failed.

Layer 3: Alpha

  • The goal is to dedicate budget to best-performing terms and generate strong ROAS.
  • This layer uses standard Shopping with a low-priority setting and high-ROAS bidding settings.
  • Because converted terms, or alpha terms, are negated in the catch-all campaign, those queries fall through to this campaign, where you bid aggressively on what’s already working.
  • Brand terms can also be negated if needed.
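Condensed into a config-style sketch, the three layers look like this. It summarizes the settings above; the labels are illustrative shorthand, not API fields:

```python
# The three-layer priority sculpting structure at a glance (illustrative values).
PRIORITY_SCULPTING = {
    "brand": {
        "type": "Performance Max (assetless)",
        "bidding": "tROAS, high target",
        "negatives": ["alpha terms"],
    },
    "catch_all": {
        "type": "Standard Shopping",
        "priority": "high",  # serves first, at low bids
        "bidding": "low bids to control discovery costs",
        "negatives": ["brand terms", "alpha terms", "tested non-converters"],
    },
    "alpha": {
        "type": "Standard Shopping",
        "priority": "low",  # catches queries negated out of the catch-all
        "bidding": "tROAS, aggressive on proven terms",
        "negatives": ["brand terms (optional)"],
    },
}
```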

Dig deeper: 6 Google Ads mistakes that hurt ecommerce campaigns

The key considerations in this structure include the following:

Routing logic using negatives

The system relies on routing logic: Google’s priority settings determine which campaign serves a query first. Negative keywords in the catch-all push proven converters into the alpha, where bids are higher and budget is protected. At the same time, non-alpha terms run through high-priority campaigns at the lowest possible bids.

The method lives or dies on weekly search term negation. Two actions are done regularly (a sketch of the routine follows this list):

  • Negate non-converting terms in the catch-all. A good rule of thumb: once a term has more than 20 clicks and zero conversions, negate it. It has been tested, and removing it frees up budget for other search terms. This still requires judgment before negating: if a keyword is highly relevant, you might let it run longer.
  • Negate converted terms (alphas) from the catch-all so they fall through to the alpha campaign. Over time, the alpha accumulates a curated list of proven terms bid on aggressively, while the catch-all keeps finding new ones cheaply. It’s a compounding system.
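Here is a minimal sketch of that weekly routine, assuming search terms have been exported as simple records. The field names and thresholds are illustrative, not a Google Ads API schema:

```python
CLICK_THRESHOLD = 20  # clicks with zero conversions before a term is negated

def route_search_terms(search_terms):
    """Classify catch-all search terms during the weekly review."""
    to_negate, to_promote = [], []
    for term in search_terms:
        if term["conversions"] > 0:
            # Converted terms get negated in the catch-all so the
            # queries fall through to the alpha campaign.
            to_promote.append(term["query"])
        elif term["clicks"] > CLICK_THRESHOLD:
            # Tested and failed: negate to free up budget.
            to_negate.append(term["query"])
    return to_negate, to_promote

negate, promote = route_search_terms([
    {"query": "running shoes sale", "clicks": 34, "conversions": 0},
    {"query": "trail running shoes", "clicks": 18, "conversions": 3},
])
print(negate, promote)  # ['running shoes sale'] ['trail running shoes']
```

The output still deserves a human pass before anything is negated, for the relevance reasons noted above.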

Shared budgets

Shared budgets are critical. Layers 2 and 3 should run on a shared budget.

The system works only if they run together, because each query needs to be sculpted through the system. It won’t work with separate budgets: if the high-priority catch-all’s budget runs out, the alpha campaign becomes the first point of contact, and the query would likely serve from the alpha (at a higher bid) even though it isn’t a proven alpha term.

SKU separation

The system is designed to run across a single, shared set of SKUs: all three layers should target the same products. It’s recommended to start with all SKUs and build out from there.

Products that get buried in the main campaigns or operate at a different margin tier can be peeled off into their own mirrored catch-all/alpha pair, ring-fencing their budget. Only do this when there’s a clear reason. More campaigns mean more overhead and more fragmented data.

Feed quality

It’s important to optimize the feed, as Google relies heavily on product titles to understand a product’s context and decide which queries to serve it for.
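Because titles carry so much weight, many teams template them from feed attributes. Here’s a minimal sketch of that pattern, assuming a common brand + product type + attributes convention (the 150-character cap is Google’s Shopping title limit):

```python
def build_title(brand, product_type, attributes, max_len=150):
    """Assemble a Shopping feed title from structured attributes."""
    parts = [brand, product_type] + [a for a in attributes if a]
    return " ".join(parts)[:max_len]

print(build_title("Acme", "Trail Running Shoes", ["Men's", "Size 10", "Blue"]))
# Acme Trail Running Shoes Men's Size 10 Blue
```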

Amazon Ads: The multi-tier campaign architecture

Amazon’s campaign structure is more advanced than Google Ads’ and offers several advantages.

Amazon typically delivers higher conversion rates and more conversion data. Ad spend also drives rankings, with a clear, measurable link between ad spend and organic ranking.

Ads drive traffic, traffic drives conversions, and conversion rate drives organic rank. That makes Amazon Ads an investment in organic search.

Google Ads campaigns run across the whole catalog. On Amazon, you build campaigns at the SKU level, typically one SKU per campaign.

The structure uses three campaign tiers: research, performance, and ranking. Each has a distinct goal and is managed by adjusting advertising cost of sale (ACOS) targets to reflect different profitability goals.

Tier 1: Research 

  • Campaigns use broad and phrase match keywords, along with automatic targeting.
  • The goal is to cast a wide net and generate keyword ideas and variations.
  • ACOS tolerance is relatively high, since the goal is data, not profit.

Tier 2: Performance

  • Campaigns use exact match keyword targeting.
  • The goal is profit, with a competitive ACOS target below break-even.
  • Move proven converters from the research tier into exact match campaigns. Run your best keywords at efficient bids to maximize returns on what’s already working. This mirrors the alpha campaign in Google Ads.

Tier 3: Ranking or exposure

  • Use single-keyword campaigns (SKCs) with exact match—one keyword per campaign.
  • The goal is usually ranking, though it can shift over time.
  • For ranking, set aggressive bids with high ACOS tolerance (often 50%+). Push volume through high-value keywords to drive top organic positions. Once you reach positions 1–3 organically, pause those keywords.
  • Ranking campaigns are debated. If you’re already ranking, there’s no need to pay for visibility you get for free.
  • This layer doesn’t exist in Google Ads, where ad spend doesn’t influence rankings.

Dig deeper: Why your Amazon Ads aren’t delivering: 6 critical issues to fix

The key considerations in this structure include:

Bidding to an ACOS lever

With Amazon Ads, we bid toward an ACOS target. ACOS is the advertising spend as a percentage of revenue. Because Amazon data is so clean and conversion rates are high, we can calculate our bids to drive a certain ACOS.

The ACOS-based bidding formula: 

  • Target bid = (Revenue per click) x Target ACOS

Implementing ACOS bidding can be automated using software like Scale Insights. Different campaign tiers can be assigned different ACOS targets, and CPCs can be adjusted daily by the software.
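To make the math concrete, here is a minimal worked example. The conversion rate, order value, and ACOS targets are hypothetical:

```python
# Revenue per click = conversion rate x average order value.
conversion_rate = 0.10   # 10% of clicks convert (hypothetical)
avg_order_value = 40.00  # $40 revenue per order (hypothetical)
revenue_per_click = conversion_rate * avg_order_value  # $4.00

# Target bid = revenue per click x target ACOS.
for tier, target_acos in [("research", 0.40), ("performance", 0.25)]:
    bid = revenue_per_click * target_acos
    print(f"{tier}: target bid ${bid:.2f}")
# research: target bid $1.60
# performance: target bid $1.00
```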

Keyword routing

Similar to Google Ads, keywords are funneled through from research campaigns into performance or alpha campaigns. This can be done manually or automatically with Scale Insights using an import rule. 

The concept is the same: keywords that shine get imported down the funnel, while non-performers are phased out through testing.

The conversion rate signal

If a product’s conversion rate is below the market average on a given keyword, more spend is unlikely to improve its rank. Amazon usually surfaces the better-converting product.

The correct response is to fix the underlying issue: price, listing quality, imagery, or the product itself. Most advertisers skip this step and keep spending into a hole.

The ranking cannibalization rule

There are two strong views on ranking and cannibalization. Some argue that once your product ranks highly for a keyword on Amazon, you should reduce or stop ad spend. If you’re ranking organically, you can save on ads.

On the other hand, if a keyword performs well with strong ROAS, having two listings can outperform one. It increases your chances of a click. Ads also typically appear above organic listings, giving you higher placement.

Whichever view you take, the three-tier method lets you drive rankings through SKCs, then reduce or stop ad spend once you rank, if you choose.

How Google Shopping and Amazon Ads compare for ecommerce

The underlying logic for advanced campaign setup is the same across Google Shopping and Amazon Ads, with key differences beyond the core structure.

Similarities

  • Google Shopping (priority sculpting): Route queries to campaigns via priority settings and negatives. Discover converting terms in a catch-all at low cost. Graduate proven terms to the alpha campaign with a high tROAS. Review search terms regularly for negatives and alphas.
  • Amazon Ads (multi-tier architecture): Route keywords across research → ranking → performance. Discover new keywords in broad, phrase, and auto campaigns. Graduate proven terms to exact match for profitability. Review search terms regularly for negatives and imports to the lower funnel.

Differences

  • Google Shopping: Campaigns run across the whole feed, with high-margin products separated into ring-fenced budgets when needed. Bidding is ROAS-based. The product feed determines search term targeting; the advertiser can’t select terms.
  • Amazon Ads: Campaigns are built at the SKU level rather than across the whole catalog. Bidding is ACOS-based. Search terms are selected by the advertiser. Ads drive rankings, and you can save budget by monitoring organic rank.

Dig deeper: 5 reasons Amazon Ads is better than Google Ads for ecommerce



Which platform is right for your ecommerce strategy

Like all good answers, it depends heavily on your business and your goals. Both have advantages and disadvantages. We can say that:

  • Amazon Ads often perform better, delivering higher conversion rates and faster ranking and sales when intent is strong.
  • Google Ads is better for long-term brand building. It offers broader reach, potentially lower costs, and drives traffic to your own website, where you retain customer data.

The ideal is to run them together. Many brands launch on Amazon, then expand to their own websites with Google Ads.

Paid search for ecommerce is probably the most effective advertising avenue you can explore, and both platforms offer significant opportunities when implemented properly. Each has pros and cons; explore the details of these campaign structures and decide on the right implementation for your business.

Why AI search is your new reputation risk and what to do about it

3 April 2026 at 16:00

It used to be that Google searches opened up a world of questions. You searched, sifted through links, and came to your own conclusion.

Today, AI Overviews, ChatGPT, Perplexity, and other AI platforms compress multiple sources into a single, synthesized response. In the process, nuance is flattened, and certain viewpoints can be overrepresented.

This marks a fundamental shift in online reputation management. Search engines no longer just surface information; they shape it. The result is a rise in zero-click behavior, where users accept AI-generated answers without visiting underlying sources.

For brands, that changes the stakes. Visibility no longer guarantees influence. Even a No. 1 ranking can be bypassed if the narrative tells a different story.

AI narrative formation: How AI systems deliver users their answers

AI search engines now follow a new pattern for delivering answers. For the sake of this article, we’ll call it AI narrative formation. Here’s how it works.

Source pooling

AI systems pull from a wide range of sources. While you might expect trusted, peer-reviewed content, they often draw from Reddit, YouTube, review platforms, complaint forums, and social media sites like Instagram and TikTok.

Signal weighting 

Not all sources carry equal weight. A single trusted source can be outweighed by a large volume of lower-quality content. For example, a highly active Reddit thread filled with negative reviews may outperform a fact-checked source like Wikipedia.

Narrative compression

AI condenses dozens of inputs into a short, digestible summary. In the process, nuance is lost, and fringe cases can become dominant themes. A complex reputation may be reduced to: “Users say this company is not trustworthy.”

Continued reinforcement

These summaries don’t stay contained. They’re screenshotted, shared, and repeated across platforms. Those repetitions become new inputs, reinforcing the same narrative in future AI outputs.

Dig deeper: The authority era: How AI is reshaping what ranks in search

How a finance company’s solid reputation unraveled in AI search

To see how AI narrative formation works in action, let’s look at a use case.

My company recently worked with a finance organization to repair its online reputation. For this example, we’ll call it Company X.

Problems emerged for Company X with the rise of Google AI Overview. Previously, under traditional SERPs, Company X had a solid reputation. Users searching Google for reviews would find a 4.2 rating on Trustpilot, a strong company website with employee bios, and numerous positive blog reviews from trusted sources.

Google AI Overview changed that. How? By resurfacing an old Reddit forum centered on negative complaints about Company X.

When users asked Google, “What are opinions like about Company X?” AI Overview delivered a clear answer: “Company X has mixed reviews, with specific complaints regarding customer service.” But those customer service issues were resolved nearly a decade ago.

AI Overview pulled multiple reviews from that Reddit thread, combined them with strong negative phrasing, and factored in the lack of structured positive content to form a semi-negative impression. A new perception of Company X was created.

Why AI search amplifies reputational risk

We can dig deeper into how AI impacts reputational risk. Consider the following:

  • How negative AI narratives spread: In traditional search, users had to dig for negative results. With LLMs, those results can surface instantly, even when they’re defamatory or incorrect.
  • Hallucinations and misinformation: Most users are now aware of AI hallucinations, but they aren’t always easy to spot. Making matters worse, LLMs can present incorrect claims or factual inconsistencies with confidence.
  • The snowball effect: As discussed in narrative reinforcement, AI-generated answers get screenshotted, shared, and repeated across platforms. That repetition builds momentum, creating challenges ORM firms now have to manage.

A hard truth has emerged in ORM: The most accurate claim doesn’t rise to the top. The most repeated claim does.

Dig deeper: Generative AI and defamation: What the new reputation threats look like

A step-by-step guide to auditing AI-generated narrative formation

Let’s walk through another case to see how an AI-generated narrative can be audited.

CEO X is the founder of a SaaS company. He has an ongoing thought leadership presence and a strong reputation in his industry.

On a recent podcast appearance, one quote was taken out of context and aggregated across several platforms. The quote was framed as an opinion rather than a fact. Blog posts were written, and Instagram Live reactions spread online.

In no time, ChatGPT and Google AI Overview turned CEO X into a controversial figure.

Here’s a step-by-step guide to approaching that reputation management crisis.

Step 1: Mapping queries

We begin by identifying what search engines are saying about CEO X. We ask ChatGPT and Google AI Overview questions such as “What did CEO X say?” and “What is CEO X’s current reputation?” This helps us analyze the issues.
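For the LLM side, query mapping can be scripted. Here’s a minimal sketch using the OpenAI Python SDK; the model name and prompts are illustrative, and AI Overview responses would still be captured manually or with third-party tools, since there is no public API for them:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

queries = [
    "What did CEO X say on the recent podcast?",
    "What is CEO X's current reputation?",
    "Is CEO X controversial?",
]

for q in queries:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": q}],
    )
    # Log the first part of each answer for the audit trail.
    print(q, "->", response.choices[0].message.content[:200])
```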

Step 2: Capturing outputs

We identify the claims associated with CEO X. Google AI Overview and ChatGPT describe CEO X as a controversial figure who recently made comments in poor taste. The narrative formed across both platforms is trending negative.

Step 3: Delving through sources

Next, we analyze the sources AI Overviews and ChatGPT rely on. We look for whether they’re outdated, repetitive, or low quality. (In CEO X’s case, the latter two apply.)

Step 4: Analyzing the narrative gap

We identify the gap between AI’s narrative and reality. 

  • What are CEO X’s actual views? 
  • What was the context of the quote? 
  • And what has his reputation been up to this point?

Step 5: Correcting and replacing sources

The final step is to replace or respond to those negative sources. Claims can be addressed directly on Reddit, Instagram, or other platforms spreading the narrative. Structured explanations should also be published through FAQs and policies, and third-party validation should be strengthened.

Dig deeper: How AI changes how we respond to negative reviews and comments

A new mindset: Reputation is now an output

Focusing solely on SEO rankings is no longer enough. We need to think in terms of narrative shifts and framing. That also means thinking in terms of inputs and outputs. 

Users aren’t evaluating individual pages. They’re engaging with AI-generated answers. Rather than managing what users find, we need to manage the answers AI systems deliver. That means strengthening what those systems rely on:

  • Publishing high-quality first-party content.
  • Earning credible third-party mentions.
  • Reinforcing positive customer reviews.
  • Addressing misinformation directly.
  • Improving structured data.
  • Maintaining accurate Wikipedia or Wikidata entries where applicable.

ChatGPT ads favor clarity over creativity, new data shows

2 April 2026 at 20:30
Optimizing for ChatGPT Shopping: How product feeds power GEO

The new ChatGPT ad format is standardizing, according to a new Adthena analysis of 40,000+ daily placements. What once felt experimental is becoming a disciplined, high-intent system for users already deep in decision mode.

The big picture. ChatGPT ads are converging on a short, structured, highly contextual style that favors precision over persuasion and utility over storytelling, marking a shift from creative-led advertising to real-time, intent-driven assistance.

By the numbers. Every word must carry weight and contribute directly to clarity or conversion:

  • The average headline clocks in at just 30 characters and around 5 words.
  • Body copy averages 116 characters and roughly 19 words.
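Used as soft guardrails rather than hard rules, those averages are easy to check while drafting. Here’s a minimal sketch; the 1.5x tolerance is an illustrative assumption:

```python
HEADLINE_CHARS, BODY_CHARS = 30, 116  # averages from the analysis above

def check_ad_copy(headline, body):
    """Flag copy that runs well past the observed averages."""
    issues = []
    if len(headline) > HEADLINE_CHARS * 1.5:
        issues.append(f"headline is {len(headline)} chars; average is ~{HEADLINE_CHARS}")
    if len(body) > BODY_CHARS * 1.5:
        issues.append(f"body is {len(body)} chars; average is ~{BODY_CHARS}")
    if ":" not in headline:
        issues.append('consider the "Brand: Benefit" headline pattern')
    return issues or ["within observed norms"]

print(check_ad_copy("Acme: Free 14-day trial",
                    "Rated 4.8/5 by 2,000 teams. Start free today."))
```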

What’s working. The dominant pattern is a “Brand: Benefit” headline, separating the name from a specific value. It works because users in conversational environments expect immediate clarity, not intrigue or ambiguity.

  • Almost every ad leads with the brand name. You need easy recall in a setting where users are already evaluating options, not discovering them.

Headlines are compressed. They often read like functional labels rather than slogans. This brevity carries into the body copy, which typically uses two tight sentences: a proof point followed by an offer or nudge. You’re not trying to win an argument; you’re giving one compelling reason to act.

Context mirroring is a defining feature. The strongest ads directly reflect the user’s query or situation, signaling real-time tailoring. This marks a new level of AI-native targeting that goes beyond keyword matching into conversational relevance.

Concrete value signals carry outsized weight. Dollar signs and specific numbers — prices, savings, performance — consistently outperform vague claims. Numbers dominate body copy because they feel credible and native in a setting where you’re actively researching and comparing options.

Offers. Low-friction offers — especially “free” trials or demos — are the most common conversion lever, reducing commitment barriers while users are exploring.

Calls to action. These are explicit and action-oriented, favoring direct phrases like “Shop now,” “Compare,” or “Book” while abandoning generic prompts like “Learn more.”

The overall tone. Calm, confident, and measured, with minimal exclamation points or question marks. It aligns more with helpful guidance than ad hype, helping ads blend into the conversational flow rather than disrupt it.

Why we care. ChatGPT ads reach users at high intent, where clarity and relevance matter more than creativity or storytelling. In a conversational environment, ads compete with useful answers, so vague or overly branded messages get ignored while precise, value-driven copy performs better. This shift rewards short, structured messaging and gives early adopters an advantage as the format standardizes.

Between the lines. While ChatGPT ads share DNA with paid search — especially in their focus on intent and relevance — they differ by integrating into dialogue, responding to high-intent users, and delivering messaging that feels assistive rather than interruptive.

The takeaway. Success in ChatGPT advertising depends on precision, relevance, and credibility over creativity, emotional appeal, or brand-led storytelling. The winning strategy: fit in perfectly when a user needs a clear, trustworthy answer.

The analysis. Adthena CMO Ashley Fletcher shared the data on LinkedIn.

Build your marketing ark: A framework for AI, empathy, and design

2 April 2026 at 19:00
How to design AI-powered marketing systems that reduce friction and burnout

There’s a flood coming. A downpour of noise — more content, more channels, more AI-generated everything, moving faster than most teams can keep up with. Somewhere in that volume, your customers are quietly drowning — overwhelmed, underserved, and one bad experience away from choosing someone else.

You’ve probably felt it on your team, too. Another tool. Another sprint. Another quarter of doing more with less. The productivity metrics look fine from the outside. But inside, people are running on empty.

There’s an old story about a man named Noah who, facing catastrophic disruption, didn’t freeze or panic. He didn’t look for shortcuts or try to outswim the storm. He built — with intention, with a clear design, and with people he trusted. When the waters rose, the ark held.

The brands that lead don’t adopt the most technology the fastest. They build with intention — designing systems and experiences that protect people.

What follows is the case for building your ark — and a practical framework to do it.

The hidden emotional tax nobody is measuring

Customer-obsessed organizations achieved 49% faster profit growth and 51% better customer retention rates than their peers, according to Forrester. The gap between what customers need emotionally and what brands deliver comes down to design.

The strain isn’t only on the customer side.

  • AI power users report that it makes their overwhelming workload more manageable (92%), boosts creativity (92%), and helps them focus on their most important work (93%), per Microsoft and LinkedIn’s Work Trend Index.
  • Yet 60% of leaders say their company lacks a concrete AI vision or plan — meaning the very tool that could relieve team burnout is sitting underutilized.

That gap shows up in real ways.

For customers, it creates friction — too many choices, unclear navigation, and messaging that misses where they are. They arrive with a question and leave with more confusion. They don’t feel seen or helped.

For marketing teams, the impact is quieter but just as serious:

  • Decision fatigue disguised as strategy.
  • Tool overload framed as innovation.
  • Burnout that looks like productivity — until it doesn’t.
  • Fragmented workflows that drain energy faster than they produce results.

Brands that recognize these human issues move faster, retain stronger talent, build deeper customer loyalty, and drive better business outcomes. Enter what I call the wellness sweet spot.

Where AI, empathy, and design come together

The wellness sweet spot is the moment where AI, empathy, and human-first design converge — creating conditions where both your customers and your team can think clearly, act confidently, and trust the experience they’re in.

It’s an architectural decision about how your entire marketing ecosystem is designed to make people feel. When its three pillars are genuinely working together, four things become true simultaneously:

  • AI reduces waste and cognitive load in the experience — making things simpler.
  • Emotional friction is intentionally minimized at every touchpoint.
  • Marketing teams operate from a foundation of wellness (and well-being).
  • Systems and workflows support human thriving, not just throughput.

The convergence of AI capability, empathy-led design, and human-first systems.

When these conditions are in place, something shifts. AI stops feeling like a disruption and starts working as a stabilizing layer — supporting, protecting, and quietly holding the system together. It manages the overwhelm. The ark keeps floating.

Dig deeper: How to avoid decision fatigue in SEO

AI as an invisible wellness layer

Most marketing leaders still think about AI in terms of what it does — automate, generate, optimize, analyze. Those outcomes matter, but they don’t tell the full story. The more consequential question is how AI makes people feel while it’s doing those things.

For customers, AI used well is a guide that:

  • Summarizes complexity without dumbing it down.
  • Narrows choices in ways that feel helpful rather than manipulative. 
  • Anticipates what someone needs next and removes ambiguity from decision paths. 
  • Saves time — which is, in a very real sense, saving emotional energy.

For teams, thoughtfully deployed AI absorbs the work that depletes people most: the repetitive, the reactive, and the administrative. It creates space for what human brains do best: strategy, creativity, relationship-building, and nuanced judgment.

When you build your marketing systems around it, the output quality goes up because the people producing it aren’t running on fumes.

This is empathy at scale. Not the kind that lives in a tagline, but the kind that’s baked into how your systems are structured and how your content is designed to reach people.

The new emotional metrics: What to measure when you start caring about feelings

This is where things get practical, and where you can move ahead of the curve. Most marketing dashboards show what happened — click-through rates, conversion rates, and time on page. Those metrics matter, but they don’t explain why someone left or how they felt along the way.

Emotional metrics help fill that gap by focusing on the conditions under which decisions are made. Research in psychology and neuroscience shows that people make better decisions, build stronger brand relationships, and become more loyal when they feel clear, confident, and calm.

Here’s how traditional metrics map to emotional KPIs:

  • Time on page → Clarity index: how quickly someone finds what they need, without confusion.
  • Conversion rate → Decision effort score: the cognitive load required to complete an action.
  • Engagement rate → Customer calm markers: behavioral signals of confidence rather than stress (qualified attention).
  • Team output volume → Wellness throughput: strategic output produced with reduced burnout.

These are upstream indicators that help explain downstream performance. A low clarity index often shows up as stalled conversion rates. A high decision effort score can lead to rising cart abandonment. Declining wellness throughput tends to result in average output from top strategists.

Brands that start tracking these now gain an advantage over those that wait to react.

5 steps to design toward your wellness sweet spot

A caution before the roadmap: more speed and scale applied to a broken system will not fix it. It will amplify everything that’s wrong with it. These five steps are meant to be done before you push harder on AI adoption.

Step 1: Run an empathy audit

Where are customers confused? Hesitating? Leaving? Map these moments using behavioral data combined with qualitative insight — customer interviews, session recordings, support tickets, search data. Focus less on what people clicked and more on where they felt lost.

Step 2: Simplify for cognitive ease

Fewer choices. Plain language. Cleaner navigation. Every step you remove from a decision path is a small act of respect for your customer’s mental energy. This is generous. It’s designing with intelligence.

Step 3: Use AI as a shepherd

Deploy AI to enhance orientation, clarity, and confidence. Don’t push aggressive automation or manufacture a sense of urgency. AI should make customers feel helped, not herded. There’s a difference, and your audience feels it.

Step 4: Rebuild team workflows around energy

Audit where your team’s cognitive energy actually goes each week. Identify the work that is routine, reactive, or repetitive — and build AI into those gaps first. Protect the hours that require human judgment, creativity, and relationship-building. Those are the hours that drive real growth.

Step 5: Measure the feels

Begin tracking emotional outcomes alongside performance metrics. Start simple: add a one-question post-interaction survey. 

Review search data for confusion signals. For example, growing volume for “how do I” or “why can’t I” phrases on your own site may indicate your content isn’t answering questions before they’re asked. 
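Here’s a minimal sketch of that check, assuming site-search queries can be exported as plain strings; the prefixes are just the examples above plus one illustrative addition:

```python
from collections import Counter

CONFUSION_PREFIXES = ("how do i", "why can't i", "where is")

def confusion_signals(site_search_queries):
    """Count site-search queries that open with confusion phrasing."""
    hits = Counter()
    for q in site_search_queries:
        q_lower = q.strip().lower()
        for prefix in CONFUSION_PREFIXES:
            if q_lower.startswith(prefix):
                hits[prefix] += 1
    return hits

print(confusion_signals(["How do I cancel?", "pricing", "Why can't I log in?"]))
# Counter({'how do i': 1, "why can't i": 1})
```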

Monitor support ticket themes for friction patterns. A perfect measurement system isn’t required to start. The intention to look is.

Dig deeper: The secret to work-life harmony in SEO: Setting boundaries

The future belongs to emotionally intelligent brands

In a market where nearly every brand claims to be customer-centric and frictionless, the real differentiator comes down to how people feel and whether systems consistently deliver on that promise.

Leading organizations don’t rely on bigger AI budgets. They align technology with clear intent, prioritize well-timed, empathy-led content over volume, treat customer well-being as part of the brand promise, and protect their teams’ energy as rigorously as performance.

Creating value starts with protecting the people who create it. Noah didn’t survive the flood by ignoring it or fearing it. He paid attention, took action, and built with intention — something designed to carry what mattered most: his people, his purpose, his peace, and his future. That’s the kind of leadership this moment calls for.

You don’t have to figure this out alone. The tools are here. The framework is yours. The decision is whether to build before the pressure hits or react once it’s already underway.

Why your content doesn’t appear in AI Overviews (even if it ranks in the top 10)

2 April 2026 at 18:00

You’ve done everything right. You have a fast website with comprehensive content, pages ranking in the top 10, and a strong backlink profile. Yet when you search the query you rank for, your site doesn’t appear in Google’s corresponding AI Overview.

This is a retrieval problem, not a ranking issue. And the difference between the two is the most important shift SEOs need to understand right now.

AI Overviews don’t work like traditional organic rankings. Instead of considering which page has the most signals, AI Overviews look for the page that gives the cleanest, most usable answer.

If your content doesn’t meet that standard, your traditional search ranking is irrelevant. Here’s what’s going wrong, and how to fix it so your content appears in more AI Overviews.

The ranking-citation gap is real — and growing

The overlap between AI Overview citations and organic rankings grew from 32.3% to 54.5% between May 2024 and September 2025, according to a BrightEdge study.

This trend sounds encouraging. But it also means that even at peak convergence, nearly half of all AI Overview citations come from pages that don’t rank at the top of organic results. Google actively bypasses higher-ranking pages when it finds content that better serves the AI Overview format.

The pattern varies sharply by sector, though. BrightEdge data shows that in ecommerce, the overlap barely changed, remaining essentially flat over the entire 16-month period. And in your money or your life (YMYL) categories like healthcare, insurance, and education, the overlap between AI Overview citations and organic rankings ranges from 68% to 75%.

Ranking and visibility are no longer the same thing. You can rank second and be invisible. Or, you can rank on the second page and be the first thing a searcher reads.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

5 reasons AI Overviews skip your content

1. Your content answers the wrong version of the question

Informational queries — specifically long-tail and conversational searches — typically trigger AI Overviews. They drive 57% of AI Overviews, while commercial queries trigger this AI feature far less frequently, according to Semrush research.

Google’s AI engine looks for content that matches what the user asks, not just the keyword you’ve targeted. So, an AI Overview answering the query “what’s the best way to manage a remote team’s workload?” probably won’t cite a page that ranks for the keyword “project management software” and leads with features and pricing.

2. You’ve buried the answer

If your introduction spends three paragraphs establishing context, warming up the reader, or restating the question before answering it, the retrieval system moves on. It seeks information it can extract cleanly. If that answer isn’t near the top of the page, the system skips that page.

3. Your structure is opaque to AI systems

Traditional SEO content is built around comprehensive long-form content: 3,000-word guides covering every angle of a topic, written for readers who scroll and skim.

AI retrieval systems don’t work the same way. They need to identify discrete, self-contained answers within your content.

That requires clear heading hierarchies, short paragraphs, and content that AI systems can extract. A section under a specific heading should completely answer the question posed in that heading, without requiring the surrounding context to make sense.

Content written as one long, unbroken narrative is harder for AI systems to parse. Even if every word is accurate and authoritative, it may not earn a citation if the structure doesn’t help the retrieval system identify individual answer units.

Dig deeper: AI Overview citations: Why they don’t drive clicks and what to do

4. Your E-E-A-T signals aren’t visible at the content level

Google has been clear that experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) signals are important for content quality in traditional search. It likely matters for AI Overviews, too. But these signals need to appear in the content itself, not just in your domain profile or link graph.

Strong domain authority counts for less than you’d think if the content itself carries no credibility signals.

  • Who wrote it?
  • Where did the data come from?
  • Is there anything here that couldn’t have been written by someone who’d never worked in this field?

A retrieval system evaluating an individual page doesn’t know your domain’s track record. The page must make the case for itself. 

Content-level E-E-A-T signals are particularly important in YMYL categories, where AI Overviews are selective about sources because the risk of misinformation is higher.

5. You’re targeting queries that don’t trigger AI Overviews

Before optimizing your content for AI engines, it’s worth checking whether your target queries trigger AI Overviews at all. As of late 2025, AI Overviews appear in 16% of search results, though that figure isn’t evenly distributed across query types.

Transactional queries, navigational searches, branded queries, and highly local searches are far less likely to trigger an AI Overview. If most of your traffic comes from commercial or transactional keywords, the lack of AI Overview citation may not be a content problem. It may simply be that those query types are less likely to generate overviews in the first place.

What the data tells us about the impact of this shift

The stakes are significant. Research by Seer Interactive shows that organic click-through rates (CTRs) for informational queries that displayed AI Overviews dropped 61%, from 1.76% to 0.61%, between June 2024 and September 2025. Paid CTR fell even further, from 19.7% to 6.34%.

But the same research reveals a critical asymmetry: Brands cited in AI Overviews saw 35% higher organic CTR and 91% higher paid CTR than when they weren’t cited. A citation in an AI Overview doesn’t just protect you from a CTR decline. It actively amplifies your visibility.

The Pew Research Center’s study of searches by U.S. adults in March 2025 found that only 8% of users who encountered an AI Overview clicked a traditional search result, compared to 15% who clicked when no overview appeared. And 26% of searches with AI Overviews resulted in no clicks at all.

If AI Overviews appear for your most valuable queries and you aren’t cited, you aren’t just missing out on the overview. You’re losing clicks you previously received from the organic listing underneath it.

How to optimize for retrieval, not just rankings

These trends require you to adjust how you think about content structure and intent. Here’s where to focus:

  • Rewrite your introductions: Your first paragraph should directly and completely answer the primary question of the page. Save context and elaboration for later sections. Write as if the first 100 words of your page represent a standalone answer.
  • Restructure your headings: Each heading should be a question or a complete, specific claim. The following section should fully answer or support that heading without requiring the reader to review previous sections. Think of each section as a self-contained answer unit (see the audit sketch after this list).
  • Add explicit expertise signals: Include author attribution with credentials, first-person experience language, original data, and links to primary sources and original research. These signals matter at the content level, not just at the domain level.
  • Audit your query triggers: Manually test your target queries in Google to see which ones actually generate AI Overviews. For those that do, study how the cited sources are structured, the length of the cited sections, and the format of the answer. Use that as your editorial brief.
  • Expand your topical coverage: AI Overviews favor sources that demonstrate breadth of knowledge across a topic, not just single-page depth. Focus on answering several related questions well instead of building one exceptional page surrounded by thin content.
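The first two items lend themselves to a quick automated audit. Here’s a minimal sketch using requests and BeautifulSoup; the 100-word intro target mirrors the guidance above, and the question-mark check is only a rough proxy for heading shape:

```python
import requests
from bs4 import BeautifulSoup

def audit_page(url):
    """Report intro length and heading shape for one page."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    first_p = soup.find("p")
    intro_words = len(first_p.get_text().split()) if first_p else 0
    headings = [
        {"text": h.get_text(strip=True),
         "is_question": h.get_text(strip=True).endswith("?")}
        for h in soup.find_all(["h2", "h3"])
    ]
    return {"intro_words": intro_words, "headings": headings}

print(audit_page("https://example.com/guide"))  # hypothetical URL
```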

Dig deeper: Want to beat AI Overviews? Produce unmistakably human content

How to shift your SEO approach

What AI Overviews represent is something that’s been discussed for years, but few have truly prepared for: the separation of content quality from ranking signals.

For two decades, we used rankings as a proxy for quality. High-ranking content was, by definition, good enough.

But that assumption no longer holds. Ranking in traditional search indicates that your brand has authority and that your page is relevant to the search query. It says nothing about whether your content is structured in a way that AI retrieval systems can use.

Visibility now goes to whoever understands how AI systems identify, extract, and surface answers. A strong backlink profile won’t help you if the answer is buried on page three of a 4,000-word guide.

Ranking in the top 10 is still worth pursuing. But it’s no longer the whole game.

6 Google Ads mistakes that hurt ecommerce campaigns

2 April 2026 at 17:00
6 mistakes that hurt ecommerce campaigns on Google Ads

Your paid social operation is on fire. You know how your audience thinks, the creative process is dialed in, and the results get better every year. Leadership greenlights an expansion to Google Ads — a new channel and, critically, a new source of revenue.

As it turns out, applying that same strategy really just buys you an express ticket to a very difficult conversation.

Google rewards a different kind of thinking. Intent signals and campaign logic are different, and the mistakes that eat at your budget don’t always make themselves clear. Brands that apply their existing Meta playbook often find themselves looking at shiny dashboards and dull balance sheets.

These six common mistakes tend to do the most damage before anyone realizes what’s happening. They’re what we see most often when ecommerce brands come to us after making the move to Google — and they can all be reversed.

Mistake 1: Treating Google like a retention channel

You can definitely use Google Ads to support retention and brand defense. The problem is when that becomes your whole strategy.

We see this regularly with brands new to the platform who launch directly into Performance Max. Early ROAS looks strong, and everyone’s happy. But a few months in, someone asks the right question: Are we actually growing, or paying to capture purchases that were going to happen anyway?

One client we worked with came to us with branded search and retargeting doing the heavy lifting inside PMax – essentially a tax on demand that had already been created elsewhere. Revenue flatlined because, while the ad spend was real, growth was not.

Net-new customer acquisition requires a different setup.

  • Shopping campaigns structured to surface products to people who have never heard of the brand.
  • Search campaigns built around non-branded, high-intent keywords.
  • Layered PMax configurations that keep the system from defaulting to the easiest conversions.

When Google has enormous reach into new audiences, treating it purely as a closing channel leaves most of that opportunity untouched.

Dig deeper: Ecommerce PPC: 4 takeaways that shape how campaigns perform

Mistake 2: Not knowing how to get the most out of Google’s core levers

Paid social experience transfers to Google in some ways, but there are four areas where we see the biggest knowledge gaps.

Search intent

Ads on social media are an interruption. Ads in search engines meet people as they’re looking for something you offer. This changes so much about campaign structure, ad copy, and keyword targeting.

Upper-funnel terms and lower-funnel terms require different approaches, bids, and landing pages. Collapsing them into a single campaign structure is one of the fastest ways to dilute intent and waste budget on traffic that was never going to convert.

Data feed optimization

For ecommerce brands running Shopping and retail Performance Max, the product feed is the foundation everything else is built on. Weak titles, missing attributes, and poor categorization limit how often your products show up and who sees them. 

Most brands (including Google-native ones) underinvest here because the work is unglamorous. But a well-optimized feed consistently outperforms one that’s neglected after setup.

Keyword research

Paid search is a keyword-driven channel, which makes keyword strategy its own discipline. Understand match types, search volume, commercial intent, and the relationship between what people type and what they actually want. This takes time to develop, but brands that skip this step usually over-restrict their reach or bleed spend on irrelevant traffic.

Landing pages

Sending high-intent but unfamiliar visitors straight to a product page on Google often underperforms. A more engaging landing page format, like an advertorial, puts that traffic in front of context and trust before asking for the sale. 

Brands coming from paid social often overlook this because the funnel architecture they’re used to doesn’t require it.

Dig deeper: 7 Google Ads search term filters to cut wasted spend

Mistake 3: Letting operational issues interrupt campaign momentum

Google’s algorithms need consistent data to make the best decisions for your account. But every time a campaign goes dark — for a day or a week — there’s a risk that the learning resets. What feels like a minor admin issue can mean weeks of degraded performance and wasted ad spend.

Two types of disruption come up more than any other.

  • Payments: Brands switching to invoice billing or changing card details mid-flight will sometimes see campaigns pause without realizing it until the damage is done. A lapsed payment that takes three days to resolve can cost far more than the bill itself once you factor in recovery time.
  • Tracking and feed integrity: A broken pixel means no conversion data and forces Smart Bidding to optimize blind. A feed error in Merchant Center means products disappear from Shopping and Performance Max. Neither of these failures is loud, and they tend to surface slowly as declining performance that gets misattributed.

Both are preventable with automated alerts, weekly feed audits, and a person or AI agent responsible for monitoring account health between reporting cycles. The cost of that monitoring is low compared to what happens if you only discover issues after the fact.

Mistake 4: Building a campaign structure that’s too granular

The instinct among detail-oriented advertisers is to segment everything because, on the surface, it feels like control.

  • One campaign per product category.
  • One ad group per keyword.
  • Separate budgets for every audience.

But Google’s automation needs data to make good decisions. When you spread your budget across too many campaigns, each one operates on thin resources and even thinner information. Smart Bidding can’t optimize effectively without sufficient conversion volume, so campaigns stuck below that threshold tend to underperform and stay there.

By over-segmenting, you’ve created the appearance of precision while actually limiting the system’s ability to learn.

The same logic applies to budget. Ten campaigns with a modest shared budget will almost always produce worse results than three well-funded ones. Google needs room to test, adjust, and find the traffic worth paying for. Fragmented budgets don’t allow it to do that.

Build a tighter structure with fewer campaigns, clearly defined goals, and enough budget to compete. This gives the algorithm what it needs while keeping the account manageable enough to oversee effectively.

Dig deeper: How to find and fix the root cause of low conversions

Mistake 5: Leaving campaigns on Max Conversion Value with no ROAS targets

Max Conversion Value is a Smart Bidding strategy that tells Google to spend your budget in whatever way generates the highest total conversion amount – no ceiling, no floor, no efficiency guardrail. Left unsupervised, it will find conversions, but won’t care what it costs to get them.

For brands new to Google Ads, this setting can trick you into thinking you’re crushing it. Conversion value shoots up in the right direction, making the account appear healthy. The problem surfaces when you look at what you actually spent to generate that value.

Without a target ROAS, Google has no efficiency constraint and optimizes for volume, not profitability. But the fix is straightforward.

  • Once you have enough conversion data, set a realistic target (a starting-point sketch follows this list).
  • A ROAS goal gives the algorithm a constraint, and shifts the objective from spending budget to spending it well.
  • Targets set too aggressively too early can starve campaigns of traffic before they’ve had a chance to learn.
  • Exercise patience, and a willingness to adjust gradually rather than chasing the ideal number from day one.
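One reasonable starting point is to derive the initial target from recent realized ROAS, then ease it slightly so the campaign isn’t starved. A minimal sketch; the 10% easing factor is a hypothetical starting point, not a rule:

```python
def starting_troas(conv_value_30d, cost_30d, easing=0.10):
    """Set the initial tROAS slightly below realized ROAS."""
    realized_roas = conv_value_30d / cost_30d
    return realized_roas * (1 - easing)

# $12,000 conversion value on $3,000 spend -> realized ROAS of 4.0.
print(f"{starting_troas(12_000, 3_000):.2f}")  # start around 3.60
```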

Dig deeper: How each Google Ads bid strategy influences campaign success

Mistake 6: Underfunding campaigns and keeping them stuck in learning

When you launch a Google campaign or make a significant change (like doubling the budget), it enters a new learning period. This is the window for gathering data, testing different auctions, and calibrating toward the conversion patterns you’ve defined.

It’s a normal part of how the platform works, and every campaign goes through it.

But the learning period requires a minimum volume of conversions to complete. Google typically needs around 30-50 conversion events in a short window before bidding stabilizes. A campaign that’s underfunded relative to that milestone will stay in learning indefinitely.
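A rough way to sanity-check funding before launch is to work backward from that threshold. A minimal sketch; the conversion target, window, and CPA are hypothetical:

```python
def min_daily_budget(target_conversions=50, window_days=30, expected_cpa=25.0):
    """Estimate the daily budget needed to exit learning within the window."""
    conversions_per_day = target_conversions / window_days
    return conversions_per_day * expected_cpa

print(f"${min_daily_budget():.2f}/day")  # $41.67/day at a $25 CPA
```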

It’s a common trap for brands being cautious when testing Google.

  • You run your first campaign on a small budget.
  • CPAs are inflated and data is inconclusive, so you hold back further investment or cut the campaign entirely.
  • In reality, the campaign never had what it needed to graduate out of the learning phase.
  • You walk away from net new revenue before you’ve even scratched its surface.

Funding a new campaign adequately from the start — even if it means consolidating into fewer campaigns and chasing fewer goals — gives it the best chance of learning fast and delivering accurate results sooner.

Adding Google to the mix is the right call: Here’s what to do next

Diversifying away from a single ad platform is one of the smartest moves an ecommerce brand can make once it’s mature enough to fight on two fronts. It unanchors growth from any one platform’s algorithm changes, auction dynamics, seasonality, and terms of service.

Adding Google to Meta also gives you access to a different kind of demand that is actively expressed rather than passively targeted, which is a meaningful advantage worth building on.

These six mistakes aren’t reasons to avoid Google. They’re a preventive guide to help you approach it with realistic expectations and enough patience to let the system learn. Treating it like a direct analog of your Meta playbook will have you leaving before you’ve seen what’s truly possible.

If you’re still in the early stages of making this move, my guide on how to expand from Meta Ads into Google Ads is a practical place to start. If you’ve seen early success and are now looking for the next layer of optimization, find out how to avoid getting sucked into Google’s many automation traps.
