
Today — 29 April 2026 | Search Engine Land

4 signals that now define visibility in AI search

29 April 2026 at 17:00

Ranking and visibility are no longer the same thing. For 20 years, SEO teams optimized for SERP position. Higher rankings meant more visibility, more clicks, and more traffic. That relationship is breaking down.

Earlier this year, Ahrefs found that only 38% of pages cited in Google AI Overviews also ranked in the traditional top 10. Eight months earlier, that number was 76%.

The implication is straightforward: being highly ranked no longer guarantees being seen.

In AI-generated answers, visibility is determined by inclusion — and by how your brand is represented when it appears. That representation is determined by a different set of signals.

How visibility works in AI search: 4 signals that matter

Four distinct patterns determine how brands appear inside AI-generated responses: 

  • Mention order.
  • Depth of explanation.
  • Authority signals.
  • Comparative positioning.

1. Mention order

When an AI model lists three CRM options, the order matters. Up to 74% of users choose the AI’s top recommendation, according to a Growth Memo and Citation Labs AI Mode study.

This reinforces how heavily people rely on the first option presented. 

About 26% of users overrode the AI’s order entirely when they recognized a brand they already knew. That marks a shift from how users behave in traditional search, where 56% built their own shortlist from multiple sources. In AI Mode, 88% took the AI’s shortlist without checking further.

The AI’s curated answers carry that much weight. But mention order isn’t stable. SE Ranking’s August 2025 analysis found that when you run the same query three times, AI Mode only overlaps with itself 9.2% of the time.

The sources change. The order changes, sometimes dramatically.

The lesson: Mention order creates an advantage, but it isn’t deterministic. Brand recognition can trump position.


2. Depth of explanation

Not all mentions are created equal. Some brands get a single sentence. Others get a full paragraph explaining their strengths, use cases, and differentiators.

The difference comes down to how much citation-worthy information AI systems found about you.

When Semrush announced its AI Visibility Awards in December 2025, it analyzed more than 2,500 prompts run through ChatGPT and Google AI Mode. Category leaders like Samsung in consumer electronics didn’t just appear more often. They got more detailed descriptions when they did appear.

Challenger brands like Logitech in gaming accessories showed up, too, but typically with shorter mentions focused on a single differentiator.

The top 4.8% of URLs cited 10+ times by ChatGPT share a common trait. They’re comprehensive pages that answer “what is it,” “who uses it,” “how to choose,” and “pricing” in a single URL.

Page length seems to matter, too. Pages above 20,000 characters average 10.18 citations each, while pages under 500 characters average just 2.39.

The lesson: If AI systems have thin data about your brand, you get thin mentions.

3. Authority signals

AI systems don’t just cite sources. They characterize them by tone, which reveals how much confidence the AI has in your authority.

HubSpot’s AEO Grader, launched in early 2026, classifies brands into competitive roles: leader, challenger, or niche player. They’re positioning labels that determine how persuasively AI presents you.

Semrush’s awards data showed that category leaders have less than 20% monthly volatility in AI share of voice. Once AI systems establish you as a leader, that perception tends to stick.

The language reflects this correlation. 

  • Leaders get described with confident phrasing, such as “the industry standard” and “widely recognized.” 
  • Challengers get “growing alternative” and “gaining traction.”

Most brand mentions in AI answers are neutral or positive. But neutral isn’t the same as enthusiastic.

The difference between “also offers project management features” and “considered one of the top three project management platforms” is authority signaling.

The lesson: AI doesn’t just say your name. It frames your reputation.


4. Comparative positioning

Comparative positioning is the closest thing to traditional rankings in AI answers: how you’re positioned when multiple brands appear together. But instead of Position 1 vs. Position 2, it’s “better for X” vs. “better for Y.”

Amsive’s research found clear positioning hierarchies. 

  • In banking, Bank of America leads with 32.2% visibility, SoFi follows at 25.7%, and LightStream captures 20.2%. 
  • In healthcare, Mayo Clinic dominates at 14.1%.

Kevin Indig’s Growth Memo research revealed a critical nuance. When AI positioned a brand as “best for startups” versus “best for enterprises,” users self-selected based on that framing, even if both brands technically served both segments.

The lesson: You’re not competing for position 1 anymore. You’re competing to own a specific positioning niche in AI’s mental model of your category.

How traditional rank correlates with AI visibility (barely)

We already covered the 38% overlap stat. The interesting question is why it dropped so fast. The answer: query fan-out.

When an AI Overview triggers, Google doesn’t just evaluate the top-ranking pages for the user’s actual query. It breaks the question into multiple sub-queries, retrieves relevant passages from across its index, and synthesizes them into a single response.

Your page might rank No. 1 for “best project management software” and still get skipped. The AI pulled from pages ranking for “project management for remote teams” or “integrations with Slack” instead. One query to the user. A dozen queries behind the scenes.
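The fan-out mechanism described above can be sketched in a few lines. Everything here is illustrative: the facet list, the toy index, and the word-overlap retrieval are invented stand-ins for Google's actual, undisclosed sub-query generation and passage ranking.

```python
# Illustrative sketch of query fan-out: one user query becomes several
# sub-queries, each retrieved independently, then merged into one answer.

def fan_out(query):
    """Expand a user query into hypothetical sub-queries (facets are invented)."""
    facets = ["for remote teams", "integrations", "pricing", "reviews"]
    return [query] + [f"{query} {facet}" for facet in facets]

def retrieve(sub_query, index):
    """Return pages from a toy index that match any word of the sub-query."""
    words = set(sub_query.lower().split())
    return [page for page, terms in index.items() if words & terms]

def answer(query, index):
    """Union of pages across all sub-queries: a page ranking #1 for the
    original query can still be skipped if sub-queries pull other sources."""
    cited = set()
    for sq in fan_out(query):
        cited.update(retrieve(sq, index))
    return sorted(cited)

toy_index = {
    "pm-software-guide": {"best", "project", "management", "software"},
    "remote-teams-post": {"project", "management", "remote", "teams"},
    "slack-integrations": {"integrations", "slack"},
}

print(answer("best project management software", toy_index))
```

The point of the sketch: the final answer cites pages the original query alone would never have surfaced, which is exactly why a No. 1 ranking no longer guarantees inclusion.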

SE Ranking’s February 2026 research found that Google’s upgrade to Gemini 3 replaced approximately 42% of previously cited domains and generates 32% more sources per response than its predecessor. Traditional ranking positions became even less predictive overnight.

Where AI traffic actually goes

Semrush’s analysis of 17 months of clickstream data reveals an unexpected pattern: Over 20% of ChatGPT referral traffic goes to Google. That share rose from roughly 14% at the start of the study to more than 21% by early 2026.

The biggest beneficiary of ChatGPT’s growth is Google. 

Users go to ChatGPT to get an answer, then head to Google to confirm findings or research brands they just discovered. For users, they’re complementary steps in a single journey.

Most ChatGPT prompts don’t match traditional search language. Between 65% and 85% of prompts couldn’t be matched to any traditional search keyword in Semrush’s database of 27 billion keywords.

  • A traditional Google search: “best project management software.” 
  • The ChatGPT equivalent: “I manage a 12-person remote engineering team, and we’re constantly missing sprint deadlines. What should I change about our weekly standups?”

That level of specificity doesn’t exist in keyword databases — and it’s becoming more common.

Measuring visibility in AI answers

If position doesn’t matter the way it used to, what does?

  • Citation frequency replaces rankings as the primary metric. How often does your brand appear when AI systems answer questions in your category?
  • Brand mention rate measures penetration. If AI generates 100 answers about your category, what percentage mention your brand? Scores above 70% indicate strong AI search performance. Below 30% signals significant visibility gaps.
  • Recommendation rate matters more than mention rate for B2B SaaS and high-consideration purchases. Being recommended carries more weight than being mentioned in a general list.
  • Sentiment and context determine whether mentions drive action. Track how AI describes you: premium vs. cheap, advanced vs. beginner, reliable vs. experimental.
  • Citation position within answers creates measurable advantage. Unlike traditional rankings, you can be first-cited without being first-ranked organically.
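As a rough sketch of how the mention-rate and recommendation-rate metrics above could be computed from a sample of AI answers. The sample data, brand names, and structure are invented; real tooling samples thousands of prompts across models and dates.

```python
# Hedged sketch: brand mention rate vs. recommendation rate over a
# (hypothetical) sample of AI-generated answers.

answers = [
    {"text": "Top picks: Acme, Beta, Gamma.", "recommended": ["Acme"]},
    {"text": "Beta and Gamma lead the category.", "recommended": ["Beta"]},
    {"text": "Consider Acme for small teams.", "recommended": ["Acme"]},
    {"text": "Gamma is the industry standard.", "recommended": ["Gamma"]},
]

def mention_rate(brand, answers):
    """Share of answers that mention the brand at all."""
    hits = sum(1 for a in answers if brand in a["text"])
    return hits / len(answers)

def recommendation_rate(brand, answers):
    """Share of answers that actively recommend the brand."""
    hits = sum(1 for a in answers if brand in a["recommended"])
    return hits / len(answers)

print(mention_rate("Gamma", answers))         # 0.75 -- mentioned in 3 of 4
print(recommendation_rate("Gamma", answers))  # 0.25 -- recommended in 1 of 4
```

The gap between the two numbers is the signal: a brand can appear often yet rarely be the answer's actual recommendation.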

The measurement infrastructure you actually need

Traditional rank trackers can’t measure these signals.

The 2026 measurement model requires parallel tracking. Traditional SEO metrics still matter for the portion of search that remains blue links. AI visibility requires tracking how often your brand appears and how it’s represented in AI-generated answers.

A new category of tools has emerged to support this shift.

  • For citation tracking, platforms like Profound, Gauge, Peec AI, and Scrunch monitor which URLs get cited across ChatGPT, Perplexity, Claude, and Google AI Overviews.
  • For brand analysis, tools like Semrush’s AI Visibility Toolkit and AthenaHQ measure how often your brand is mentioned, how it’s described, and whether it’s recommended.
  • For competitive positioning, Bluefish and HubSpot’s AEO Grader evaluate how AI systems categorize your brand relative to competitors.

None of these tools replace traditional SEO infrastructure. They supplement it.


A different model of visibility

The ranking obsession isn’t going away entirely. Traditional search still drives traffic. But measuring success solely through rankings misses the larger shift.

AI answer engines now act as gatekeepers, surfacing only the brands they consider citation-worthy.

Visibility depends on how often you’re included, how you’re described, and how you’re positioned relative to competitors.

Traditional rank trackers can’t capture that. It requires a different measurement model. That’s what determines visibility now.

Searchers just want you to be helpful

29 April 2026 at 16:00

The March 2026 core update brought what Google describes as a design “to better surface relevant, satisfying content for searchers from all types of sites.” This confirms the simplest truth in search: people use Google to get answers. 

Whether it’s solving a problem, learning something new, or making a decision, searchers want content that is genuinely helpful in their busy, on-the-go lives. If your content does that, it succeeds. If it doesn’t, no amount of SEO tricks, hacks, or magic bullets will get your content to show up on page one, let alone in AI Overviews.

How modern search systems surface helpful content

AI Overviews went from appearing for just 6.49% of queries in January 2025 to 15.69% in November 2025, according to a Semrush study. Depending on the source, AI Overviews today appear for 25%-50% of queries.

It’s clear that search engines and LLMs are working together more efficiently today than just a year ago. Fast forward another year, and we can only imagine. 

For any SEO focused on creating helpful content and understanding user intent, it’s a truly exciting time to be in the industry. Your genuinely useful content can be surfaced in AI Overviews using retrieval-augmented generation (RAG) and query fan-out.

  • RAG: Instead of just relying on what it “knows,” AI looks for relevant information across multiple sources before answering a query
  • Query fan-out: One search query can be broken down into multiple related queries behind the scenes, helping AI and search engines build a more complete, useful response
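The RAG loop in the first bullet can be sketched minimally: retrieve first, then ground the answer in what was retrieved. The corpus, the word-overlap scoring, and the generate() stub are toy stand-ins, not how any production engine works.

```python
# Minimal RAG sketch: look up relevant passages before answering,
# instead of relying on the model's parametric memory alone.

corpus = {
    "doc-a": "Helpful content answers the searcher's main question.",
    "doc-b": "Query fan-out splits one question into several sub-queries.",
    "doc-c": "Backlinks alone no longer guarantee visibility.",
}

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy scoring)."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q & set(corpus[d].lower().split())))
    return ranked[:k]

def generate(query, passages):
    """Stand-in for an LLM call: the answer is grounded in retrieved text."""
    return f"Answer to {query!r}, grounded in: {', '.join(passages)}"

top = retrieve("what is query fan-out", corpus)
print(generate("what is query fan-out", top))
```

For content creators, the practical implication is that each page competes passage by passage at retrieval time, not just as a whole URL at ranking time.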

Entire papers have been written on these two concepts alone. The TL;DR is that SEO today is about more than just keywords or counting backlinks. Modern search is designed to connect searchers with content that actually answers their questions and satisfies user intent.

Why this raises the bar for SEO in 2026 and beyond

These systems, and those still being implemented (see Google’s blog on TurboQuant), are getting better at recognizing and dismissing thin, duplicate, or superficial content. Pieces that simply restate what someone else has already said online, lack originality, and fail to demonstrate legitimate real-life experience will continue to struggle to rank. 

Depth, clarity, and expertise have always mattered, but SEOs who want to continue to succeed in 2026 and beyond are going to have to double down on these factors:

  • Depth: This doesn’t mean write as much as you can on the topic. Gone are the days of fluffy, keyword-stuffed articles. Depth in 2026 means SEOs and content creators should address the searcher’s main question and related follow-ups.
  • Clarity: Searchers are busy. They want quick answers. Make your content easy to scan and understand.
  • Expertise: Demonstrate real-world knowledge and experience your audience can trust.

For many SEOs, this is a welcome shift. It’s not about just checking off boxes anymore. 

Sure, we still have to do those things. But the bar for what constitutes good SEO is being raised far beyond the basics. When search engines evaluate content today, they’re looking for signals that SEOs and content creators are providing real value to searchers.

Why visibility matters more than clicks for local SEO

Small, local, or service-based businesses that rely on SEO-driven leads for revenue can use these same strategies, too. While success isn’t measured using the same metrics as it was just a couple of years ago, the result of good SEO remains: Get the business recommended before the competition for as many searches as possible. 

Two years ago, this meant clicks. Today, it means visibility. AI platforms like ChatGPT, Gemini, and AI Overviews often recommend businesses without linking to websites directly, if at all. 

A few tools have been developed to measure AI metrics, but these can get pricey, and as Elizabeth Rule said, “Measuring visibility is like trying to measure a wave with a ruler.” 

This is why maintaining strong communication between stakeholders and the SEO team is so important. When success can’t be measured simply, a simple question of “how’s business going?” matters now more than ever. Beyond user intent, SEOs need to understand user behavior, mood, and temperament.

What ‘helpful content’ looks like in practice

Here are five tips to get you started on creating content that is genuinely helpful:

1. Answer follow-up questions

Think beyond the initial query. What will readers ask next? 

One of my favorite places to do research for this is the People Also Ask (PAA) section on the SERP. For example, say you’re writing about herniated disc treatment. Just Google “herniated disc treatment” and use the PAA feature to brainstorm more questions your audience may ask about the topic. The more questions you click, the more ideas it’ll generate.

2. Show expertise and experience

E-E-A-T is an SEO hill I will die on because it works. Share your knowledge, case studies, testimonials, or firsthand insights. This builds trust when done right and when you’re creating for people, not search engines. 

This is what the helpful content update of 2022 was all about.


3. Structure content clearly

We’d all love to believe that everything we write is being read word-for-word. It’s not. People skim. They’re looking for an answer while they’re doing other things. 

This is why clearly structured web pages are so important on both mobile and desktop. Use headings, bullet points, and concise paragraphs to help readers quickly find answers.

4. Be authentic

Authenticity sounds like a buzzword (and maybe it is), but people can tell when you’ve used AI to write something or when you’re just publishing content for SEO.

Much as it pains me (an English major who loves to read long novels and write dissertations) to say, no one cares about your personal anecdotes or how many adjectives you can think of for your “superior” service. They just need an answer to the question they searched. 

Avoid fluff or filler. Real-world, practical content resonates better than generic advice.

If someone called and asked you, “How long does it take to change the water heater in my 1950s home?” you wouldn’t need 1,500 words to answer them. The content you create on the internet should be the same.

5. Ask ‘who, what, and how?’ about your content

If you’ve been paying attention to GEO/AEO/SEO for AI, this might sound familiar to you as a little something called semantic triples. This sounds intimidating at first, but it’s really just sixth-grade English. 

A semantic triple answers who, does what, for whom (or how). Remember diagramming sentences? It’s the relationship between the subject, predicate, and object. It can be any subject, predicate, and object:

  • The plumber installs water heaters in Dallas 
  • The bakery bakes wedding cakes for couples 
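Sketched as code, a triple is just a (subject, predicate, object) tuple. This toy example uses the article's own two sentences; the tuple representation is purely illustrative.

```python
# A semantic triple makes the who / does-what / for-whom of a sentence explicit.

triples = [
    ("The plumber", "installs", "water heaters in Dallas"),
    ("The bakery", "bakes", "wedding cakes for couples"),
]

def to_sentence(triple):
    """Recombine subject, predicate, and object into a plain sentence."""
    subject, predicate, obj = triple
    return f"{subject} {predicate} {obj}."

for t in triples:
    print(to_sentence(t))
```

Writing copy that decomposes cleanly into triples like these is what makes the subject-predicate-object relationships easy for machines to extract.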

I first heard about semantic triples from Mike King during SEO Week 2025 when he broke down his concept of relevance engineering. If you haven’t watched his video on this topic, I highly recommend it.

The basic idea is that SEO is about your audience:

  • Who are you talking to?
  • What do they need?
  • How do you reach them? 

A semantic triple answers these questions. It provides structure and clarity. It’s the “who, what, and how” that Google told us about in the HCU documentation. It’s also genuinely valuable information for searchers.

Knowledge is your superpower. You’re the only person who can tell your story, explain your process, and show readers why your business or brand matters.

Helpfulness is the competitive edge

The most reliable SEO strategy remains the same with each new core update from Google: Create content that genuinely helps searchers.

Focus on the problems your audience is trying to solve, answer their questions fully, and share your expertise. Thin or derivative content won’t cut it in a world of AI-driven search and retrieval systems. 

Google and AI platforms are trying to do the same thing searchers are doing: find the most helpful content. If you respond to that need, your content will rise to the top, no tricks, hacks, or shortcuts necessary.

Gartner: 40% of agentic AI projects will fail, making humans indispensable by Optimove

29 April 2026 at 15:00

Fact: Agentic AI is making humans indispensable.

More than 40% of agentic AI projects will be canceled by the end of 2027. That is a prediction from Gartner published in June 2025, based on a poll of more than 3,400 organizations actively investing in the technology.

The reason cited is not that the agents do not work. It is that the humans deploying them are making the wrong decisions. “Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” according to Anushree Verma, senior director analyst at Gartner.

Organizations are deploying agents without a clear strategy, without understanding the complexity, and without the governance to manage what happens when something goes wrong.

In other words, the agent is only as good as the human behind it.

This matters enormously for marketing. AI agents in marketing are real, accelerating, and in many cases necessary. Agents that select audiences. Agents that generate content. Agents that optimize send times, choose offers, and orchestrate entire customer journeys autonomously, continuously, and at a scale no human team could match. The capabilities are here today and growing rapidly.

But Gartner’s data reveals a warning and marketing leaders who miss it will find themselves on the wrong side of that 40%.

FOMO causes agent failure

The failure rate Gartner describes is not random. It starts with fear.

Fear of being left behind. Fear of watching competitors move faster. Fear of being the CMO who did not act when everyone else did. That fear is driving organizations to deploy agentic AI, not because they have a strategy, but because they cannot afford to be last.

The result is agents built on broken workflows. Agents fed with poor data. Agents operating without the governance structures that keep them aligned with business goals. The agents execute… the wrong things, in the wrong ways, at the wrong times.

FOMO is not a strategy. And in the agentic era, it is an expensive mistake.

Agent washing

Gartner identified a widespread trend it calls “agent washing”: vendors rebranding existing chatbots and automation tools as agentic AI without delivering genuine autonomous capabilities. Of the thousands of vendors claiming agentic solutions, Gartner estimates only around 130 offer real agentic features. Marketing teams investing in the rest are not getting agents. They are getting dressed-up automation with an agentic price tag.

The consequences go beyond wasted budget. Gartner predicts that in 2026, one-third of companies will harm customer experiences by deploying AI prematurely, eroding brand trust and damaging both acquisition and retention.

A personalization agent that misreads a customer. A content agent that violates compliance. A journey agent that floods a churning customer with offers at exactly the wrong moment. These are the predictable outcomes of deploying autonomous systems without the human judgment to direct them.

The dumbing down of marketers

Gartner’s third prediction is the most revealing of all: GenAI usage leads to the atrophy of critical thinking skills. As a result, 50% of global organizations will require AI-free competency evaluations.

Half of all organizations are watching their people get dumber because AI is always available to think for them. Quietly. Gradually. Until the day the algorithm is wrong and nobody in the room can tell.

In marketing, that is a crisis. Marketing requires judgment — the ability to ask not just what the data says, but what it means. Not just whether a campaign worked, but why. Not just whether to accept an AI recommendation, but whether it reflects the brand, the moment and the relationship the company is trying to build.

Those questions cannot be delegated to an agent. They require a human being scrutinizing what a machine thinks is right.

The most dangerous marketer in the agentic era is not the one who rejects AI. It is the one who accepts everything it produces without question.

Agents cannot be trusted to ask the right questions

An agent can optimize what it has been given. It cannot question whether it has been given the right thing.

It can personalize a message based on behavioral signals. It cannot decide that the right move is to say nothing at all… to give a customer space, to protect a relationship rather than extract from it.

It can generate a thousand content variations and test them. It cannot feel the difference between a message that converts and a message that connects. It cannot sense when a campaign that performs well in the data is quietly damaging the brand.

It can execute a journey flawlessly. It cannot design one that reflects what customers actually want from this brand, at this point in their lives.

These are not limitations that will be solved by the next model release. They are structural. AI is trained on the past. The irreducible human job in marketing is to bring judgment about what should happen next, even when the data does not yet exist to support it.

The marketer as manager of agents

The right mental model for the agentic era is not human versus machine. It is a human plus machine, with the human in charge.

That is the foundation of Positionless Marketing. For decades, marketing teams operated as an assembly line with handoffs. Positionless Marketing breaks that model by giving marketers three transformative powers: Data Power to immediately discover customer insights for precise targeting and hyper-personalization, without waiting for engineers; Creative Power to create channel-ready assets like copy and visuals, without waiting for creatives; and Optimization Power to run campaigns that optimize themselves through automated journeys and testing, without waiting for analysts. Handoffs are eliminated.

The Positionless Marketer is a multidisciplinary thinker who deploys AI agents to go beyond traditional positions. Agents handle what used to require waiting for three different teams, eliminating the assembly line. The marketer is no longer waiting on anyone. They are thinking bigger, moving across disciplines while keeping human judgment at the center of every decision the agents make.

This is a promotion, not a replacement. But it comes with real demands. Marketers who can think strategically, not just operationally. Who can evaluate AI output critically, not just accept it. Who can take accountability for what the agents do in their name.

Gartner’s Daryl Plummer stated it directly: organizations should prioritize behavioral changes alongside technological changes as first-order priorities. The technology is ready. The question is whether the humans in the marketing organization are.

The window is narrowing

The organizations that will win the next decade of marketing are not the ones that deploy the most agents. They are the ones that build the human capability to direct them well.

Gartner’s 40% prediction is not a warning to slow down. It is a warning to be deliberate. The difference between an agentic marketing operation that compounds value over time and one that wastes budget, violates policy, and erodes customer trust is not the technology. It is the human judgment sitting above it.

Marketing teams need to face facts in the agentic AI era: the agent is only as good as the indispensable human behind it.

Yesterday — 28 April 2026Search Engine Land

LinkedIn expands Event Ads beyond its own platform

28 April 2026 at 21:36

LinkedIn is rolling out Off-Platform Event Ads, giving marketers a new way to promote events without needing a native LinkedIn Event Page.

What’s happening. The new format allows advertisers to run Event Ads that link directly to external destinations — such as webinar platforms, landing pages or livestream sites — instead of keeping traffic on LinkedIn.

This marks a shift from platform-contained experiences to more flexible, marketer-controlled journeys.

How it works. Marketers can create an Event Ad using a third-party URL, add event details like date and format, and choose from objectives including awareness, engagement, traffic or lead generation.

Clicks send users directly to the external event page, while performance metrics remain trackable in Campaign Manager.

Why we care. Until now, promoting events on LinkedIn often meant working within platform constraints, which could fragment the user journey and limit control over registrations.

Off-Platform Event Ads remove that friction by allowing marketers to tap into LinkedIn’s targeting while keeping traffic, data and conversions on their own platforms — making it easier to scale campaigns and maintain a consistent experience.

What to watch:

  • Whether this drives higher registration rates compared to native Event Pages
  • How advertisers balance LinkedIn targeting with off-platform conversion tracking
  • If LinkedIn expands similar flexibility to other ad formats

Availability. Off-Platform Event Ads are currently rolling out globally and are expected to be available to all advertisers by May 6.

Bottom line. By opening Event Ads to off-platform destinations, LinkedIn is making it easier for marketers to scale event promotion — without forcing them to build inside its walls.

The framing gap: Why AI can’t position your brand

28 April 2026 at 19:00

This article translates the framework into practitioner register. The full theoretical model, including the formal mechanics, the testable predictions, and the academic engagement with current AI-reasoning literature, is developed in the academic working paper The Framing Gap: Strategic Claim Bridging and the Limits of Generative AI Interpretation in Brand Representation (Barnard, 2026, Zenodo: 10.5281/zenodo.19857447).

Every brand holds its claims, and somewhere in the archive of its digital life, there’s proof to back them up. 

The AI assistive engine (the systems behind ChatGPT, Perplexity, and Google AI Overviews) holds that proof too, scattered across its training data and retrieval index, alongside competitors’ claims. 

The audience has a need but no vocabulary to bridge the gap between what they want and what the brand or the engine already knows.

All three lack the same thing: a frame, the interpretive context that turns scattered information into a narrative worth transmitting (for the brand), citing (for AI), and acting on (for the user).

This is where the claim-frame-prove (CFP) process comes in.

Claim and prove are mechanical work the engine can verify. Frame is the strategic move only the brand can make.

Why AI can’t make the leap your brand needs

CFP runs claim by claim, fact by fact. The brand’s full picture is built from many CFP cycles compounding: each claim framed and proven becomes a fact in the corpus, and the cumulative weight of those facts is what positions the brand to dominate.

AI can join known facts, but it can’t leap to a new one that benefits your brand. Given Facts A and B, AI can derive Conclusion C that follows logically. That’s standard inference, and the engines do it well. 

What they can’t do reliably is the move a creative human makes constantly: looking at A and B, reaching to a non-obvious J that’s commercially beneficial, and then constructing the logical bridge so the engine can use it.

From here, I’ll use the alphabet as the scale: C is the obvious commercial reach, J is the non-obvious, and Q is the most ambitious leap a brand can sensibly make.

Two examples from my own positioning make the difference concrete.

A + B → C

  • A: I coined answer engine optimization in 2017. B: I run a brand engineering company.
  • The obvious conclusion (C) the engine produces unprompted: my work is connected to AEO implementation. True, useful, but close to the facts, and the engines will get that right without my help.

A + B → J

  • From the same A and B, reaching further. J: I’m the practitioner most likely to have learned what works and what doesn’t from a decade of operational data nobody else has. 
  • Same starting facts, completely different commercial outcome, and the engine doesn’t make this leap on its own. It needs me to build the bridge for it.

That second move — from A and B to J — is strategic claim bridging: selecting which non-obvious J benefits the brand from the space of derivable conclusions, and then constructing the logical connection from accepted facts to that chosen J so the engine transmits it as fact rather than as the brand’s opinion of itself. 

Two operations packed into one move: the strategic part is choosing J, and the bridging part is making the inference watertight.

AI won’t choose what’s best for your brand

AI doesn’t choose the J that’s good for your brand. You do. That choice, and the bridge that proves it, is the work AI has no commercial stake in, and a future (more capable) AI without your stake just produces a more sophisticated version of the same problem.

Whether AI can be creative is contested ground. The narrower claim holds regardless: even when AI produces a novel-looking output, it has no commercial intent guiding which J to derive. From the same A and B, an AI could just as easily produce a damaging J as a beneficial J. It has no skin in your commercial game.

A creative marketer does both things at once: reaches imaginatively to a non-obvious J, and chooses the J that serves the brand. That’s the move AI engines can’t reach, and it’s why the frame has to come from someone placing the information online (the brand, a client, or an independent source).

The disposition that lets you see this work is what I’ve been calling “empathy for the machine,” a phrase I started using in client consulting around 2011-2012 (originally as “empathy for the beast,” retired once I got more serious about the business side of digital marketing), and first published formally in 2019.

It’s the discipline of stepping outside your own perspective to see what the machine actually struggles with. That applies to anything in SEO/AAO; here, specifically, to how the machine grounds, attributes, and synthesizes claims about your brand.

Unfortunately, brands all too often produce material aimed at human readers and assume the machine will figure out the rest. With a little empathy for the machine, brands design material the machine can use as its own interpretation (feed the beast).

This produces three different levels of brand-AI communication, each one building on the previous. 

Levels 1 and 2 are the foundations every brand needs in place. Level 3 is where framing enters, and it’s where this article aims to change your thinking.

Level 1: Scattered proof of claims

Proof exists, but there’s nothing linking it to the claim. This is where most brands sit, and it leaves the engine to perform inference over whatever it can find. 

The brand publishes Claim A on its website. Proof Z exists elsewhere, scattered across a conference program, an industry database, a Wikipedia citation, and a trade publication from four years ago. The brand assumes the engine will connect the two.

To connect them, the engine has to perform inference. Can it derive the conclusion that this brand is credible for this claim, given scattered premises across different domains, formats, and varying source authority?

There’s no copy stating the connection, no hyperlinks pointing from claim to proof, and no schema encoding the relationship.

That depends almost entirely on how confidently the machine already understands the entity, and that runs on three sub-levels.

If the machine has no confident understanding of the brand, and the proof isn’t explicitly linked, no connection happens. The proof might as well not exist.

If the machine has no confident understanding of the brand, but the proof is explicitly linked, the connection happens because the link does the work that the entity resolution couldn’t.

If the machine has a strong, confident understanding of the brand, the connection happens even without the link, because a well-resolved entity shortens the logical distance the machine has to traverse (linkless links, as I’ve called them). 

The link still adds confidence (more than one path always does), but it’s no longer load-bearing as the entity carries the work.

The implication runs through the rest of the pipeline. Entity clarity in the knowledge graph isn’t a nice-to-have sitting alongside content work. It’s the variable that decides whether your content work has to carry all the weight or almost none of it. 

Any proof that isn’t explicitly linked is missed at sub-level one, caught at sub-level two, and confidently embedded at sub-level three.

When entity understanding is weak, the result is familiar to anyone tracking AI visibility: a meritorious brand appears occasionally, and when it does, the wording is hedged, and the brand sits mid-to-low-pack. The engine did the best inference it could, and, being a responsible probability engine, it hedged. 

Worse, opportunities for inclusion are throttled across adjacent queries the fact should have pulled the brand into, because the fact was never connected to the proof that would have warranted the inclusion in the first place.

What happens when Level 1, scattered proof of claims, is done well? Brand X is infrequently mentioned, unconvincingly, as a provider of Y.

Level 2: Connected proof of claims

Here, the brand explicitly connects claim to proof through a combination of copy, hyperlinks, and schema. It also closes the inference gap by providing what the engine would otherwise have to figure out. 

The brand publishes Claim A and explicitly connects it to Proof Z, with the logical thread stated in copy, anchored by hyperlinks to the proof, and encoded in schema: a fact with a significant number of supporting pieces of evidence joined to it three ways, leaving nothing for the engine to infer.
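As a sketch of what “encoded in schema” could look like, here is a minimal, hypothetical JSON-LD fragment. Brand X, the award, and every URL are placeholders, and the exact schema.org properties you’d choose depend on the claim being made:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Brand X",
  "url": "https://www.brand-x.example/",
  "description": "Brand X is a leading provider of Y, recognized by Z.",
  "award": "Hypothetical Industry Award for Y, 2024",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Brand_X_example",
    "https://industry-database.example/brand-x"
  ],
  "subjectOf": {
    "@type": "NewsArticle",
    "headline": "Trade publication coverage naming Brand X a leader in Y",
    "url": "https://trade-publication.example/brand-x-leads-y"
  }
}
```

The point isn’t these specific properties. It’s that the claim (description), the proof (sameAs, subjectOf), and the relationship between them sit in one machine-readable place, so the engine has nothing left to infer.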

Connected proof of claims is a spectrum, not a switch. At the low end, you’ve connected some of your proof, which already beats Level 1 because the engine no longer has to figure out the connections you’ve made, but it’s still figuring out the ones you haven’t. 

If your competition has connected more of theirs, you’re still losing the comparison on the proof you left scattered. At the high end, you’ve connected all of it: every claim joined to every piece of supporting evidence, nothing scattered, and nothing left for the engine to guess at.

Most brands sit somewhere between scattered and connected simply because they’ve connected only the most obvious proof, and the AI may well have already figured the obvious ones out for itself: the links don’t teach it anything it didn’t already know.

With connected proof of claims done comprehensively for a given claim, the engine has enough corroboration to back the brand confidently, and the claim becomes fact in the corpus. Confidence transfers cleanly because there’s nothing to guess at. 

Connected proof of claims is also a great weapon for a smaller brand competing with a bigger one: a specialist accounting firm with 50 pieces of proof, all explicitly connected to a specific positioning, beats a Big 4 with thousands of unconnected pieces on that specific positioning, because connection is what turns proof into substance that the engine can transmit.

What happens when Level 2, connected proof of claims, is done well? Brand X is frequently mentioned convincingly as a provider of Y.

Level 3: Framed proof of claims

This is where framing enters, and where strategic claim bridging earns its name. 

For each claim that matters, the brand publishes Claim A, connects the proof, and then does the thing the engine can’t do (and the audience is unlikely to do either, for that matter).

It reaches the non-obvious J that benefits the brand, and constructs the bridge from A and B to J in language the engine can transmit. Not merely “we are the leader in X, demonstrated by Y,” but the frame: 

  • Why Y matters for the specific problem this audience faces.
  • What Z signals about trust in this particular market.
  • How W translates to the outcome the prospect actually cares about at the moment of decision.

A frame is a logical inference from corroborated facts, where the brand chose where the inference would land. For example: 

  • “Jason Barnard coined answer engine optimization in 2017, made dated public predictions about how the field would unfold, and those predictions came true, so his predictions about where the field is going next are credible.” 

Every component is verifiable independently, and every connection between components is logical. The J the bridge reaches to is the one I chose, not the J the engine would have generated unprompted.

One well-constructed frame makes one claim into fact in the AI’s voice. Run that across the claims that matter, and the cumulative weight is what shifts a brand from “frequently mentioned convincingly” to “almost always mentioned as the leading provider”: dominance is a stack of well-framed facts, not a single masterstroke.

The result: the AI doesn’t merely confirm, it enthuses. “Brand X leads in Y, and here is why that matters for your situation.” 

The engine transmits the frame wholesale, in the language you chose, to the audience you specified, with a reason to keep coming back. The machine didn’t generate the narrative; it relayed it warmly.

What happens when Level 3, framed proof of claims, is done well across the claims that matter? Brand X is almost always mentioned as the leading provider of Y, and dominates the space.

Each level builds on the previous: connected proof of claims requires scattered proof of claims connected, and framed proof of claims requires connected proof of claims bridged strategically.

Most brands are only halfway to framed proof of claims

The brands that think they’re at framed proof of claims are usually at framed proof of claims for humans, and scattered proof of claims for machines. Marketing and narrative work supplies frames to humans all the time, and plenty of brands do it well. 

What almost no brand does is supply frames the machine can use, and the gap between the two is where framed proof of claims is most powerful.

Some brands operate below even that and are effectively standing still: published facts at the surface, few proof connections, and no interpretive content the machine can use for any purpose. 

The signature objection from a standing-still brand is the same in every consulting room: “We already do this, our website explains who we are.” The website does that for humans. It is doing zero work to help the machine with framing.

The cost of standing still isn’t visible until a model update or two down the line. Brands that think they’re at framed proof of claims are usually investing more heavily in the wrong layer (content), while the layer that matters (framing and, ideally, joining the dots) compounds for someone else. 

The gap widens every year. If you have content that doesn’t frame effectively or join the dots with links to proof, you’re leaking huge value, and pushing through connection and framing is the best return on past investment you can make right now: you’re doing the heavy lifting for the machines, and they’ll reward you for giving them this extremely valuable context on a plate.

Three structural conditions separate framed proof of claims from marketing-and-narrative-as-usual, and missing any one collapses the brand back to connected proof of claims or lower. 

The entity has to be well-established, well-resolved, and trusted, because a frame can’t anchor to a vague brand. The underlying proof has to be connected, because most brands have fluent marketing prose on top of scattered proof, which is scattered proof of claims with prettier wallpaper. 

The bridge itself has to be strictly logical, because machines read logic first and tone second, and a logically broken bridge fails, however well it’s written.

The better AI gets, the more framing matters

Smarter AI rewards better framing rather than replacing it, and the reason is the same selection pressure SEO practitioners have been operating under since the early 2000s. 

There’s a seductive and entirely wrong conclusion to draw from rapid improvement in AI reasoning: that engines will eventually figure out how to frame brands correctly without help. The opposite is true. The engine rewards the brand whose assets reduce its own workload for the same or better result.

Search engines reward sites that are easy to crawl, render, and classify. Knowledge Graphs reward entities that are easy to resolve. AI assistive engines reward content that is easy to ground, verify, and transmit confidently. Where the engine has to choose between two roughly equivalent candidates, the candidate that demands less computation, less inference, and less guesswork wins.

Framed proof of claims is that principle operating at the bridging layer. A more capable engine encountering this level has the bridge handed to it ready-made. It doesn’t have to figure out the frame, it transmits the bridge the brand supplied, fluently and confidently, with the engine’s full reasoning capability now amplifying rather than substituting for the framing work.

A more capable engine without a frame falls back to inference over scattered evidence, which is expensive, ambiguous, and produces hedged output. Every improvement in reasoning capability makes the hedging more detailed and the noncommittal language more sophisticated, but the underlying problem isn’t capability, it’s the absence of a frame to amplify. The engine is doing more work for a worse result, and that’s the exact failure mode the engine’s selection pressure is designed to penalize.

The gap between those two outcomes is the framing gap, and it widens with every generation. Brands implementing only connected proof of claims don’t lose ground in absolute terms; they lose ground relative to brands implementing framed proof of claims faster every year, because the engine increasingly rewards assets that let it deploy its growing capability productively rather than waste it on guessing and hedging. 

The selection pressure that rewarded fast websites in 1998, clean HTML in 2003, and structured data in 2015 rewards framed proof of claims now. The mechanism of gaining a competitive advantage by reducing costs for the AI for the same or better results hasn’t changed — and probably never will.

(Chart: the framed proof of claims trajectory rises steeply and keeps climbing; the connected proof of claims trajectory rises gently and flattens; the shaded area between the two lines, labeled the framing gap, visibly widens with each generation.)

The bridge stays human

The bridge is human territory, and it stays human because it requires commercial intent specific to the brand that the engine doesn’t have. 

Everything the machine does well will get better: retrieval, connection, pattern extraction, and synthesis. None of that helps the brand whose evidence the machine can see but can’t bridge meaningfully to a beneficial conclusion.

Whether AI confirms your brand, overlooks it, or champions it comes down to one discipline: strategic claim bridging, claim by claim, fact by fact. It’s the last layer of brand-AI communication that won’t yield to automation, if it yields at all.


This is the 11th piece in my AI authority series. 

SEO isn’t just about being seen — it’s about being believed and chosen

28 April 2026 at 18:33

Wil Reynolds, founder and CEO of Seer Interactive, is challenging SEOs to rethink what success looks like in a world increasingly shaped by AI.

In his SEO Week session, “SEO is a performance channel, GEO isn’t. How do you pivot?”, Reynolds said many marketers are focused on the wrong outcomes — and producing work that people don’t believe.

Marketing isn’t just about being seen

Reynolds opened by pushing back on the idea that visibility alone is the goal of marketing.

“Marketing was never just to be seen or be visible,” he said. “You had to turn that visibility into something — believing something about your brand… And then they ultimately have to choose you.”

He described a progression that marketers need to focus on: being seen, being believed and being chosen.

“It’s how you take your time with people, and turn them from seeing you, into believing something about you,” he said.

“I got the ranking, job finished,” he added. “Job’s not finished.”

Reynolds also questioned the value of surface-level success metrics.

“I got a lot more followers, but they don’t pay you,” he said.

Low-quality marketing is everywhere

Reynolds pointed to common marketing tactics — including automated outreach — as examples of work that doesn’t create value.

“That’s not marketing,” he said, referring to spam-like SMS messages.

Those tactics made him reflect on his own past work, he said.

“I started looking at the stuff that I used to do… was that really marketing?” he said.

“Some of us are strategists. Some of us are loopholists,” he said. “You’ve got to make a decision today.”

The industry is producing ‘zombie content’

Reynolds criticized the widespread use of scaled, templated content designed primarily to rank.

He used broad listicle-style pages as an example.

“Why would you write content saying best restaurants in Minnesota when nobody that’s a human looks for the best restaurant in Minnesota?” he said.

He described this type of content as “zombie content.”

“That’s what we do,” he said, describing how marketers repeat what already ranks instead of doing something different.

He also described how many marketers approach content creation.

“I’m going to look at the top 10 and look at what they did slightly wrong… and I’m only going to do it slightly better,” he said.

Short-term tactics vs. long-term brand building

Reynolds contrasted short-term SEO tactics with long-term brand building.

“Some people like to win in decades,” he said. “Other people like to win quarter to quarter.”

He described how many teams focus on immediate results.

“What works this quarter to get my boss off my back long enough so I can survive the next quarter?” he said.

That approach leads to work that people don’t actually want, he said.

“You will never produce a thing that anyone wants if you continue to play that,” he said.

SEO success doesn’t translate to AI visibility

Reynolds shared an example involving “ethical jeans” to show how SEO and AI results can differ.

One brand ranked well in Google without being known for ethical practices, while another brand that invested in ethical production ranked much lower.

In AI-generated answers, that outcome changed.

“If that worked, if it was the same, that brand would be showing up in AI models,” he said. “And they showed up in none.”

He connected this to credibility.

“Nobody believed them,” he said. “Nobody chose them.”

Visibility without belief doesn’t lead to outcomes

Visibility alone isn’t enough, Reynolds said.

“If you have all the visibility in the world and people don’t believe you or trust you, then you’re not going to get chosen,” he said.

Visibility is only part of the process, he said.

“This visibility is just an opportunity,” he said. “That’s all it is. … It is not the job to be done.”

What people say matters

Reynolds suggested looking at platforms like Reddit to understand how people actually talk about brands.

“Go to Reddit… look at all the brands,” he said. “You find out that humans don’t believe you. And they have to pay you for you to stay in business.”

He contrasted that with how brands present themselves in content.

“Not only did they not think you’re number one — they don’t think you’re number 100,” he said.

The wrong metrics are being measured

Marketers often focus on metrics that are easy to track rather than meaningful, Reynolds said.

“We’re measuring the easy stuff to measure,” he said. “The real work is in the hard-to-measure stuff.”

He encouraged comparing visibility metrics with signals tied to outcomes.

“If your visibility is skyrocketing and your pipeline is flat, that’s bad,” he said.

Watching real users changes the picture

Reynolds described research his team conducted by observing real people using AI tools.

“When you actually watch people do the job… your eyes open so much wider,” he said.

One person typed four words, while another typed more than 100 words for the same task, he said.

He also noted that AI tools often suggest additional steps or actions beyond what users ask for, and people frequently accept those suggestions, he said.

Start with your brand

Marketers should focus on how their brand appears in AI-generated answers, especially for branded queries, Reynolds said.

“You spend all this money trying to get people to know your brand… and then you don’t want to make sure that answer’s right?” he said.

AI can shape your brand narrative

Reynolds shared an example where AI-generated responses surfaced incorrect information about his company.

“So now it’s showing up everywhere,” he said.

He described responding by publishing content to address the claim directly.

“If it’s false, then I’ve got to fight that,” he said.

There is too much content

“There’s too much content out there,” he said.

He described shifting his approach.

“I’m trying to become a curator,” he said.

Rethinking performance

Reynolds shared examples of how different traffic sources perform.

“My direct converts 1.5 times better than my SEO,” he said. “My social, five times better.”

A final question for marketers

Reynolds ended by asking marketers to rethink their priorities:

“Are you willing to sacrifice a little bit of this visibility game to be more believable?”

Why more content is no longer a reliable way to grow SEO

28 April 2026 at 18:00

One of the most dependable ways to grow organic visibility was to publish more content. Expanding into the long tail and creating pages around different variations of a topic often led to steady traffic growth.

Many SEO teams still operate with this mindset. Content calendars are built around search volume targets, and growth is often equated with how much new content is produced. The problem is the results no longer reflect the effort.

In many cases, adding more pages doesn’t lead to increased visibility and can even dilute overall performance. Large content libraries are harder to maintain, compete internally, and often result in fewer pages surfacing in search results.

The challenge is no longer producing more content, but understanding why much of it fails to contribute to visibility.

Why content volume worked for SEO

For a long time, increasing content volume was a rational and effective strategy. Search engines relied heavily on keyword matching and topical coverage, which meant expanding into the long tail created more opportunities to capture demand.

Competition was also significantly lower, and many queries had limited high-quality results, so publishing across a wide range of keyword variations often led to quick visibility gains. In this environment, covering more topics translated directly into increased traffic.

Publishing frequency also helped strengthen domain authority. Sites that consistently added new content signaled freshness and relevance, which improved their ability to compete in search results.

This approach was further amplified by programmatic SEO. By creating scalable templates and targeting large keyword sets, companies generated thousands of pages and captured traffic at scale.

Most importantly, this strategy worked because it aligned with how search engines evaluated content at the time. Expanding coverage increased the likelihood of ranking, and more pages meant more opportunities to be discovered.

However, the conditions that made this approach effective have changed. As search ecosystems have evolved and competition has increased, the relationship between content volume and visibility has become less predictable.

Dig deeper: Content marketing in an AI era: From SEO volume to brand fame

Why this model is breaking down

Content saturation

Most commercially relevant topics now have dozens of established pages competing for the same queries, many with years of accumulated links and behavioral data. 

A new page enters this environment at a disadvantage because the keyword spaces it targets are already consolidated around results with existing authority and signal history.

Diminishing returns

As sites expand into adjacent keyword variations, search engines increasingly route similar queries to the same URL rather than distributing traffic across multiple pages. 

This shows up in Google Search Console as two or three URLs splitting impressions on identical queries — neither ranking strongly because neither has consolidated authority. The intent overlap that content teams treat as coverage, Google treats as redundancy.
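That split is easy to spot in a Search Console performance export. A minimal sketch, assuming a row-per-query-per-page export with query, page, and impressions fields (the field names and the two-URL threshold are illustrative, not a GSC API):

```python
from collections import defaultdict

def find_cannibalized_queries(rows, min_urls=2):
    """Group (query, page, impressions) rows and flag queries whose
    impressions are split across several URLs on the same site."""
    by_query = defaultdict(dict)
    for row in rows:
        page = row["page"]
        # accumulate impressions per (query, page) pair
        by_query[row["query"]][page] = (
            by_query[row["query"]].get(page, 0) + int(row["impressions"])
        )
    flagged = {}
    for query, pages in by_query.items():
        if len(pages) >= min_urls:
            # sort competing URLs by impression share, highest first
            flagged[query] = sorted(pages.items(), key=lambda p: -p[1])
    return flagged

# Example with in-memory rows shaped like a performance export:
rows = [
    {"query": "crm pricing", "page": "/pricing", "impressions": "120"},
    {"query": "crm pricing", "page": "/blog/crm-cost", "impressions": "95"},
    {"query": "crm demo", "page": "/demo", "impressions": "300"},
]
print(find_cannibalized_queries(rows))
# {'crm pricing': [('/pricing', 120), ('/blog/crm-cost', 95)]}
```

Queries flagged this way are candidates for consolidation: merge or redirect the weaker URL rather than letting the two keep splitting signals.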

Changes in search experience

AI Overviews now appear across a significant and growing share of informational queries. Google has confirmed continued expansion of the feature across search types and markets. Informational content is the most affected by this shift, and it’s also the type most volume strategies produce. 

A site with a large number of blog articles is therefore more exposed than one focused on a smaller set of transactional pages. More ranked pages don’t produce proportional traffic when an increasing share of visible positions no longer generate a click.

Indexing limits

Google’s crawl budget documentation states directly that low-value URLs drain crawl activity away from pages that matter. At scale, thin or redundant content is deprioritized — meaning a significant percentage of a site’s published pages may never meaningfully enter search competition regardless of how much continues to be added.

Dig deeper: The authority era: How AI is reshaping what ranks in search

The hidden mechanics behind content saturation

What’s less understood is how content libraries behave at scale. These are system-level problems that compound over time and are difficult to reverse.

Content debt

Every page published creates an ongoing obligation. It needs to be monitored for ranking decay, updated when information changes, evaluated periodically for pruning or consolidation, and factored into crawl allocation. These costs are rarely accounted for at the point of creation.

At low volumes, this is manageable. At scale, it becomes a compounding liability. A site with 2,000 articles isn’t sitting on 2,000 assets; it’s managing 2,000 maintenance commitments that depreciate at different rates. 

Editorial resources that could strengthen existing high-performing pages are instead absorbed by keeping a growing library from becoming a liability.

The true cost of a volume-driven content strategy only becomes visible 18 to 24 months after the investment, when maintenance demands begin to outpace the capacity to meet them.

Crawl inefficiency and cannibalization

Google allocates a finite crawl budget to each domain. When a site scales content volume without proportional gains in quality or authority, Googlebot distributes that budget across a larger number of pages, many of which offer limited signal value. The result is that high-value pages are crawled less frequently, indexed less reliably, and are slower to reflect updates.

This creates a compounding problem for sites with important transactional or evergreen pages that depend on frequent re-crawling to stay current and competitive. Beyond crawl distribution, similar pages targeting overlapping intent compete for the same ranking positions internally. 

Search engines consolidate these signals rather than rewarding each page individually, meaning two pages targeting near-identical queries often perform worse combined than one authoritative page targeting both would perform alone.

Topical authority dilution

Search engines evaluate whether a site is a genuinely deep and trustworthy resource within a defined topic space. Expanding into a wide range of loosely related subtopics can erode this signal rather than strengthen it.

A site with 40 tightly interconnected, substantive pieces on a specific topic will consistently outperform one with 400 surface-level articles spread across adjacent themes. The depth and coherence of coverage within a defined area are what build the authority signal that drives durable rankings. 

Pursuing breadth at the expense of depth fragments that signal, making it harder for search engines to assign clear expertise to the domain on any individual topic, even the ones the site knows best.

Weak content and behavioral signals

Search engines use behavioral data such as dwell time, return-to-search rates, and click-through rates as quality signals at both the page and domain levels. 

When a site publishes high volumes of content that users engage with poorly, those signals accumulate and begin to affect how search engines evaluate the domain as a whole. This creates a negative reinforcement loop that’s difficult to detect and slow to reverse. 

Weak pages actively contribute to lower domain-level quality assessments, affecting the performance of pages that would otherwise rank well. More mediocre content compounds. Each low-engagement publish incrementally reduces the baseline trust that search engines extend to the domain’s better work.

The rise of citation-driven visibility

The goal of SEO has traditionally been to rank. Increasingly, the more valuable outcome is to be cited or referenced in AI-generated summaries, pulled into knowledge panels, or sourced by other publishers as a primary reference. These two outcomes require fundamentally different content strategies.

LLMs and AI Overviews are selective about which sources they draw from. The selection is weighted toward pages with strong E-E-A-T signals, high specificity, and clear authoritativeness within a defined domain. 

A site that has published hundreds of generic articles covering a topic broadly is less likely to be treated as a primary source than a site that has published fewer, more definitive pieces with clear depth and original perspective. 

Volume doesn’t increase citation probability — it may actively reduce it by signaling that the domain is a generalist content producer rather than a reliable primary reference.

The long tail is saturated

The accessible long tail that drove content volume strategies for the better part of a decade no longer exists in the same form. Between 2010 and 2020, there were genuinely underserved keyword opportunities across most industries. 

Today, in most commercial verticals, every remotely valuable query has multiple established pages competing for it, especially from high-authority domains with years of accumulated signals.

New content entering this environment doesn’t find open space. It enters a war of attrition against incumbents with advantages it can’t easily overcome. The marginal SEO return on a new article targeting a long-tail keyword is a fraction of what it was five years ago. 

The economics only justify creation when there’s a genuinely differentiated angle, a proprietary data point, or a perspective that exists on your page that other pages can’t offer. A keyword existing is no longer a sufficient reason to publish.

At scale, these factors turn content growth into diminishing returns rather than compounding gains. The library becomes harder to maintain, harder for search engines to evaluate clearly, and harder to extract meaningful visibility from — regardless of how much is added to it.

Dig deeper: How to keep your content fresh in the age of AI

How to shift from content volume to impact

The implication is to change what publishing is for.

Volume targets made sense when more pages meant more opportunities. In the current environment, they measure the wrong thing. The more useful question isn’t how much content a team is producing, but how much of what already exists is actively contributing to visibility, and what is quietly working against it.

For most sites, that audit reveals the same pattern. A relatively small number of pages generate the majority of organic traffic. A larger number generates little to none, and a significant portion actively drains crawl allocation, fragments topical authority, or dilutes the behavioral signals that stronger pages depend on.

You need to move from expansion to consolidation. Existing pages that cover overlapping intent are stronger merged than competing. Thin pages that rank for nothing and engage no one are more valuable removed than retained. 

The energy going into producing new content at volume is often better spent deepening the pages that already have authority and signal history behind them.

New content earns its place when it: 

  • Addresses something genuinely unaddressed.
  • Offers a perspective that existing pages can’t.
  • Targets an intent the site currently lacks. 

In practice, this means retiring a few default assumptions:

  • That publishing for every keyword variation is coverage.
  • That indexing is the same as performance.
  • That output volume is a proxy for strategic progress. 

None of these were ever true measures of content effectiveness. They were convenient ones.

Dig deeper: Content strategy in 2026: What actually changed (and what didn’t)

A new model for content-driven growth

The replacement for volume isn’t simply better content. It’s a different definition of what content is trying to achieve.

Depth over breadth

Focus coverage on a smaller number of topics and develop them thoroughly. A single piece that addresses a topic with specificity, original perspective, and clear authorial expertise will outperform multiple pieces covering adjacent variations of the same theme. 

Depth is what builds authority signals, drives engagement, and increases citation potential. Prioritize what the site can say with the most credibility.

Distribution as a multiplier

Allocate more effort to distribution. Publishing less creates capacity to deliver strong content to the right audiences. Distribution is a core part of SEO performance in a citation-driven environment.

Being citation-worthy

Create content that can serve as a primary source. Focus on clear points of view, verifiable expertise, and specific insights that other pages can’t replicate.

The goal is to be referenced in AI-generated summaries, cited by other publishers, and included in the knowledge systems search engines rely on.

Dig deeper: Content alone isn’t enough: Why SEO now requires distribution

The uncomfortable truth

Sites that rely on frequency and broad coverage are being outperformed by sites that are clearly authoritative on a defined topic, consistently useful to a specific audience, and structured in a way that search systems can evaluate with confidence.

Prioritize depth, clarity of expertise, and consistency within a focused topic area. Treat each published page as a long-term asset that requires ongoing maintenance, evaluation, and improvement.

The content factory model is no longer effective. The approach that replaces it requires more effort, stronger editorial standards, and a higher bar for what gets published.

How to measure paid social’s impact on PPC

28 April 2026 at 17:00

If your paid social campaigns aren’t converting, you may be undervaluing their impact. Your brand’s exposure on social media can influence other parts of your marketing that platform metrics don’t capture.

Here’s how to design and measure a test to understand how paid social influences your other marketing channels, including PPC.

Step 1: Determine your hypothesis

Start with what you want to learn, then define a hypothesis you can realistically evaluate with your data.

For example, this is a common hypothesis for measuring paid search lift from social traffic:

  • Search lift hypothesis: Increasing spend on social media will increase brand search volume and overall PPC CTRs.
  • Logic: 
    • Social ads build brand awareness. As more people become familiar with our brand, they will search for it more often when making research and purchase decisions. 
    • As more people are exposed to our brand, they will increasingly click on our PPC ads regardless of their search term (i.e., increasing non-brand and brand CTRs).
    • People exposed multiple times to our brand will have a higher trust factor in our products, and therefore, our conversion rates will increase. 
  • Measurement: 
    • Impression and click volume for our branded terms.
    • CTR changes for brand and non-brand terms.
    • Conversion rate changes for brand and non-brand terms. 

Your hypothesis could have a different scope, such as measuring paid and organic lift from social spend or an increase in direct traffic. 
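The measurement side of the hypothesis reduces to a few ratio comparisons. A minimal sketch, assuming hypothetical before/after brand-term numbers pulled from a search ads report (all figures are illustrative):

```python
def ctr(clicks, impressions):
    return clicks / impressions

def conv_rate(conversions, clicks):
    return conversions / clicks

# Brand terms: before vs. during the increased social spend (illustrative).
pre = {"impr": 10_000, "clicks": 800, "conv": 64}
post = {"impr": 12_500, "clicks": 1_100, "conv": 99}

impr_lift = post["impr"] / pre["impr"] - 1
ctr_lift = ctr(post["clicks"], post["impr"]) / ctr(pre["clicks"], pre["impr"]) - 1
cr_lift = conv_rate(post["conv"], post["clicks"]) / conv_rate(pre["conv"], pre["clicks"]) - 1

print(f"Brand impression lift: {impr_lift:.1%}")
print(f"Brand CTR lift: {ctr_lift:.1%}")
print(f"Brand conversion-rate lift: {cr_lift:.1%}")
```

The same three ratios would be computed separately for non-brand terms to test the second and third parts of the hypothesis.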

Step 2: The test

The next step is to set up the test parameters. Generally, measuring before and after a change is a mistake, as seasonality or other factors can affect your test results.

The most common test setup is a geographic split. In this test, we’ll increase social spend for only a set of geographies. Then we’ll examine the PPC data for the geographies where we ran the test and compare them with areas where we did not.

As you choose geographies, you’ll want to control for other variables that may affect your test. Here are some common issues that companies have run into and need to control for in their tests and measurements:

  • You sponsor a sports team, and they’re playing during your test.
    • If the game is regionally televised, this can dramatically affect your test results.
  • You’re running TV commercials in only certain regions.
  • You choose experimental geographies with many out-of-region commuters, such as New York City, and include New Jersey and Connecticut in your control group.
    • In these instances, grouping a region and its surrounding commuter areas together, and placing other cities with similar characteristics, such as Chicago and Philadelphia, in a different group, can help balance these tests. (Note: in this example, we’re splitting New Jersey in half.)
  • Seasonal or local events. Large conferences, festivals, or major weather events can affect your data.

Your control and experimental groups should be statistically similar across factors such as income levels and urban versus rural makeup.

As you set up and measure your test, consider your budget. If you increase social spend and expect higher clicks and conversions for your PPC campaigns, ensure you have the budget to capture the increased demand.

Examine your impression share and impression share lost to budget before and after the test to ensure budget limits won’t severely impact your results.
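The core readout of a geographic split is a difference-in-differences: compare the change in the test geos against the change in the control geos, so seasonality shared by both groups cancels out. A minimal sketch, assuming hypothetical brand-search volumes per matched geo group:

```python
# Hypothetical brand-search volumes for matched geo groups (illustrative).
before = {"test": 18_000, "control": 17_500}
during = {"test": 22_500, "control": 18_200}

test_change = during["test"] / before["test"] - 1
control_change = during["control"] / before["control"] - 1

# Difference-in-differences: strips out seasonality shared by both groups.
incremental_lift = test_change - control_change

print(f"Test geos:    {test_change:+.1%}")
print(f"Control geos: {control_change:+.1%}")
print(f"Estimated incremental lift from social spend: {incremental_lift:+.1%}")
```

If the control group also moved sharply, that is a signal to check for the confounding variables listed above before trusting the lift estimate.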

Dig deeper: Why PPC tests in 2026 call for nuance, not winners

Step 3: The measurement

Measurement can range from very simple to extremely complex.

At a simple level, you can compare platform data to see how your data changed. In this case, a Google Ads report shows how pausing social spending and influencer campaigns across all social platforms (TikTok, LinkedIn, Facebook, YouTube, etc.) affects performance.

For this test, pausing social spending yielded mixed results for conversion rates. As brand searches decreased, conversion rates in some regions increased, while in others they fell.

However, what was consistent was a dramatic drop in conversions.

You can get more sophisticated in your testing. Depending on your analytics setup, some companies want to measure touchpoint differences for their conversions. Others will want to measure overlap rates between social and paid search visitors, or examine attribution touchpoints and models.
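The overlap-rate measurement mentioned above can be as simple as a set intersection, provided you can join visitor IDs across platforms. A minimal sketch with illustrative IDs:

```python
# Hypothetical visitor IDs exported from each channel's analytics.
social_visitors = {"u1", "u2", "u3", "u4", "u5"}
ppc_visitors = {"u4", "u5", "u6", "u7"}

overlap = social_visitors & ppc_visitors
# Jaccard overlap: shared visitors as a share of all visitors in either channel.
overlap_rate = len(overlap) / len(social_visitors | ppc_visitors)

print(f"{len(overlap)} shared visitors; overlap rate {overlap_rate:.0%}")
```

In practice the hard part is the ID join, not the math: cross-device and cross-platform identity resolution is what makes this measurement complex.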

Before you set up your test, ensure you have the measurement capabilities needed to understand and interpret the results.

Get the newsletter search marketers rely on.


Step 4: Evaluation beyond the test criteria

As you run various tests, you want to measure the results against your hypothesis. However, it’s useful to list other variables worth evaluating beyond your test criteria.

This is where Search Console, analytics tools, your CRM, internal data, and even Google Ads’ paid and organic report can come into play.

In one example, a company was running a test to see whether pausing several advertising channels, from social media to TV ads, would dramatically change its brand search volume. They hypothesized that their brand was so well known in the marketplace that they could cut back on several forms of brand advertising and reallocate that budget to other channels and non-brand advertising.

While the simple paid and organic report in Google Ads won’t tell you the full story about in-store revenue and direct traffic changes, it can serve as a signal to form an overall picture of a very complex test.

They had recently launched a new product line, and that line continued to see a large increase in traffic during the test. However, their most common brand terms saw significant declines during the test. This was a year-over-year comparison across a set of geographies, rather than a period-to-period comparison, to help correct for the increase in holiday traffic that would have occurred during the previous period.

The results were by far the most dramatic I’ve ever seen in this type of test, to the point that it was clear other variables had to be in play.

This takes you to the sniff test. Rely on your experience with data to make common-sense adjustments. If you look at the data and it just doesn’t seem right, ask yourself whether it makes sense, whether it’s a statistical quirk (common with small data volumes), or whether other unforeseen variables are in play.

In this example, no one believed the results should be this dramatic. The company stopped running the test and began an internal evaluation of its organic presence, including Google’s recent updates, changes to AI Overviews, AI engagement, and other factors affecting its web presence beyond its usual marketing channels.

Dig deeper: Are your PPC ads still authentic in the age of AI creative?

What to do with your social impact tests

The test setup is simple:

  • Determine your hypothesis.
  • Decide how you will test. The easiest setup is a geographic split.
  • Make sure you can measure the results.
  • Launch the tests.
  • Evaluate the metrics for your hypothesis.
  • Examine other metrics for insight or additional testing ideas.

For some companies, Facebook and other social channels are their top conversion channels, and these tests won’t be applicable. For others, social media advertising results often look poor when evaluated in isolation.

In these examples, the companies were already running many social media campaigns, so the test was to reduce social media spend. If you don’t run much social media, your test will be to increase your social media spend to see how it affects your data.

I’ve seen a lot of these tests, and the results are highly inconsistent across companies. Many companies will increase their social media spend and see little change in their data. Others will increase their spend and see a nice lift in overall performance. These are tests you need to run yourself, as your results will vary by company.

Running geographic split tests in your social media campaigns and then measuring the results on paid or organic search traffic can give you insights into how to leverage social media campaigns for other marketing channels.

YouTube testing new search experience, Ask YouTube

28 April 2026 at 16:17

Google announced they are testing a new “conversational search experience to complement how you already search on YouTube.” It is called “Ask YouTube” and it lets you “dive deeper into the topics you’re curious about in a more interactive way,” Dave from YouTube wrote.

What it looks like. YouTube’s announcement includes a GIF of the feature in action.

How to try it. If you want to try it out, go to youtube.com/new and opt in.

This experiment is currently available for YouTube Premium members 18+ in the US who opt-in. Google is working on expanding the experiment to non-Premium users in the future.

What it does. Dave from YouTube posted this example:

“If you’re in the experiment, you can try it out by selecting “Ask YouTube” in the search bar. For example, you can ask for help planning a 3-day road trip from San Francisco to Santa Barbara, and you’ll get a structured, step-by-step itinerary instead of a list of videos. The response will bring together a new mix of long-form videos, Shorts, and informative text featuring local tips and must-see stops. You can ask follow-up questions like, “where can I find good coffee?” to explore local spots along your route. We’ll surface videos and relevant video segments, accompanied by their titles and channel details, to make it easy to discover new creators and jump into the most helpful content from your search.”

Why we care. AI search is creeping into every search interface across Google’s properties. YouTube is no exception. Expect more and more AI search experiences in more Google surfaces and expect them to change and adapt over time.

You can find more coverage of this across Techmeme.

New to PPC? 7 tips to build skills and confidence fast

28 April 2026 at 16:00

Understanding the ins and outs of paid media can seem like an overwhelming process when you’re first entering the field. As AI has rapidly changed ad platforms in recent years, keeping up can feel challenging.

Thankfully, you’re not alone. You’re part of a supportive industry with a wealth of content and knowledge to share. Here are seven tips to help you learn and become a more confident PPC manager.

1. Be curious

Curiosity is foundational to growth in PPC. You’ll learn best by taking initiative to understand ad platforms, how campaigns are structured, and what options are available on the backend. Of course, be careful about tweaking settings you’re not familiar with, but don’t be afraid to dig in on your own.

If you’re part of a team, ask your colleagues why they use a particular setup. If you’re not familiar with a platform and have a team member who frequently uses it, ask if they can walk you through it.

2. Absorb content and find community

There are countless industry professionals producing content to teach PPC. Whether you learn best from reading, listening to podcasts, or watching videos, you’ll find options that fit your style. Looking up the authors of articles on this site is a great starting point to build a list to follow.

Block out time in your schedule for education. Even setting aside a couple of hours a week helps you gain perspective from others in the industry and keep up with constant platform updates.

The PPC industry has long been known for its welcoming, supportive community. Seek out individuals and organizations who are actively sharing, and don’t be afraid to engage with them on social media. Conferences are also a great way to network with other PPC professionals and sometimes discuss their approaches in a more informal setting.

A brief word of caution: Vet recommendations you see from others against your own experience in ad accounts. Just because a “best practice” worked for one account doesn’t mean it’ll work for every account. Depending on the tactic, you may want to test it as an experiment to measure impact, or compare results before and after.

Dig deeper: What 10 years of PPC testing reveals about breaking best practices

3. Take industry certifications with a grain of salt

While ad platform certifications can serve as a starting point for demonstrating basic proficiency, be cautious about relying on them as definitive proof of PPC expertise.

Certifications often lean heavily on platform-recommended best practices, which may conflict with tactics that align with a brand’s goals. Academic knowledge can’t match the insight gained from practical, hands-on experience in accounts.

4. Don’t chase what’s new and shiny

While I’d encourage staying aware of ad platform updates and current tactics, I’d discourage implementing a new campaign type or expanding into a new platform just because it’s new. Make sure you have sufficient budget and a clear reason to test.

Additionally, avoid making adjustments without a rationale. If campaigns are performing and driving qualified leads or sales, keeping the status quo may be best.

Basic marketing principles still apply, such as knowing your target audience, addressing their problem with a solution, and presenting a clear call to action. Focus on aligning your channel choices with these goals, and the rest will follow.

Dig deeper: 10 keys to a successful PPC career in the AI age


5. Translate jargon for stakeholders

As you become more embedded in PPC, you may naturally use industry terms and acronyms such as CTR, CPC, ROAS, and CPA. However, these metrics are often meaningless to stakeholders who aren’t immersed in your world. One of the most vital skills for a paid media professional is translating abstract metrics into language that connects with what stakeholders care about.

For instance, I often default to “conversions,” even though the term can be ambiguous in reports. Referencing the actual action being tracked (such as account open, form fill, or purchase) is more concrete and ties directly to what stakeholders are tasked with driving.

6. Use AI, but don’t neglect the human touch

AI is an inevitable part of a future-forward career, and ignoring it will be detrimental to career development. However, don’t lose the human oversight that sets a seasoned PPC practitioner apart.

When writing ad copy, LLMs can offer a strong starting point and help refine wording. But don’t rely on AI to produce all your copy, as it may pull irrelevant content from your site (or elsewhere), and may not reflect your brand’s voice and perspective. Also, learn where AI can save time on “busy work” tasks, such as reviewing search terms and placements for exclusions, while still reviewing the output for accuracy.

While most ad platforms default to automated campaign setups and encourage a hands-off approach, a standout PPC manager understands the levers they can pull to maintain control when needed. Examples include:

  • Setting target bids or cost caps.
  • Excluding irrelevant keywords, placements, and audiences.
  • Pinning headlines and descriptions in responsive search ads.
  • Restricting geographic targeting to avoid unwanted locations.
  • Tailoring creative to specific demographics.

Dig deeper: The new PPC playbook: From media buyer to profit engineer

7. Don’t change things for the sake of showing activity

One common temptation for both new and seasoned paid media practitioners is to make changes just to appear busy. The motivation may be valid, as you want to prove to your client or boss that you’re attentive to PPC account management.

However, particularly with campaigns that rely heavily on data to drive automated bidding, too many changes in a short period are often detrimental. Be sure to allow for data significance and enough time before pausing ads and keywords or tweaking bid targets.

If you can show positive performance trends and provide readouts on which campaigns and channels are driving those results, you can validate your decisions to take or not take action when presenting to stakeholders.

Keep learning, start sharing

Becoming a confident PPC manager requires mastering a blend of technical, interpersonal, and marketing skills. As you build your knowledge, look for opportunities to share what you’re learning with peers. It’s one of the fastest ways to reinforce what you know and keep improving.

Dig deeper: 7 power moves to accelerate your PPC career

Where PPC and SEO teams lose control in branded search by Bluepear

28 April 2026 at 15:00

Branded search is often treated as predictable and easy to manage. In practice, it isn’t.

PPC teams see rising CPC on brand terms. SEO teams see declining branded CTR, even when rankings hold. These issues are usually investigated separately, with different dashboards, hypotheses, and fixes.

Both signals often stem from changes within a single SERP. What look like two separate problems are, in reality, one shared environment reacting to shifts in competition and visibility.

The issue isn’t a lack of data. Most teams already have basic reports and brand monitoring tools, including PPC and SEO platforms. The problem is how the data is used. 

To understand what’s happening in branded search, teams must manually piece signals together. This takes time, doesn’t scale, and delays decisions.

Here’s why that fragmentation is harmful and what to do about it.

What’s actually happening in branded search

Branded search is often described in terms of channels — paid and organic. For users, that distinction doesn’t exist.

A single SERP brings together multiple layers:

  • PPC ads 
  • Competitor ads or comparison pages
  • Organic results, including brand-owned pages
  • Affiliate listings promoting the same brand
  • Review platforms and aggregators 

All of these elements appear at once, within the same decision-making space.

From a SERP analysis perspective, this isn’t a set of isolated placements. It’s a dynamic environment where each element influences the others. A competitor ad above your organic result can reduce CTR. An affiliate listing can compete with your paid campaign. A review page can shift user intent before a click.

In practice, this creates a mismatch. 

For users, branded search is a single page. Inside the company, it’s split across workflows and handled by different functions.

PPC focuses on bids and efficiency. SEO focuses on rankings and organic traffic. Affiliate activity is often tracked separately, if at all. Competitor tracking may exist, but usually within a single channel. The result is a fragmented view of what is, in practice, a shared space.

Understanding what’s happening in branded search often requires manual effort. The data is there, but building a complete, up-to-date view of the SERP on a regular basis is time-consuming and hard to scale. That makes it difficult to understand how these elements interact — and even harder to respond to changes as they happen.

What PPC teams see (and often miss)

From a PPC perspective, teams focus on these signals:

  • Brand CPC starts to rise.
  • More players appear in the auction.
  • Branded campaigns become less efficient over time.

At first glance, this suggests increased competition. The typical response is to adjust bids, defend impression share, or refine targeting. All of it makes sense within paid media.

But this is where context changes everything.

What PPC teams don’t always see is who’s driving that competition. 

Not every new entrant in the auction is a direct competitor. Often, it’s affiliate activity — partners bidding on branded terms outside agreed-upon rules. Without deeper competitor tracking, these cases can look identical while requiring different actions.

There’s also the organic layer. Changes in SERP structure — more ads, different layouts, stronger third-party rankings — can directly affect paid performance. Even if the campaign setup stays the same, the environment shifts. Without ongoing SERP analysis, these changes are easy to miss.

In many cases, brands aren’t just competing with others — they’re competing with themselves. Over 40% of advertised pages already rank #1 organically (Ahrefs, 2025).

PPC teams rarely see the full page in context. They see auction data, metrics, and reports — but not always how their ads appear alongside organic results, affiliates, and other placements in real time.

But beyond missing context, there’s a more practical limitation.

Ad platform reporting rarely explains what changed. It shows performance shifts — but not how the SERP looked to users, who appeared alongside the ad, or how placements were arranged.

This creates a gap.

Competitor tracking without context doesn’t explain the situation — it only signals change. Without broader SERP-level brand monitoring, PPC teams often optimize on partial visibility, reacting to symptoms while the root cause must be reconstructed manually.

What SEO teams see (and often miss)

From the SEO side, branded search issues tend to surface differently.

The most common signals look like this:

  • Branded CTR starts to decline.
  • Rankings remain stable, often still in top positions.
  • SERP appearance shifts — new elements, richer features, or different page layouts.

On the surface, it looks like an SEO problem. The natural response is to review snippets, adjust metadata, or check for technical or content issues.

But in many cases, performance drops aren’t driven solely by SEO factors.

SEO teams generally know that paid activity, competitors, and affiliates can influence branded search. The challenge isn’t awareness — it’s consistent visibility over time.

To understand what changed, teams need to see how the SERP looked at a specific moment:

  • Which ads appeared and where.
  • Whether competitors or affiliates were present.
  • How organic results were positioned in context.

This isn’t what standard SEO workflows are built for. Teams often have to manually check results, compare snapshots across tools, or rely on incomplete data.

Then there’s the SERP itself. Modern branded SERPs aren’t static. Layout changes, added modules, and mixed result types can significantly affect click behavior.

Without consistent SERP analysis, it’s hard to isolate the cause. As a result, SEO teams may keep optimizing — and see no stable results.

Why PPC and SEO issues are actually connected

At a glance, PPC and SEO issues in branded search may look unrelated — different metrics, dashboards, and teams. But when you look at the SERP as a whole, the connection is hard to ignore.

Studies show this overlap isn’t an edge case. Nearly 38% of websites advertise on keywords where they already rank in the top 10 organically (Ahrefs, 2025). In branded search, the overlap is even higher.

That means both channels operate in the same environment — and compete for the same user attention.

Changes within that environment rarely affect just one side:

  • Increased ad presence can push organic listings lower or draw clicks away.
  • Aggressive bidding (from competitors or affiliates) can raise CPC while also reducing organic search visibility.
  • New entrants in the SERP can affect both paid efficiency and organic CTR simultaneously.

In this context, it’s not unusual for PPC performance to decline while SEO metrics shift in parallel. These aren’t isolated issues — they’re different reflections of the same underlying change. Yet they’re rarely analyzed together.

The real problem isn’t visibility — it’s fragmentation.

Most teams already have access to data. Specialized tools make SERP analysis, competitor tracking, and brand monitoring possible. The limitation isn’t what can be seen, but how it’s used.

PPC and SEO operate in separate systems — different platforms and reporting environments, KPIs, and workflows. To understand what changed in branded search, teams must align manually by comparing reports, checking SERPs, validating assumptions, and sharing findings across functions.

As a result, insights are delayed, alignment lags behind SERP changes, and decisions are made with incomplete or outdated context.

How to improve branded search performance

Most teams don’t miss the signals — a spike in CPC, a drop in CTR, unexpected competitors in the auction. These changes rarely go unnoticed. The challenge comes next: confirming what happened and deciding how to respond.

This is where branded search performance slows. Teams dig through separate reports, trying to reconstruct what the SERP looked like at a specific moment. By the time the picture is clear — if it ever is — the window to react has already passed.

Improving performance here isn’t about adding more data. It’s about changing how it’s collected and used. 

With the right setup, SERP analysis becomes continuous instead of manual. Changes in branded search are captured automatically, including competitor and affiliate activity that might otherwise require manual checks, post-fact validation, or go unnoticed.

Tools for branded search monitoring such as Bluepear provide: 

  • A unified view of the SERP at a specific moment.
  • Automated alerts when meaningful changes occur.
  • Pre-collected, timestamped evidence that removes the need to manually gather screenshots or reconstruct past states.

Instead of spending time collecting screenshots, comparing reports, and reconstructing what happened, the information is already structured.

This shifts the process from reactive to operational. Instead of investigating issues after the fact, teams receive a clear signal or a complete case.

This creates a reliable record of what actually happened:

  • When a new player entered the SERP.
  • How placements shifted over time.
  • Where potential violations or conflicts appeared.

Instead of scattered evidence and manual reconstruction, teams get structured, ready-to-use context.

Reporting becomes simpler. Insights can be shared across PPC, SEO, and affiliate teams without rebuilding context each time, reducing internal alignment time. Most importantly, decisions can be made faster.

With Bluepear, brand monitoring and competitor tracking become continuous. Teams receive structured signals instead of raw fragments and can act without rebuilding the situation from scratch.

To see how Bluepear can improve your workflow, create an account and start your free trial.

Final takeaways

PPC and SEO teams don’t lack data — they interpret different signals from the same SERP. But these signals are connected. They’re shaped by the same changes in the search environment, even if they appear in different reports.

When SERP analysis is fragmented, it’s harder to see the full picture — and even harder to act quickly.

What makes the difference is not more data, but better coordination:

  • Continuous brand monitoring instead of occasional checks.
  • Shared visibility across PPC, SEO, and affiliate teams.
  • A consistent view of the SERP, not separate channel reports.

When branded search is managed holistically, teams don’t just react to performance changes — they understand what drives them and respond with clarity.

To simplify how your team tracks and responds to branded search changes, start using Bluepear to automate monitoring, capture SERP changes, and centralize evidence in one place.

Ginny Marvin on AI in search, PPC trends, and Google Ads evolution

28 April 2026 at 01:46

Ginny Marvin didn’t get into PPC because she had a grand plan.

She got into it because she was ready to start again.

After years working in print publishing and ad sales marketing, Marvin found herself at a career pivot point. A startup magazine she had helped launch folded, and she decided it was time to move fully into digital.

That meant going from marketing director to entry-level applicant.

  • “I don’t know what I’m doing, so I’ll start from the beginning,” she recalled.

That reset eventually led her into search marketing, Search Engine Land, and later Google, where she is now Google Ads Liaison.

In this interview, Marvin looks back at how paid search has changed, what marketers still misunderstand, and why the next phase of search will reward curiosity more than control.

PPC clicked faster than SEO

Marvin started on the SEO side at a small agency.

Then the paid search manager went on holiday.

She took over the campaigns temporarily — and immediately saw the appeal.

Coming from print, where measurement was slow or sometimes impossible, PPC felt almost instant. You could launch, spend, measure and see action quickly.

That speed changed everything.

For Marvin, PPC made the connection between marketing activity and business results much clearer than SEO did at the time.

Google won by moving faster

When Marvin entered the industry, Google wasn’t the only serious search player.

Yahoo was still a major force, and Microsoft was part of the mix. But over time, Google pulled ahead.

Marvin believes the difference was focus.

Google kept improving the product, launching new features and iterating faster than competitors. It became increasingly clear that Google was building around advertiser needs and pushing the industry forward.

Early PPC was painfully manual

Today’s PPC marketers may complain about manual work, but the early days were on another level.

Campaigns were built around huge keyword lists, endless permutations and highly granular structures. Advertisers spent hours creating keyword combinations and negative keyword lists.

It gave marketers a sense of control, but it also forced them to build campaigns around how the platform worked — not necessarily how the business worked.

That, Marvin said, is one of the biggest changes in paid search: campaigns now start more naturally with goals.

Search Engine Land became the industry’s newsroom

When Search Engine Land launched, Marvin was still early in her search career.

But it quickly became the place people went for search news, updates and expert analysis.

What made it valuable wasn’t just the reporting. It was the mix of fast news, contributed columns and practical insight from people doing the work.

For Marvin, Search Engine Land played a major role in professional growth across the industry because it made knowledge easier to share.

The search community has always been different

One thing Marvin repeatedly came back to was the generosity of the search community.

From the early days, practitioners shared what they were testing, what worked, what failed and what others should watch for.

That culture of learning helped define the industry.

It also shaped Marvin’s own career, both as a journalist at Search Engine Land and now in her role at Google.

AI is not as new as people think

Marvin believes one of the biggest misconceptions about AI in search is that it suddenly appeared.

Machine learning has been part of Google Ads for years, powering changes such as close variants, Smart Bidding and automation.

What changed recently was the speed of progress driven by large language models.

AI did not arrive overnight. But LLMs accelerated the shift dramatically.

Consumer behaviour is changing search

For Marvin, the biggest change is not just what Google can do.

It is how people search.

Queries are getting longer and more complex. People are searching through images, voice and multimodal inputs. Search can now understand intent without relying only on typed keywords.

That means advertisers need to think beyond the final conversion moment and understand the full customer journey.

Success still means business outcomes

Marvin does not think the definition of success in search has changed.

It still comes down to business outcomes.

What has changed is marketers’ ability to measure those outcomes and connect campaign activity to business goals.

That makes data, measurement and first-party signals more important than ever.

The next 20 years will reward curiosity

When asked what kind of marketer will succeed in the next phase of search, Marvin pointed to curiosity.

The best advertisers will be those who keep learning, watch how customers behave and adapt before they are forced to.

She compared it to mobile, where consumers moved faster than advertisers did.

The same thing is happening with AI.

PPC marketers say they love change — until it happens

Marvin’s reality check for the industry was simple.

PPC marketers often say they love change, but many resist every major shift when it arrives.

Her advice is to take a longer view.

Many of the changes that feel sudden have actually been building for years. Automation, AI, broader intent matching and full-funnel campaigns have all been moving in this direction for a long time.

Her advice: start experimenting

Marvin’s message is not that every new feature will work immediately.

It is that marketers should not write things off forever because they tested them once months or years ago.

Platforms evolve quickly. Capabilities improve. What failed before may work differently now.

For advertisers still holding tightly to old ways of working, the next phase of search will be harder.

What she is proudest of

Looking back, Marvin said she is proud of the search community itself.

Its willingness to share, learn and support each other has made the industry stronger.

She also sees her role, both at Search Engine Land and Google, as being a resource for marketers.

As she put it, communicating “by marketers, for marketers” has always mattered.


New data: 77% use AI to shop. Nearly 1 in 3 won’t let it spend.

27 April 2026 at 22:27

Editor’s note: This research was conducted by Exploding Topics, the trend discovery platform owned by Semrush, and is republished here with permission. Data is drawn from a proprietary survey of 1,009 US consumers. Full methodology appears at the end of this article.

More than three in four consumers have used AI to help with shopping or purchasing decisions in the last six months, according to new research from Exploding Topics. 

AI tools like ChatGPT and Google Gemini have been absorbed into weekly shopping routines. The technology has rapidly become a staple of product research and price comparison, for everything from clothing to groceries.

But at the same time, we found significant and widespread discomfort about the next chapter in AI commerce. 

The very same people who are eagerly embracing AI to shop often draw the line at empowering AI to spend. “Skepticism” is the prevailing attitude about tools like ChatGPT’s short-lived Instant Checkout, while even something as simple as storing card details with an AI chatbot makes consumers uncomfortable.

Looking ahead, shoppers expect AI to become ever more prominent in their buying habits. But this research highlights some significant barriers that will need to be overcome before that can truly happen.


Download the summary of our findings.


Fast facts

  • 77.6% of consumers have used AI to shop in the past six months, with 43.21% using it weekly or more
  • Most shoppers are using AI for product research (68.5%) and finding the best price (55.19%)
  • Among those who use AI to shop, ChatGPT is the most popular tool (77.56%), followed by Google Gemini (58.21%)
  • 68.07% of consumers who have tried using AI to help with their shopping have increased their usage in the past six months
  • AI has directly influenced 68.64% of users to buy something they otherwise wouldn’t have purchased
  • “Skeptical” (41.08%) and “suspicious” (33.10%) are the leading attitudes toward AI tools that can complete orders on your behalf
  • More than half of all consumers would be at least somewhat uncomfortable with letting AI tools store their card details
  • The mode amount a consumer would trust AI to spend autonomously is $0, while the median cap is $50
  • 55.83% of consumers expect AI to play a bigger role in how they shop in five years’ time

See this year’s biggest emerging trends.

Part 1: The AI Commerce Surge

AI is an increasingly ubiquitous shopping tool

To get a baseline, we first asked respondents how they would describe their use of AI generally. This yielded a striking response. 

Of the 1,000+ people we surveyed, almost half reported using AI frequently. But more eye-catchingly, only 9.81% had never used an AI tool.

High adoption was a recurring theme when we asked about AI commerce specifically. 43.21% of consumers are using AI to help with shopping at least once a week, with well over half of shoppers using the technology at least monthly.

Survey question: How often have you used AI to help with shopping or purchase decisions?

Among “frequent” AI users, only 2.76% had never used the technology to help with shopping or purchasing decisions. That figure rose to 22.4% overall, leaving over three-quarters of consumers who have at least tried using AI for shopping.

Even among the over-60s, more than half (52.78%) reported using AI for shopping in the last six months. 18.75% of them use AI shopping tools weekly or more.

We put the next set of questions exclusively to the 77.6% of respondents who have adopted AI commerce, in order to better understand how they are using AI to shop.

How are consumers using AI to shop?

“AI commerce” is a broad term that can capture a wide range of consumer activities. We wanted to find out exactly how people are incorporating AI into their purchases.

Product research emerged as the leading use case, adopted by more than two-thirds (68.5%) of shoppers. More than half (55.19%) also reported using AI for finding the best price/deals.

Survey question: Which shopping tasks have you used AI for?

Deciding between brands, getting gift ideas, and summarizing customer reviews were all also reasonably common (>35%).

Interestingly, shoppers also have clear preferences about what they will buy with the help of AI. Consumers are most likely to employ AI assistance when browsing for clothing or technology.

Survey question: What have you used AI to shop for?

This data really underlines how embedded AI has become in shopping routines already. 44.62% have used it for something as mundane as grocery shopping. 

More “exceptional” and potentially expensive purchases, where you might intuitively assume that consumers would turn to AI for a bit of extra assistance, tend to be less common use cases. Furniture (29.62%) and jewelry (28.08%) were among the least popular responses.

The AI shopping tools of choice

ChatGPT still enjoys the largest market share across most consumer AI functions, and shopping is no different. 77.56% of shoppers use ChatGPT when they want AI assistance.

The prevalence of Gemini is perhaps more surprising. 58.21% of shoppers reported using Google’s AI, well over twice as many as the next most popular tool.

Survey question: Which AI tools have you used for shopping purposes?

Of course, Google has embedded Gemini in AI Overviews and AI Mode. Its search engine has long been the go-to tool for manual product research, so perhaps it should be no great surprise that its AI has now captured a lot of the same market.

But intriguingly, Gemini usage is actually most saturated among people who use AI to write shopping lists. 75.86% of AI list-writers report using Google’s tool to shop (compared to 60.49% of those who use AI for product research).

That’s not to say Gemini is necessarily being used to write these shopping lists. But the expected skew toward Gemini among those who reported using AI for product research simply did not materialize, suggesting that shoppers may well have a genuine preference for Google’s AI beyond just its embedded search features.

The fact that fewer than one in five people use Claude for AI commerce was also notable. Anthropic’s tool actually overtook ChatGPT in an enterprise context last year, but its adoption for everyday consumer tasks still trails competitors’.

Grok remains the most highly gendered tool. It’s used by 31.98% of male shoppers, but just 15.16% of women.

Survey question: Preferred AI shopping tools by gender

Across the board, men were more likely than women to use AI tools for shopping. However, ChatGPT usage was close to equal (78.05% of men vs 77.51% of women).

Evolving AI shopping habits

It is remarkable how quickly AI has embedded itself as a standard shopping companion. Among those who are now using the technology, 39.1% say they use AI for shopping “much more” than they did six months ago.

Survey question: How has your use of AI tools for shopping changed in the last six months?

A further 28.97% of consumers are using AI tools for shopping “a bit more” in the last half-year. Only 6.02% have decreased their usage.

Middle Atlantic residents stand out as the keenest adopters. Almost half (49.04%) are using AI for shopping much more in the past six months, and close to eight in 10 (78.98%) have at least somewhat increased their usage. West North Central is the least enamored with the technology, with over 13% using AI for shopping less frequently than they did previously.

Nationwide, the impact of the technology on purchasing habits is stark. 92.54% of consumers say it is at least possible AI has directly influenced them to buy something they wouldn’t have otherwise purchased.

Survey question: Has using AI ever directly influenced you to buy something you wouldn't have otherwise purchased?

Almost seven in 10 (68.64%) can definitely remember being directly influenced to make a purchase. That includes 36.89% who say they have been influenced “many times” by AI.

This trend is most pronounced among the highest earners. 61.9% of consumers with a household income of $125,000 or higher have made AI-influenced purchases “many times,” and only 13.19% cannot recall any such purchase.

Survey question: AI-influenced purchasing habits by household income

Why the increased uptake?

Although the speed of AI shopping adoption is startling, the reasons behind it are ultimately no mystery. Quite simply, the majority of people who have tried using AI tools have found that they make product research easier.

37.18% say that AI makes shopping research much easier. A further 40.9% say AI makes it somewhat easier. 

Survey question: Attitudes to whether AI has made shopping research easier

For the most part, consumers also trust AI as a shopping tool. 

Only around one in five shoppers say that they trust AI completely. But that rises above 60% when also counting those who mostly trust AI as a shopping tool, with some manual fact-checking.

Survey question: How would you describe your trust in AI as a shopping tool?

In many ways, this is the expected pattern, given that the question was only put to people who have tried using AI as a shopping tool. Those with the least trust may not have tried it in the first place.

However, it’s quite a sharp contrast from another of our original surveys, assessing attitudes to AI Overviews. In that context, 82% of respondents were at least somewhat skeptical of the outputs, and yet the vast majority continued to rely on them anyway (without routinely checking sources) for the sake of convenience.

When it comes to shopping, users seem to have more genuine faith in AI outputs: They are using it not only for its convenience, but because it generally works well. That could be a sign of general AI improvements in the ~nine months between the surveys, or it may be a sign that commerce is an area where the technology can really excel for consumers.

The typical AI purchase pipeline

So most people are using AI commerce tools, and uptake has only gotten higher in the last six months. But interestingly, there is no clear consensus about how to use AI for shopping.

We know that product research and price comparison is popular. But that doesn’t tell us too much about what a typical AI-assisted purchasing journey actually looks like.

We gave respondents four options:

  • I use AI as a starting point and then consult other sources
  • I start on traditional retail websites and then use AI as a supplement
  • I use AI as my only source and then complete checkout externally
  • I complete the entire shopping process in AI, from initial research to checkout

There was an almost exactly even split between the first two options. 44.8% start on retail websites and then add in AI, while 44.03% use AI as a starting point before looking externally.

Survey question: Which of these best describes how you typically use AI to shop?

This is notable for retailers, and underlines the paramount importance of Generative Engine Optimization (GEO). 

A huge base of potential customers are using AI as a starting point, so it is imperative that your brand gets organically mentioned. And for those starting on your website but then double-checking with AI, brand sentiment could make or break a sale.

The other thing that stands out from this data is that using AI for the entire shopping journey is still a fringe use case. Only 8.99% of users are using AI as their only source before purchasing, and only 2.18% are checking out via AI. 

In Part 2, we’ll examine the reasons why. Questions in the second part were put to all respondents, to get a better idea of the current attitudes held by both adopters and non-adopters of AI commerce. 


Spot the next “AI commerce” 12 months early. Exploding Topics Pro tracks 11M+ trends with search volume, growth curves, and category filters. Start your 7-day free trial


Part 2: The AI commerce red line

Instant Checkout: Don’t know it, don’t like it

Regardless of the stage at which users introduce artificial intelligence into the shopping process, the final step is nearly always an external checkout. Given that consumers are clearly keen on using AI as part of the commerce journey, tools that eliminate this point of friction make superficial sense. 

That was the idea behind Instant Checkout from ChatGPT; you can do all of your research within the app, and then complete your purchase there as well. In effect, the AI agent completes the transaction on your behalf. 

OpenAI isn’t the only one to build something like this. Visa’s Intelligent Commerce is a similar payment-side solution, while Google has developed its own AP2 protocol for “agent-led payments.” 

But awareness of new and upcoming tools that let you check out directly within an AI interface is quite low. 42.83% of people were not at all aware, with a further 23.01% only “vaguely aware.” 

Survey question: Are you aware of newly-released and upcoming tools that allow you to check out from directly within an AI interface?

Unsurprisingly, those who use AI for shopping weekly or more are most likely to be “very aware” of Instant Checkout and similar tools (63.3%). But that drops to 25.19% among monthly users, and just 11.11% among those who have used AI shopping tools “a few times.” 

When told about the existence of these tools, respondents gave an answer that could best be described as mixed. 

From a preset list of options, “skeptical” was chosen most often (41.08%), followed by “suspicious” (33.1%). But respondents could pick more than one answer, and “excited” (31.61%), “happy” (24.33%), and “impressed” (24.03%) were the next most-common answers.

Mixed response on existence of AI shopping tools

Those who chose to write in an answer of their own were overwhelmingly negative. Responses included “hunted/preyed upon,” “terrified,” “wary,” and “not interested.” 

Crucially, there was significant negativity toward Instant Checkout even among those who are already routinely using AI tools for shopping.

29.82% of the most regular AI shopping users said they were suspicious of tools like Instant Checkout, and 29.59% reported being skeptical. Among monthly users, skepticism was the single most popular attitude (37.04%). 

Meanwhile, only 2.22% of the people who aren’t currently using AI to shop reported being excited at the prospect of agents being able to carry out purchase orders. 

In fact, the idea of AI purchasing power is actively making non-users less likely to try AI for shopping. 

Survey question: Will tools like Instant Checkout make you more likely to use AI for shopping?

44.89% of AI shopping non-adopters are “much less likely” to try the technology as a result of these new tools. Over half are at least a bit less likely, and only 7.11% are more likely. 

On the other hand, the most regular existing AI shoppers anticipate that tools like Instant Checkout will further increase their usage. 72.71% say that the innovations make them at least somewhat more likely to shop with AI more regularly.

Outside of power users and non-users, indifference is more common. 48.89% of monthly users anticipate Instant Checkout (and similar tools) will make no difference to their usage, as do 52.17% of occasional users.

And it seems OpenAI must have reached a similar conclusion. Mere months after launching Instant Checkout, it has rowed back on direct shopping features, doubling down on the discovery side of things.

Distrust of AI companies with payment data

One of the biggest hurdles when it comes to further integrating AI into commerce is that most people don’t feel comfortable trusting chatbots with their card details in order to make direct purchases easier in future.

In total, 51.45% of consumers are at least somewhat uncomfortable at the idea of AI tools storing their card details. Only around 1 in 4 are “very comfortable.” 

Survey question: Would you be more comfortable with AI tools storing your card details to make direct purchases easier in future?

As well as being the most popular response overall, “very uncomfortable” also cut across age groups to an unexpected degree. More than a third of consumers aged 18-29 said they would be very uncomfortable storing card details with an AI tool, despite being digital payment natives.

Even among the most frequent AI shoppers, barely more than half (50.69%) said they would be “very comfortable” with AI tools storing their card details. That dropped dramatically to 18.52% among monthly AI shoppers, 7.25% among those who use the technology occasionally, and just 0.89% among those who don’t currently use AI to shop at all.

Pacific residents are most likely to trust AI tools with their card details, with 64.48% at least somewhat comfortable, while the Middle Atlantic once again stands out as a distinctly pro-AI region. New England is the most distrustful (58.53% at least somewhat uncomfortable).

Survey question: Comfort with AI storing card details

Who does AI commerce serve?

Tied in with this discomfort about payment details is the fact that consumers are skeptical of whether they are truly the intended beneficiaries of AI commerce technology.

Only 14.16% of respondents said consumers are the ones being primarily served by AI shopping tools right now.

Survey question: Who do you think AI shopping tools primarily serve right now?

The most common answer (27.52%) was that these tools are made to serve the interests of AI companies themselves. Brands and advertisers (27.32%) was another popular response.

And even among the most frequent users of AI shopping tools, only 23.85% of consumers believe they are the ones whom the tools are primarily serving. These power users were more likely to say that brands and advertisers are the ones being served.

Survey question: Frequency of use compared to who users think AI shopping tools primarily serve

Among less frequent users, skepticism rises sharply, to the point where just 2.22% of non-users believe AI shopping tools are primarily serving consumers right now.

“The mode amount a consumer would authorize AI to spend autonomously is $0.”
Exploding Topics, 2026 consumer AI commerce survey

Hard spending cap for autonomous AI purchases

Given that some degree of skepticism cuts across multiple demographics, it isn’t too surprising to learn that consumers remain reluctant to empower AI to spend vast sums autonomously. 

However, the extent of the reluctance is eye-catching: the mode amount a consumer would authorize AI to spend autonomously is $0.

Survey question: How much consumers would trust AI to spend autonomously

Specifically, we asked how much consumers would trust AI to spend in a scenario where they instructed an AI agent to buy something once it became available. This hypothetical aligns closely with the stated use cases of the latest AI commerce innovations, including Google’s AP2 protocol and its promise of “unlocking new commerce experiences.”

But right now, our survey shows the appetite is simply not there. 31.21% of consumers would not allow any autonomous AI spend at all, 17.45% would cap it at $20, and 20.74% would cap it at $50.
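As a quick sanity check, the headline mode and median can be recovered from those bucketed percentages. Note one assumption: the survey doesn't break out responses above $50 bucket by bucket, so the remaining ~30.6% is lumped together here as a single "over $50" group for illustration.

```python
# Recover the mode and median autonomous-spend cap from the survey's
# bucketed shares. The final ">$50" bucket is an aggregate assumption,
# not a figure the survey reports directly.
buckets = [
    (0, 31.21),             # would allow no autonomous AI spend at all
    (20, 17.45),            # would cap spend at $20
    (50, 20.74),            # would cap spend at $50
    (float("inf"), 30.60),  # everyone else: caps above $50 (assumed lump)
]

# Mode: the single most common answer is the largest individual bucket.
mode_cap, _ = max(buckets, key=lambda b: b[1])

# Median: the bucket where the cumulative share first reaches 50%.
cumulative = 0.0
for cap, share in buckets:
    cumulative += share
    if cumulative >= 50:
        median_cap = cap
        break

print(mode_cap, median_cap)  # -> 0 50
```

The $0 bucket (31.21%) beats every other single answer, making it the mode, while the running total only crosses 50% inside the $50 bucket (31.21 + 17.45 + 20.74 = 69.4%), which is why the median cap lands at $50.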

This immediately all but wipes out another of Google’s proposed use cases: the example of instructing an AI to buy concert tickets the moment they go on sale. Assuming most such transactions would exceed $100 total, only 11.71% of consumers would currently be comfortable trusting AI with the purchase.

AI companies even face a hard sell among their regular users. 51.84% of weekly AI shoppers would cap autonomous AI spend at $50 or less, as would 67.41% of monthly shoppers.

Survey question: Caps consumers would place on autonomous AI spending

Barely more than one in five (20.87%) of the most frequent AI shopping users would be prepared to authorize a spend over $100.

Unsurprisingly, the highest earners are the most likely to trust AI to make bigger purchases. But even then, 68.57% would cap agents at $100 or less: 1/2000th of their annual household income at most.

Agentic commerce is here to stay

The tension at the heart of these results is that despite this reluctance to sanction AI spend, there is widespread belief that AI’s role in commerce will continue to get bigger.

More than half of people (55.83%) think AI will play a bigger role in how they shop in five years’ time. Only 12.37% believe it will play a smaller role.

Survey question: In five years, do you expect AI to play a bigger or smaller role in how you shop?

Even among non-users, almost a third (32.44%) predict that AI will play an at least somewhat bigger role in how they shop in five years’ time. And 74.77% of the most frequent AI shoppers believe the technology will take on an even bigger role in how they make purchases.

A future of expanded AI commerce would come with further questions. For instance, a landscape of ads and sponsored links has the potential to disrupt the quality of AI outputs.

However, most consumers seem satisfied that increased AI shopping features won’t actively impact the quality of responses. In fact, 48.35% believe the rollout of more shopping capabilities and the integration of ads will actually improve the overall standard of AI answers.

Survey question: Do you think that shopping features, along with integration of ads, will have an effect on the overall quality of AI answers?

Only around one in 10 of the most frequent users predict that ads and shopping features will make AI outputs worse, a finding which AI companies could well interpret as something of a green light to push ahead with this kind of monetization.

The final sure sign that AI commerce will continue to grow is simply that shoppers like it. Even if there is some skepticism about whether consumers are truly the main beneficiaries, 55.83% agree that AI features make shopping at least somewhat better for consumers overall.

Survey question: Overall, do you think AI makes shopping better or worse for consumers?

Among users and non-users alike, fewer than one in five people think AI has made the shopping experience any worse. That falls below one in 10 among those who have used the technology at all within the last six months.

Though there are disagreements about what direction this burgeoning technology should take next, it looks increasingly clear that AI shopping is here to stay.

If you’re a retailer, a tool like Semrush Enterprise AIO is more important than ever. 77% of your customers are using AI in their commerce journeys, and the visibility and reputation of your brand have the potential to transform your bottom line.

Request a live demo today.

Methodology

This survey was completed by 1,009 respondents in total. After the general questions about frequency of AI usage and AI shopping usage, non-users were skipped for the remainder of Part 1, before being reintroduced for Part 2 (where the opinions of non-users offered valuable insights).

All respondents were from the US, spanning all regions (there were no respondents from the US Territories). 56.03% of respondents were female, and 43.97% were male. 

10 different household income bands were represented, from $0-9,999 up to $200,000+. The median income range was $75,000-$99,999.

The age range was as follows: 

  • 14.76% aged 18-29
  • 43.47% aged 30-44
  • 27.42% aged 45-60
  • 14.36% aged over 60

Want the key stats in a one-pager? Download the full summary.


Pete Bowen talks about why Google Ads is not just about clicks

27 April 2026 at 22:42

On PPC Live The Podcast, I spoke with Pete Bowen, a Google Ads specialist with nearly 20 years of experience and a strong focus on B2B lead generation.

Pete shared two major lessons from his career: always check the basics, and never assume the systems around your ads are working just because the campaigns look fine.

The currency mistake that cost 10 times the budget

Pete Bowen shared an early mistake where a South African client’s account was set up in the UK, defaulting the currency to pounds instead of rand. That simple oversight led to spending roughly 10 times the intended budget, delivering great results at first — but ultimately setting unrealistic expectations and losing the client.

Why checklists protect PPC teams

The takeaway from that mistake was to formalise learning into process. Adding something as simple as a currency check to a setup checklist ensures that once a mistake is made, it doesn’t happen again — turning painful lessons into repeatable safeguards.

The bigger problem: system decay

Beyond setup errors, Pete highlighted a more subtle but common issue he calls “system decay” — where the infrastructure connecting ads, tracking tools, CRMs and sales processes gradually breaks down without anyone noticing.

Why conversion data failures hurt performance

When conversion data stops flowing properly, Google’s algorithms lose the feedback they rely on to optimise. This can lead to reduced spend, poor performance or campaigns that suddenly stop delivering — even if nothing appears wrong inside the platform.

PPC managers need to look beyond the interface

One of the biggest mistakes advertisers make is focusing only on what happens inside Google Ads. Strong performance depends on the entire journey, from click to conversion to revenue, and any break in that chain can undermine results.

What to do when conversion tracking breaks

When tracking fails, the priority is to fix the root issue quickly and, where possible, use data exclusions to prevent bad data from influencing optimisation. Longer term, building monitoring systems that flag issues early is essential to avoid repeat problems.

The danger of optimising for clicks

Pete also pointed to a common but damaging mistake: optimising campaigns for clicks rather than outcomes. Without proper conversion tracking, advertisers can end up driving large volumes of traffic that never turn into leads or sales.

Why Performance Max needs strong tracking

Automation like Performance Max can amplify this issue, as it will follow whatever signals it receives. Without accurate conversion data, it can scale irrelevant traffic quickly, making strong tracking a prerequisite before leaning into automation.

Why bid strategies need guardrails

Google’s bidding systems are powerful but literal — they optimise toward whatever you define as success. That means advertisers need clear goals, reliable data and sensible guardrails, such as CPC limits, to avoid extreme or inefficient outcomes.

Testing AI features carefully

With newer tools like AI Max, the risk isn’t testing too early — it’s testing without a clear definition of success. Metrics like impressions and clicks are not enough; advertisers need to measure impact on qualified leads, sales and revenue.

The problem with “always be testing”

Pete also challenged the idea that everything should be constantly tested. Many accounts simply don’t have enough data to make small tests meaningful, meaning time is often better spent improving fundamentals rather than chasing marginal gains.

The key takeaway

The overarching lesson is straightforward: mistakes are part of the process, but only if they lead to better systems. Every error should result in a checklist, a monitoring process or a safeguard — ensuring it doesn’t happen again.


Adthena launches Google Ads-to-ChatGPT conversion tool

27 April 2026 at 20:24

As ad dollars begin shifting toward ChatGPT, ad tech firms have started working to make that transition as seamless as possible.

What’s happening. Adthena launched a new tool, AdBridge, designed to convert existing Google Ads campaigns into formats ready for ChatGPT advertising. The pitch is simple: don’t rebuild from scratch — repurpose what already works.

The tool analyzes advertisers’ search campaigns to generate keyword lists, negative keywords, and competitive insights that can be directly applied to ChatGPT campaigns. It also surfaces which brands are showing up in specific auctions, how often they appear, and which prompts are triggering those placements — giving marketers more than just a copy-paste approach.

Why we care. Adthena’s AdBridge makes it much easier to shift budget from Google Ads into ChatGPT without rebuilding campaigns from scratch. By repurposing existing keywords, learnings, and competitive insights, brands can test and scale ChatGPT ads faster with less risk. As the platform opens up and inventory grows, tools like this lower the barrier to entry and could accelerate how quickly ChatGPT becomes a serious performance channel.

As Adthena CMO Ashley Fletcher put it, the goal is to get campaigns “ready so they can go straight in,” mirroring the CSV-based workflows advertisers already use across major platforms.

Early testing. The company already held multiple sessions with large enterprise brands testing the tool, signaling early demand from advertisers looking to scale activity in ChatGPT’s still-limited ad ecosystem.

Between the lines. This isn’t just about convenience — it’s about momentum. Advertisers experimenting with ChatGPT ads have faced constraints like low inventory and limited scale. By making it easier to deploy campaigns quickly, Adthena is positioning itself to accelerate adoption as those constraints ease.

Zoom in. AdBridge is part of a broader push from Adthena, including Arlo, an AI assistant that allows advertisers to query performance data and compare results between ChatGPT and search campaigns. Together, they point to a future where managing AI-driven ad channels looks increasingly similar to existing search workflows.

The backdrop. OpenAI has been rapidly evolving its ads offering — quietly rolling out an ads manager, lowering minimum spend thresholds, and introducing more flexible pricing models. Partnerships with firms like Criteo and Smartly signal a growing ecosystem.

Bottom line. If ChatGPT ads are going to compete for search budgets, the winners may be the tools that make switching feel effortless — and Adthena wants to be first in line.

Bing Webmaster Tools teases new AI reporting updates

27 April 2026 at 18:18

Microsoft teased new AI reporting features within Bing Webmaster Tools that enhance its AI performance reports. The features showcased include citation share, grounding query intent, and GEO-focused recommendations.

More details. Several attendees shared screenshots of the presentation, which Krishna Madhavan of Microsoft gave at SEO Week in New York City today. Here are some of those slides:

Bing Webmaster Tools just dropped some VERY COOL stuff at #SEOWeek 2026

Citation Share, Grounding Query Intent (15 pre-defined intents), and GEO-focused recommendations.

The gap between Bing's transparency and Google's is getting harder to ignore.

Cc @rustybrick @glenngabe pic.twitter.com/kOMhVyQvpQ— Azeem Ahmad (@AzeemDigital) April 27, 2026

Bing webmaster tools owning SEO & GEO @kmadhavan77

Citation share
Intent
Topics
GEO recommendations

Exclusive for #seoweek pic.twitter.com/H2arlFtS8R— MJ Cachón (@mjcachon) April 27, 2026

Not live yet. These new features and reports do not appear to be live yet, but Microsoft showed them off anyway.

Why we care. More transparency into how your content is performing within the AI search results is useful. So we all welcome additional reporting from Bing Webmaster Tools.

It is not clear exactly how these reports will work or when they will go live, but you can read the posts above for more details.

7 lessons from moving from agency to in-house SEO

27 April 2026 at 18:00
7 lessons from moving from agency to in-house SEO

If you’re reading this, you’re likely an SEO aficionado like me. I’m a seasoned SEO with 10+ years of agency experience.

Being on the agency side gave me deep SEO expertise, exposure to top industry talent, and experience working with some of the world’s most well-known brands.

I did a bit of everything on the agency side — from technical SEO to content marketing to new business.

Working at an agency is nothing like working in-house. After a long run on the agency side, I moved in-house for the first time. Here are seven things I’ve learned since making the switch.

1. Owning performance changes how SEO is evaluated

On the agency side, when performance drops, you know the drill: a frantic message hits your inbox — traffic is down — and the client needs a report on what’s happening by yesterday.

You then spend the next few hours in the SEO trenches analyzing search trends, tracking ranking changes, and digging through Google Search Console to find your answers. You cross your T’s. Dot your I’s. You beautify that report a bit. And — finally — you fire it off to your client. 

After sending the report, you may get a few questions from the client. A little back and forth, but for the most part, your job is done. The fire drill is over. You’ve done everything you can from the agency perspective. On to the next client on your roster. 

This situation looks a lot different on the in-house side. 

From my new perspective, receiving that agency report is just the beginning. Now, I’m the one on the hook for translating that analysis, figuring out how to socialize it, and turning it into a concrete action plan to turn performance around.

I always knew my clients were under a lot of stress. I figured their bosses were the ones catching the dips and asking difficult questions, leading to that inevitable frantic message in my inbox. But, boy, it hits differently when you’re the one getting asked those difficult questions.

When you’re in-house, you aren’t just reporting on a dip in performance — it feels like you’re defending your entire SEO strategy. The way you frame that data can make or break the projects or the direction you’re taking the program.

It’s a lot of pressure — and it’s different when you’re responsible for the results.

2. Execution matters more than deliverables

On the agency side, the deliverable is the destination. You spend hours researching, analyzing, and refining a beautiful slide deck. Each slide flows, tells a story, and looks pristine. I mastered this — and did it fast.

Now that I’m in-house, I’ve realized the deliverable isn’t the destination anymore.

It’s all about the execution. 

I was lucky enough during my agency days to have one engagement where I was deeply embedded in day-to-day operations. I was doing things like building dev tickets, reviewing Figma designs, and actually pushing CMS updates. I thought I knew exactly what execution looked like.

But executing while in-house is way more challenging than I expected. 

In order to execute on an SEO strategy, you have to work through the entire org to bring your vision to life. You need to coordinate with the design team to review Figma designs. You need to align messaging and copy with PMMs. You need to work with project managers to make sure deadlines are being met. You need to work with devs to make sure the technical implementation is correct.

It’s not easy. Sometimes it’s messy. And — quite often — it’s pretty frustrating. 

But here’s the truth: once you move from polished decks to pushing changes live, you become 10x the SEO you were before.

Dig deeper: Why branding matters for in-house SEO teams

3. The shift from agency partner to internal stakeholder

One of the more interesting parts of making the switch to in-house was that, suddenly, I became the client. I’m the one on the other end of the video call. I’m the one receiving the strategy docs. I’m the one calling all the shots.

And honestly? It’s been a huge (and super exciting) opportunity to take everything that I’ve learned on the agency side and put it into action. 

And I’ve gotten to decide what type of client I want to be. 

I had a wide range of clients on the agency side. Some disappeared. Some were demanding and made every call tense. Some pushed impossible deadlines. Some didn’t trust my judgment. Some couldn’t execute the strategy.

You name it — I’ve probably experienced that type of challenging client.

Then I had dream clients — kind, collaborative, and happy to treat me like an equal. Calls felt like catching up with a friend before getting into SEO. They could take a strategy and execute it without being demanding or difficult.

That was the client I wanted to be. And that’s the client I strive to be, too. 

4. Storytelling matters more than strategy 

I’m a technical SEO at heart.

Nothing makes me happier than seeing the indexing rate improve after an XML sitemap refresh. Or seeing a massive improvement to Largest Contentful Paint after implementing Core Web Vitals optimizations. Or even a perfectly executed hreflang optimization to target your key international markets. 

Chef’s kiss — it warms my technical SEO heart to see all this work get executed. 

The problem? Your execs don’t understand that technical jargon.

That’s where storytelling becomes your best friend. And I’d say it’s almost as important as the execution itself. 

Because it doesn’t matter if you do all this SEO work if your bosses can’t understand it. You need to tell a story about what you did, why you did it, and the results. All in a simple, easy-to-understand format — ideally with a pretty visual right next to it.

Let’s take, for example, hreflang optimizations. You realize that hreflang is important. But how do you make it seem important for an exec so that they can understand it?

What I do is pretty simple. I explain the background behind why I’m doing what I’m doing and frame it in simple terms. 

Instead of saying that we updated hreflang to target France correctly, I would frame it as improving the search experience for France searchers. I’d then show a SERP screenshot of before the optimizations to show incorrect targeting, and follow it up with an updated screenshot with correct targeting. Lastly, I’d share results — ideally, an increase in CTR, traffic, or conversions. 

(Side note: If you’re one of my agency partners reading this, you know I ask for an insane amount of screenshots — but this is exactly why I do it.)

Following this formula allows you to:

  • Explain why we implemented the optimization (in this case, incorrect targeting in France).
  • Show what users are seeing in the market.
  • Demonstrate that this optimization achieved business results.

It’s a simple blueprint that makes it easy for execs to understand the importance of your optimizations. I know it may seem small, but storytelling is one of the secrets to success in in-house life. 

Dig deeper: How to use the three-act structure for data storytelling

Get the newsletter search marketers rely on.


5. SEO depends on cross-functional collaboration

In a massive organization, it’s so easy to live on an SEO island. If you’re not collaborating, you can easily find yourself on a beach hanging out with a volleyball named Wilson — just optimizing <title> tags, writing meta descriptions, and optimizing on-page copy for keywords. 

But there’s absolutely no way you’re going to get anything meaningful done without the support and assistance from others within your organization. 

You need to be a team player. And cross-functional collaboration is important for success. 

After years on the agency side, I learned to move fast — really fast. When I went in-house, I tried to keep that pace. I wanted to make changes, test, and see results immediately. I saw documentation as a hurdle, and large cross-functional meetings without progress as a waste of time.

Quickly, I found out that’s not the case. You need the support of those partners in cross-functional meetings to get things done. 

It takes time to get to know your cross-functional teams and understand what they’re good at, what their goals are, and — crucially — where they need support. I’ve learned that once you understand the developer’s sprint capacity or a product marketing manager’s roadmap, you can stop just requesting things from them and start partnering with them to get things done. 

When you align your SEO goals with their existing priorities, you stop being a line item in their backlog and start becoming a teammate. In-house, having a teammate in engineering or product is the difference between a strategy that sits in a slide deck and one that actually ships.

6. Taking initiative and trusting your judgment 

OK, fine, I added a cliché to the list. But in the in-house world, it might be the most important one. 

I’ve been given this advice several times throughout my career. If you want to get something done, go get it done. Don’t wait around for permission from your bosses to do something that will have a significant impact. If you wait for permission, you may never get anything done. 

That’s why I ask for forgiveness — not permission. 

When I started in-house, I knew the team was lean. I knew my bosses had a million things on their plates. And, most importantly, I knew they hired me for a reason: to drive organic growth.

During my first few weeks, I remember asking myself, “Can I launch this content?” “Can I expand into this market?” “Am I allowed to test this tactic?” 

And then it hit me: This is exactly why I’m here. They hired me to make these decisions and move the needle, not to add more approval meetings to their calendars.

And if I asked for permission for everything, I would never be able to get anything done. 

This is why I trust my instincts when it comes to SEO strategy and execution. I rely on my 10+ years of experience in the SEO game. If I think something is going to drive growth for the business, I don’t just sit around and wait for permission to do something. I execute. 

And if something doesn’t turn out exactly how I had planned? That’s when I take the forgiveness route. 

Dig deeper: 5 lessons from delivering bad SEO news to executives

7. Seeing SEO work translate into business impact 

I did a lot of high-impact, business-changing work during my agency life. I’ve built the strategies, seen them come to life on a site, and watched them drive results. Driving results and building case studies have always been my favorite part of the job.

However, when you’re sitting agency-side, you’re often the silent partner in those results, not the owner. 

Now that I’m in-house, I get to see my projects come to life on the site — and it’s pretty cool.

During my first few months in-house, I knew I wanted to make an impact quickly. I implemented a few of my high-impact, low-effort optimizations — the ones I would typically implement for a new client I had just onboarded. 

After reviewing monthly reports, I saw an insane spike in performance that lined up exactly with a significant site update we implemented. 

I remember thinking, “Wait, was that us?” 

The answer: It sure was. 

I then created my first case study and shared the results throughout our organization. And, shockingly (to me, anyway), people were really interested. Within my first three months, I found myself sharing those results at our entire company’s all-hands meeting — something I never expected to happen. 

I used to think a massive organization wouldn’t be interested in SEO, but I was wrong. When it comes to moving the needle for the business, everyone cares.

So, yeah, it’s always fun to get SEO results. But it’s a lot cooler when you’re in-house. 

Is making the switch worth it? That’s for you to decide

Making the switch from agency to in-house life has been a lot of adjectives for me. Exhausting, challenging, and exciting are some of the first that come to mind. 

But the biggest takeaway after one year in-house? I’ve learned a lot.

I hope you can take these seven lessons and apply them to your own journey — whether you’re at an agency or leading an in-house team right now. 

The transition isn’t always easy, but for me, seeing the strategy finally turn into reality has made every cross-functional meeting and performance fire drill worth it.

What are you optimizing for in paid search when keywords matter less?

27 April 2026 at 17:00
What are you optimizing for in paid search when keywords matter less?

Paid search platforms are getting better at deciding who should see your ads, often without relying on the keywords you choose. 

As that shift accelerates, optimization is moving away from query-level control and toward signals like audience data, landing page context, and conversion behavior. Understanding that change is key to knowing what to actually optimize for now.

When keywords gave us control and what comes next

A decade ago, our world was defined by the illusion of control. Every decision we made was anchored in the keyword. Hypersegmentation and single keyword ad groups (SKAGs) ruled the land.

If possible, we’d build a unique landing page for every single keyword in every single ad group. The process was tedious, manual, and we loved it because we felt like we were the ones driving the machine.

Fortunately (or unfortunately, depending on how much you miss spreadsheets and Editor), times have changed. We’ve long speculated about whether Google and Microsoft would finally sunset keywords altogether. That day feels closer than ever.

From Performance Max to the emerging AI Max solutions — and even the shift toward contextual, LLM-driven search like ChatGPT — the industry is moving toward a keywordless reality.

But if we take a step back, we have to admit why the keyword is so vital. It’s a window into clear intent that tells us exactly where a user is in their journey:

  • The symptom: “Productivity tools for remote teams.”
  • The consideration: “Asana vs. Trello comparison.”
  • The decision: “Monday demo.”

If those signals are now handled behind the scenes by a black box, the role of the marketer changes. So what are we actually optimizing for?

Dig deeper: Beyond keywords: Mastering AI-driven campaigns

Signals are the new keywords

Intent is inferred from a complex web of signals that have rendered the individual keyword secondary. To win in 2026, your optimization focus must shift toward three core pillars.

Audience data (the ‘who’ over the ‘what’)

Google’s algorithms now prioritize customer match and first-party data over the query itself. With the full integration of the Data Manager API, the system knows which users in the auction match your closed-won deals.

You no longer bid on the query “cloud security.” You bid on the director of IT (because you’re sharing first-party data) who has a history of researching SOC 2 compliance, even if their current search is as vague as “scaling infrastructure.”

B2B match rates are notoriously stubborn. But this is exactly where you need to evolve your strategy. Move beyond one-to-one list matching and get creative with integration partners to enrich your signals.

Start by clustering individuals by shared pain points, then use on-site experiences to allow them to self-identify. By the time they hit a remarketing list, you aren’t just targeting a “user,” you’re targeting a verified intent state.

Get the newsletter search marketers rely on.


Landing pages as living signals

Your landing page is a data source. Google’s AI scans your page to understand the nuance of your offering. Creative assets are also important signals and need to complement your targeted themes and keywords, plus your landing page content.

If your landing page clearly articulates a “mid-market manufacturing” use case, the AI will automatically find those users, even if they never type the word “manufacturing.” Your “keyword strategy” is now your content strategy.

You might think looking at Meta is a deviation here, but the parallels are impossible to ignore. Meta’s Andromeda retrieval engine now influences a massive portion of the social auction by using the creative itself as the primary targeting signal. 

If both platforms are moving toward a world where your assets (whether it’s a 15-second video or a high-value landing page) are what actually define your audience, you have to ask: How much weight are you giving your creative inputs versus your technical ones? 

Historical conversions and pipeline velocity

With journey-aware bidding and value-based bidding, the algorithm isn’t just looking for the final click. It’s analyzing the historical sequence of a user’s journey.

Optimization now happens against “high-value need states.” You’re feeding the system data on which mid-funnel behaviors (like a whitepaper download or a webinar sign-up) actually lead to six-figure contracts.

Dig deeper: Why better signals drive paid search performance

The great intent shift: Query-level vs. user-level

The most significant mental hurdle for digital marketers is the shift from query-level intent to user-level intent.

Feature | Query-level intent (legacy) | User-level intent (2026 and beyond)
Primary driver | The specific words typed. | The user’s historical behavior and context.
Logic | “They typed X, so they need Y.” | “They are in state X, so they need Y.”
Measurement | CTR and CPC. | Pipeline value and predicted LTV.
Auction entry | Triggered by a keyword match. | Triggered by a predicted “need state.”

In the old model, a query like “how to manage payroll” might have been ignored by an enterprise SaaS company as “too informational.” In 2026, the AI knows if that user is a student or a VP of finance at a 5,000-employee firm.

If it’s the latter, the user-level intent is commercial, regardless of the query-level phrasing, assuming you’re providing the right signals (see what I did there?). If you’re advertising on Microsoft Ads, you can leverage LinkedIn’s profile targeting.

What should you actually be doing?

Now that AI is handling the matching, your job has evolved from a mechanic to a data architect.

  • Feed the beast with better data: Your competitive advantage is the quality of your CRM integration. If you feed the AI junk leads, it will efficiently find you more junk. You must optimize for value-based bidding.
  • Audit your signal health: Are your landing pages optimized for AI readability? Do they have the technical schema and depth of content that allows Google to categorize your “intent bucket” correctly?
  • Embrace the black box with guardrails: Move away from micromanaging search terms, and start managing brand exclusion lists and negative intent themes.
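The value-based bidding point above can be sketched in code. This is a hypothetical illustration — the action names and dollar values are assumptions, not a real platform schema — but it shows the core idea: report a value per conversion event that reflects pipeline contribution, so the system optimizes toward revenue rather than raw form fills.

```python
# Hypothetical sketch: assigning monetary values to mid-funnel conversion
# actions so value-based bidding optimizes toward pipeline, not raw volume.
# Action names and values are illustrative assumptions.

# Estimated pipeline value contributed per action, derived from closed-won history.
ACTION_VALUES = {
    "whitepaper_download": 15.0,
    "webinar_signup": 40.0,
    "demo_request": 250.0,
    "junk_form_fill": 0.0,  # feeding the AI junk teaches it to find more junk
}

def conversion_value(action: str) -> float:
    """Return the value to report for a conversion event (0.0 if unknown)."""
    return ACTION_VALUES.get(action, 0.0)

# One user journey's reported value across events:
events = ["whitepaper_download", "demo_request", "junk_form_fill"]
total = sum(conversion_value(e) for e in events)
print(total)  # 265.0
```

The design choice worth noting: unknown actions report zero rather than a guess, so noisy events never inflate the signal you feed the bidder.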

The future of search isn’t about finding the right words. It’s about being the best answer for the right person at the exact moment their need state evolves.

Keywords were the training wheels. Now, the wheels are off. It’s time to see how fast your data can take you.

Dig deeper: Why PPC teams are becoming data teams

Cultural SEO: A practical framework for Spanish markets in AI search

27 April 2026 at 16:00
Cultural SEO: A practical framework for Spanish markets in AI search

AI systems are getting better at generating Spanish. They’re not getting better at understanding Spanish markets.

What we’re seeing instead is a consistent pattern: more than 20 Spanish-speaking countries collapsed into a single default. Spain becomes “standard.” Mexico becomes interchangeable. The rest get flattened into statistical averages.

The failure modes are structural — dialect defaulting, format contamination, and regulatory hallucination — and they’re amplified in a generative search environment where one synthesized answer replaces 10 blue links.

That distinction is now a visibility constraint. Generative systems resolve ambiguity. When your content doesn’t make its market context explicit, the system defaults to the statistical average — and that’s where otherwise solid content gets misapplied or ignored.

Below is a framework for fixing that problem. It’s designed to make market context explicit — across content, technical signals, and retrieval systems — so AI doesn’t have to guess.

What is cultural SEO?

Cultural SEO goes beyond hreflang and localization. The technical foundation is locale precision — controlling market context across retrieval and generation so an AI system treats your Spanish content as belonging to a specific country, not to “Spanish speakers” in the abstract.

Here’s the framework that works when you operate across Spain and Latin America.

The Cultural SEO Framework: a four-pillar diagram showing Market Segmentation → Transcreation → Retrieval Constraints → Entity Reinforcement.

But there’s a prerequisite no framework can substitute for: you can’t optimize for a market you don’t serve.

Cultural SEO isn’t a localization layer you bolt onto a website. It’s the technical expression of a business decision to operate in a market — with real logistics, real customer support, real legal compliance, and real product-market fit.

If you ship from Spain to Mexico with a three-week delivery, process returns in euros, and have no local support channel, a perfect hreflang setup won’t save you. The model might surface your content, but the user will bounce — and the next time the model learns from that signal, you’ll be deprioritized.

Internationalization means speaking the market’s language in every sense: visual trust cues, payment methods, delivery expectations, regulatory compliance, and customer experience.

The four pillars below assume you’ve made that commitment. If you haven’t, start there. Everything else is decoration.

Pillar 1: Market segmentation at the entity level

Most international SEO teams think of segmentation as a folder structure: /es-es/, /es-mx/, /es-ar/. But that’s not enough.

In generative search, the question is whether the system recognizes that page as belonging to Mexico — and whether it has enough market-specific signals to prefer it over a generic alternative. If your architecture collapses variants, your visibility collapses with it.

Implement granular hreflang and URL structures

Don’t just use es. Use es-ES for Spain, es-MX for Mexico, es-AR for Argentina, es-CO for Colombia, and es-CL for Chile. Include x-default for users who don’t match any specific locale. Consider ccTLD strategies (.es, .mx, .com.ar) where they make business sense.

ccTLDs remain one of the strongest explicit geographic signals on the open web, and they reduce ambiguity for both search engines and downstream retrieval systems. Google’s documentation on localized pages supports this specificity.
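A minimal sketch of what that locale set looks like as reciprocal hreflang annotations — the domain and URL structure here are assumptions for illustration, not a prescribed layout:

```python
# Sketch (assumed domain and folder structure): generating hreflang link tags
# for country-specific Spanish pages plus an x-default fallback.

LOCALES = ["es-ES", "es-MX", "es-AR", "es-CO", "es-CL"]
BASE = "https://example.com"  # hypothetical domain

def hreflang_tags(path: str) -> list[str]:
    """Build one <link> tag per market variant, mirroring a /es-xx/ folder scheme."""
    tags = [
        f'<link rel="alternate" hreflang="{loc}" '
        f'href="{BASE}/{loc.lower()}{path}" />'
        for loc in LOCALES
    ]
    # x-default catches users who match none of the listed locales.
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{BASE}{path}" />')
    return tags

for tag in hreflang_tags("/devoluciones/"):
    print(tag)
```

The same tag set must appear on every variant page (hreflang is reciprocal), which is exactly why generating it from one locale list beats hand-editing five templates.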

But here’s the caveat. In the first article, I discussed Motoko Hunt’s concept of geo-legibility and the phenomenon of geo-drift — AI systems misidentifying geography because language alone doesn’t resolve market context. 

Simply put, if your Spanish content doesn’t carry explicit country-level signals beyond hreflang, the model has to guess. Guessing, at scale, means defaulting. 

Ultimately, hreflang helps with traditional routing, but in AI synthesis, it’s one signal among many — and not necessarily the decisive one. 

When a generative system assembles an answer, it weighs semantic relevance, authority, and content-level cues alongside metadata. 

If your Spanish content relies on hreflang alone to declare “this is for Mexico,” you’re betting on a single signal in a multi-signal environment. Geographic markers need to live in the content itself and in structured data — not only in HTTP headers.

Dig deeper: How AI search defines market relevance beyond hreflang

Don’t canonicalize all locales to a single master URL

When you point es-MX, es-AR, and es-CO pages to one canonical es URL, you’re telling engines there’s only one “real” version — the exact Global Spanish assumption you’re trying to avoid. Each market page should canonicalize to itself.

Avoid IP-based redirects

Google cautions against this. Crawlers may not see all variants. More importantly, AI crawlers don’t carry IP signals the way users do. Offer a visible region selector and let users choose.

Encode market cues in structured data

This is essentially what Hunt calls geo-legibility — encoding geography, compliance, and market boundaries in ways machines can parse:

  • Use priceCurrency with ISO 4217 codes (EUR, MXN, ARS, COP, and CLP).
  • Use PostalAddress with explicit addressCountry.
  • Add areaServed to declare which markets you serve — the machine-readable equivalent of saying “we operate here, not everywhere Spanish is spoken.”
  • Use sameAs to connect to region-specific knowledge graphs (e.g., link your Mexican entity to Mexican directories and chambers of commerce, not just your global Wikipedia page).

A practical example: if your Mexico page shows prices in MXN, but your structured data still says EUR because it was copied from the Spain template, the model sees a conflict. Conflicts breed uncertainty. Uncertainty breeds generic answers. Generic answers are where Global Spanish lives.
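Here is a sketch of what the consistent version looks like as JSON-LD. All names, URLs, and values are hypothetical; the point is that priceCurrency, addressCountry, and areaServed all agree on a single market instead of leaking the Spain template:

```python
import json

# Illustrative sketch of market-consistent structured data for a Mexico page.
# All values (product name, URL, price) are hypothetical.

product_mx = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Zapatos deportivos",
    "offers": {
        "@type": "Offer",
        "price": "1234.56",
        "priceCurrency": "MXN",  # matches the on-page MXN price -- no EUR leftover
        "areaServed": {"@type": "Country", "name": "MX"},
    },
}

org_mx = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "address": {"@type": "PostalAddress", "addressCountry": "MX"},
    # Connect to region-specific profiles, not only the global Wikipedia page.
    "sameAs": ["https://example.mx/directorio/acme"],  # hypothetical directory entry
}

print(json.dumps(product_mx, ensure_ascii=False, indent=2))
```

A simple pre-publish check — asserting that every currency, country, and areaServed value on a page points at the same market — catches the copied-template conflict before a model ever sees it.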

A note on es-419: It can be useful as a catch-all for Latin American Spanish where market-specific pages don’t exist, but it should never substitute for es-MX, es-AR, or es-CO when the content involves legal, financial, or compliance information. Generic means vulnerable.

If your market pages aren’t self-evident to machines, the system will resolve ambiguity for you — and defaults win.

Pillar 2: Transcreation, not translation

Translation converts words. Transcreation converts meaning. The distinction matters because translated templates are easy for models to deduplicate — and deduplication is where localized pages go to die.

If two regional pages are 95% identical, the model will treat them as one. The “default” will win. Localized pages need substantive differences that prove market specificity, including:

  • Local examples and FAQs: A FAQ about tax deductions should reference SAT in Mexico, AEAT in Spain, and AFIP in Argentina — not all three in a dropdown.
  • Local legal references: Privacy content should cite GDPR + LOPDGDD for Spain, and LFPDPPP for Mexico, not a generic “applicable data protection laws.”
  • Native terminology: Zapatillas vs. tenis, ordenador vs. computadora, and cesta vs. carrito. These aren’t synonyms. They’re market identifiers that signal “this content was made here.”
  • Local pricing and formatting: Not just the currency symbol — the entire numeric convention. Spain uses 1.234,56 € while Mexico uses $1,234.56. Get it wrong, and the content reads as imported.
  • Local proof: Testimonials, case studies, partnerships, and press coverage from the target region. Not imported. When a model evaluates whether your content is authoritative for Mexico, it looks for Mexican corroboration.
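The pricing-and-formatting point above is easy to get wrong in code, so here is a hand-rolled sketch of the two conventions. In production you would lean on a locale library rather than string surgery; this just makes the difference concrete:

```python
# Minimal sketch of market-specific currency formatting. Hand-rolled to show
# the convention difference; a locale-aware library would do this in production.

def format_es_es(amount: float) -> str:
    """Spain: dot thousands separator, comma decimals, trailing euro sign."""
    s = f"{amount:,.2f}"  # '1,234.56'
    # Swap separators via a temporary placeholder to avoid clobbering.
    s = s.replace(",", "_").replace(".", ",").replace("_", ".")
    return f"{s} €"

def format_es_mx(amount: float) -> str:
    """Mexico: comma thousands separator, dot decimals, leading dollar sign."""
    return f"${amount:,.2f}"

print(format_es_es(1234.56))  # 1.234,56 €
print(format_es_mx(1234.56))  # $1,234.56
```

If the Spain template's formatter leaks onto the Mexico page, the content reads as imported — exactly the signal that tells a model (and a user) the page wasn't made for that market.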

The classic example: McDonald’s “I’m lovin’ it” became “Me encanta” — not a literal translation, but an emotionally equivalent expression. Apple’s iPod Shuffle tagline, “Small talk,” became “Mira quién habla” for Latin American Spanish.

These brands understood that meaning doesn’t translate. It must be rebuilt.

Start with keyword research 

Identify which Spanish-speaking markets have the most search volume and business potential for your verticals. Volume alone isn’t enough. Consider market maturity, competitive landscape, and conversion potential. Then bring in native speakers from those specific countries. 

This doesn’t mean rigid dialect policing. Context matters — a premium brand in Mexico City might deliberately use an informal register for intimacy. The test is whether those choices are strategic or inherited from the training data’s statistical average.

What ‘substantive difference’ looks like in practice

Take a returns policy page. Spain (/es-es/devoluciones/) and Mexico (/es-mx/devoluciones/) shouldn’t differ only in currency symbols. At least one section needs to be genuinely market-specific:

  • Spain: Consumer rights framing under EU regulation, SEUR or Correos as default carrier, Bizum as a familiar local payment entity, and vosotros register.
  • Mexico: PROFECO consumer authority framing, local paqueterías as shipping context, OXXO as a familiar local payment context (where relevant), and ustedes register.
  • Both: Distinct FAQs written in the market’s register, addressing questions that actual customers in that country ask.

If the pages are 95% identical after these changes, they’re not differentiated enough. The model will still collapse them. 

The feedback loop makes it worse: when a Mexican user lands on “españolized” content and bounces, that rejection signal teaches the model not to retrieve that page for Mexico next time. Poor transcreation doesn’t just lose one visit. It trains the system against you.

Pillar 3: Retrieval constraints (locale-locked sourcing)

This pillar addresses a layer that most traditional SEO doesn’t touch — and it’s where a lot of the Global Spanish problem actually lives.

If you’re building RAG-powered experiences (chatbots, AI assistants, and AI-enhanced customer support) or optimizing content for AI discovery, the question is: What content is eligible to be retrieved and synthesized for a given market?

Without explicit constraints, the model pulls from its statistical average — which, in this case, is “Global Spanish.” The fix requires intervention at the retrieval layer:

  • Filter sources by locale metadata before generation begins: Don’t let a Mexican user’s query pull from your Spain knowledge base unless you’ve explicitly marked that content as applicable to Mexico.
  • Prefer user-declared markets over inferred signals: If a user selects “Mexico” in your interface, that should be a hard constraint, not a suggestion.
  • Use hard constraints in system prompts: “Spanish (Mexico), MXN, SAT, Mexican legal context” — not just “Spanish.” The more specific your retrieval parameters, the less room the model has to improvise.

Think of it as the AI equivalent of telling your customer service team: “If a caller is from Mexico, use the Mexico playbook. Don’t improvise.”
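In code, that retrieval-layer intervention might look like the following sketch. The document schema, `retrieve()`, and `build_system_prompt()` are illustrative assumptions, not a specific vendor API:

```python
# Illustrative sketch of locale-locked retrieval; document fields and the
# retrieve() helper are assumptions, not a specific vendor API.
DOCS = [
    {"text": "Devoluciones en 30 días vía SEUR.", "locale": "es-ES", "also_valid_for": []},
    {"text": "Devoluciones en OXXO o paquetería local.", "locale": "es-MX", "also_valid_for": []},
    {"text": "Política global de garantía.", "locale": "es-ES", "also_valid_for": ["es-MX"]},
]

def retrieve(query: str, user_market: str) -> list[dict]:
    # Hard constraint: filter by locale metadata BEFORE any generation step.
    eligible = [
        d for d in DOCS
        if d["locale"] == user_market or user_market in d["also_valid_for"]
    ]
    return eligible  # a real system would rank `eligible` against `query` here

def build_system_prompt(user_market: str) -> str:
    # Specific constraints leave the model less room to improvise.
    constraints = {
        "es-MX": "Spanish (Mexico), MXN, SAT, Mexican legal context",
        "es-ES": "Spanish (Spain), EUR, AEAT, Spanish/EU legal context",
    }
    return f"Answer only for this market: {constraints[user_market]}."

docs = retrieve("política de devoluciones", "es-MX")
print(len(docs))  # 2 — the Spain-only returns document is never eligible
```

The design choice worth noting: eligibility is decided by metadata before ranking, so a wrong-market document can never win on relevance alone.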

This matters beyond your own properties. Up to 43% of fan-out background searches ran in English even for non-English prompts, Peec AI’s analysis found. This is a structural disadvantage for brands whose authority signals exist only in local-language corpora. 

Spanish sessions may still trigger English sub-searches, which changes which sources are eligible for retrieval. If the model’s own retrieval is biased toward English sources, your Spanish content needs to be unambiguously market-specific to compete for selection.

Pillar 4: Market authority through entity reinforcement

LLMs learn from your site and what the web says about you.

This isn’t traditional link building. It’s regional corroboration — building the external signal layer that tells a model where your brand operates and who considers you authoritative:

  • Local media mentions: A feature in top-tier national business press in your target market carries different geographic weight than a mention in a U.S. or U.K. publication. The model infers where you’re relevant from who talks about you.
  • Local industry citations: Partnerships with local chambers of commerce, industry associations, and regulatory bodies.
  • Region-specific knowledge graph reinforcement: Your Google Business Profile, local directory listings, and Wikipedia presence should all consistently reflect which markets you serve.
  • Local backlink ecosystem: Links from .mx, .es, and .ar domains reinforce geographic authority in ways that generic .com links don’t.

This is how you stop being a Spanish brand and become a Mexican authority — or both, explicitly. The key is intentionality: If you serve both markets, the model needs to see distinct authority signals for each, not a single blended profile.

What to ship (per pillar)

If you need to brief a cross-functional team — dev, content, PR — here’s what each pillar produces as a deliverable:

| Pillar | Deliverable |
| --- | --- |
| 1. Segmentation | Locale URL map + hreflang/canonical rules + indexable alternates checklist |
| 2. Transcreation | Per-market glossary + “substantive difference” content brief template |
| 3. Retrieval constraints | Locale filters + prompt contract (market, currency, jurisdiction) |
| 4. Entity reinforcement | Quarterly PR/citation target list per market + entity consistency audit |

Pillar deliverables — what each pillar produces as a briefable output for cross-functional teams.

These are the artifacts that make the framework auditable and repeatable across teams.

Measuring cultural mismatch: an error taxonomy

You can’t improve what you don’t measure. Here’s a practical error taxonomy for auditing AI-generated content across Hispanic markets:

| Error class | What to look for | SEO/UX impact |
| --- | --- | --- |
| Dialect markers | Wrong pronouns, missing voseo, region-inappropriate vocabulary | Trust erosion, higher bounce rates |
| Format errors | Wrong currency, decimal separator mismatch, incorrect date formats | Conversion risk, especially in e-commerce and finance |
| Legal/regulatory | Wrong authority cited, incorrect compliance steps, mixed frameworks | E-E-A-T damage, potential liability |
| SERP intent | Wrong product categories, wrong local entities, incorrect eligibility | Click-through and engagement drops |
| Brand voice | Formality mismatch (too formal in Mexico, too casual in Colombia) | Brand perception damage |
| Retrieval contamination | Facts or citations sourced from a different locale than the target user | Errors propagated into AI summaries |

Cultural Mismatch Error Taxonomy — six error classes for auditing AI-generated content across Hispanic markets.

If you want a quick QA starting point, check three things first: the currency symbol, the regulator name, and the second-person register. Those three alone will catch most critical mismatches.
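That three-point check is simple enough to script. A hedged sketch for Mexico-targeted copy — the marker lists are illustrative, far from exhaustive:

```python
# Quick-start QA sketch: flag the three highest-signal Spain markers in
# es-MX-targeted copy. Marker lists are illustrative, not exhaustive.
SPAIN_MARKERS = {
    "currency": ["€", "EUR"],
    "regulator": ["AEAT", "GDPR", "LOPDGDD"],
    "register": ["vosotros", "vuestr"],  # "vuestr" catches vuestro/vuestra/vuestros
}

def quick_qa_es_mx(text: str) -> list[str]:
    """Return the error classes triggered by Spain markers in Mexico-targeted copy."""
    lowered = text.lower()
    return [
        error_class
        for error_class, markers in SPAIN_MARKERS.items()
        if any(m.lower() in lowered for m in markers)
    ]

sample = "Si vosotros pagáis 49,99 €, GDPR protege vuestros datos."
print(quick_qa_es_mx(sample))  # ['currency', 'regulator', 'register']
```

A real QA pipeline would maintain marker lists per market pair, but even this crude pass catches the mismatches that erode trust fastest.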

The regional signal table

For teams working across multiple Hispanic markets, these are the signals that most commonly trigger cultural mismatch in AI outputs:

| Signal | Spain (es-ES) | Mexico (es-MX) | Argentina (es-AR) | Colombia (es-CO) | Chile (es-CL) |
| --- | --- | --- | --- | --- | --- |
| Second-person | Vosotros/ustedes | Ustedes | Vos/ustedes | Tú/usted (varies) | Tú/ustedes; local slang |
| Currency | EUR (€) | MXN ($) | ARS ($) | COP ($) | CLP ($) |
| Decimal separator | Comma (1.234,56) | Period (1,234.56) | Varies | Varies | Varies |
| Hreflang | es-ES | es-MX / es-419 | es-AR | es-CO | es-CL |
| Privacy framework | GDPR + LOPDGDD | Federal law (2025 changes) | Habeas Data | National data protection | Updated legislation |
| Fiscal/commercial ID | NIF / CIF | RFC | CUIT / CUIL | NIT | RUT |
| Typical LLM default risk | Grammar as “standard,” vocab ignored | Vocab as “standard,” context flattened | Voseo erased or flagged | Ustedeo misidentified | Local markers missed |

Regional Signal Comparison — key locale markers across five major Hispanic markets. Note: number formatting can vary by platform; the key is internal consistency within a market experience. Regulatory details evolve; the point is to prevent wrong-jurisdiction defaults in YMYL content.

Where this breaks first: YMYL verticals

Not every industry feels this problem equally. But if you work in any of these verticals, cultural SEO means risk management.

  • Finance: Regulators, tax logic, product naming, and ID formats. Wrong jurisdiction bleed means your AI-generated content isn’t just unhelpful — it may be noncompliant.
  • Legal: Rights language, jurisdiction references, and compliance frameworks. An LLM citing GDPR to a Mexican user isn’t being cautious. It’s being wrong.
  • Healthcare: National agencies, approved terminology, and safety messaging. Drug names, dosage conventions, and regulatory bodies differ across every market.
  • Ecommerce: Payment methods (Bizum ≠ OXXO), shipping norms, returns, and installment culture. When your market cues conflict, the system classifies you as “not for this market.” And in GEO, classification is destiny.

In these verticals, the cost of Global Spanish is a liability exposure, compliance failure, and E-E-A-T erosion that compounds across every AI-generated interaction.

Making it operational

Frameworks are only useful if they translate into Monday morning actions. Here’s how to operationalize cultural SEO:

Week 1: Baseline audit 

  • Re-run the Article 1 Spain vs. Mexico checks across your top five transactional queries.
  • Log mismatches (currency/format, jurisdiction, and register). This is your baseline.

Weeks 2-4: Technical foundation 

  • Fix hreflang, canonicals, and structured data.
  • Ensure each market page canonicalizes to itself, carries correct priceCurrency and addressCountry, and has areaServed declarations.
  • Remove any IP-based redirects that might block AI crawlers.
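For the structured-data step, here is a sketch of emitting market-specific schema.org Product JSON-LD with the correct `priceCurrency` and `areaServed` — the product name and values are placeholders, and a real page would carry more fields:

```python
import json

# Hedged sketch: per-market schema.org Product JSON-LD so each alternate
# carries the correct priceCurrency and areaServed. Values are examples.
def product_jsonld(market: str, price: str, currency: str, country: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Example Product",
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,  # EUR for es-ES, MXN for es-MX
            "areaServed": {"@type": "Country", "name": country},
        },
    }
    return json.dumps(data, ensure_ascii=False, indent=2)

print(product_jsonld("es-MX", "899.00", "MXN", "MX"))
```

The point of generating this per market rather than templating one global block: the currency and country fields are exactly the signals a model uses to decide which market a page serves.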

Months 2-3: Content differentiation 

  • Prioritize your highest-traffic market pages for transcreation.
  • Aim for at least 30% substantive content difference between regional variants — different examples, legal references, and local proof.
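A rough way to sanity-check that 30% threshold is a plain text-similarity ratio between variants. This is a heuristic sketch using Python's standard library, not a substitute for editorial review:

```python
from difflib import SequenceMatcher

# Rough heuristic for the "30% substantive difference" target: measure how
# much two market variants overlap. A real audit would compare the rendered
# main content of each page, not raw strings.
def overlap_ratio(page_a: str, page_b: str) -> float:
    return SequenceMatcher(None, page_a, page_b).ratio()

spain = "Devoluciones en 30 días. Envío con SEUR. Pago con Bizum."
mexico = "Devoluciones en 30 días. Envío con paquetería local. Pago en OXXO."

ratio = overlap_ratio(spain, mexico)
print(f"overlap: {ratio:.0%}")
if ratio > 0.70:  # i.e., less than 30% difference
    print("Too similar: the variants risk being collapsed as duplicates.")
```

Character-level similarity is a blunt instrument — two pages can share 70% of their characters and still differ substantively — but it is a cheap first filter for flagging near-clones at scale.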

Months 3-6: Entity reinforcement 

  • Build market-specific authority signals: local media coverage, directory listings, and partnerships.
  • Ensure your knowledge graph presence is consistent and market-specific.

Ongoing: QA and governance 

  • Implement dialect stress tests across target markets.
  • Set up automated monitoring for jurisdiction bleed in any AI-generated or AI-surfaced content.
  • Establish an escalation path for YMYL content where market context can’t be confirmed.

Two metrics worth tracking from Day 1:

  • Market mismatch rate: Percentage of outputs with wrong jurisdiction, currency, or register.
  • Wrong-jurisdiction reference rate: Regulators or laws cited from the wrong country, YMYL pages only.

If you can measure those two consistently, you can prove the framework is working.
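Both metrics fall out of a simple per-output audit log. A minimal sketch — the log format here is an assumption for illustration, not a standard:

```python
# Minimal sketch of the two Day-1 metrics, computed from a per-output audit
# log. The log schema is an assumption for illustration.
audit_log = [
    {"market": "es-MX", "wrong_currency": True,  "wrong_jurisdiction": False, "wrong_register": False, "ymyl": False},
    {"market": "es-MX", "wrong_currency": False, "wrong_jurisdiction": True,  "wrong_register": False, "ymyl": True},
    {"market": "es-ES", "wrong_currency": False, "wrong_jurisdiction": False, "wrong_register": False, "ymyl": True},
    {"market": "es-ES", "wrong_currency": False, "wrong_jurisdiction": False, "wrong_register": True,  "ymyl": False},
]

def market_mismatch_rate(log: list[dict]) -> float:
    # Share of outputs with ANY wrong jurisdiction, currency, or register.
    mismatched = sum(
        1 for e in log
        if e["wrong_currency"] or e["wrong_jurisdiction"] or e["wrong_register"]
    )
    return mismatched / len(log)

def wrong_jurisdiction_rate(log: list[dict]) -> float:
    # Wrong-country regulators or laws, YMYL pages only.
    ymyl = [e for e in log if e["ymyl"]]
    return sum(1 for e in ymyl if e["wrong_jurisdiction"]) / len(ymyl)

print(market_mismatch_rate(audit_log))     # 0.75
print(wrong_jurisdiction_rate(audit_log))  # 0.5
```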

A note on what actually matters

Everyone’s talking about markdown formatting, llms.txt files, and structured data for AI. Some of that matters. But before chasing the latest optimization trick, review your:

  • Documentation.
  • Help center.
  • Knowledge base.
  • Product docs.

That’s what LLMs are actually reading and what shapes whether an AI assistant recommends you or your competitor. If an LLM had to explain what your product does in the Mexican market based only on what’s public, would the answer be any good? 

If not, you don’t have an AI optimization problem. You have a documentation problem.

The fix? Sit down and write clear, market-specific docs that both humans and machines can understand.

If you want a more structured approach, I’ve put together a cultural SEO checklist for Hispanic markets covering technical signals, content signals, entity signals, retrieval constraints, and QA governance.

Try it yourself: 5 prompts, 2 markets

Before moving on, run these five prompts through any LLM — once specifying Spain, and once specifying Mexico. The differences in the output should be intentional, not accidental:

  • “Explain how to request an invoice for an online purchase.”
  • “What ID number do I need to register as a freelancer?”
  • “Write a returns policy snippet for a €49.99 / $49.99 product.”
  • “Customer support reply: delayed delivery (mention dates and currency).”
  • “Best prepaid mobile plan — budget option.”

If the answers are identical, the model is defaulting. If they differ but cite the wrong jurisdiction, you have a retrieval problem. Either way, now you know where to start.

A word of warning — for us

There’s an irony in this article that I don’t want to skip over.

We’re telling brands to stop treating Spanish as a monolith, build market-specific signals, and respect the difference between Madrid and Mexico City. 

Then we go back to our desks and use ChatGPT to do keyword research “in Spanish.” We generate content briefs with tools that have the exact same geo-inference failures we just diagnosed. We run audits with AI assistants that default to the same “Global Spanish” we’re warning our clients about.

If the tools we use every day carry this bias, then every output we produce risks inheriting it — unless we’re actively correcting for it. That means specifying the market context in every prompt. 

Don’t trust a “Spanish” keyword list that doesn’t distinguish between markets. Treat your own AI-assisted workflows with the same rigor you’d ask of your clients’ content architectures.

The “Global Spanish” problem is also in your own stack. If you’re not fixing it there first, you’re part of the pattern.

From global content to market-specific systems

The goal is to produce Spanish that is market-true. In 2026, “localized” is a systems milestone: routing, content, entities, retrieval, and QA all have to agree on the same country context — or the model will pick one for you.

If you want a definition of done for cultural SEO, it’s this: Spain and Mexico can ask the same question and get different answers for the right reasons — and your pages are the ones that stay eligible to be cited.

Stop translating. Start architecting.

Customers want personalized marketing. Why can’t most brands deliver? by Adobe

27 April 2026 at 15:00

Think about the last time you binged those true crime documentaries. The next time you opened your streaming app, the homepage likely shifted. Investigative series rose to the top. Maybe a notification alerted you when a new series dropped. Promotional emails highlighted only what you hadn’t watched. You didn’t see the data parsing or the decisioning behind it. You just looked forward to enjoying the next title.

That’s the standard. According to the Adobe 2025 AI and digital trends report, 71% of consumers want personalized — or personally relevant — offers and information, and 78% expect seamless experiences across channels. Yet fewer than half of brands consistently deliver.

The issue is structural. When customer data lives in disconnected systems, teams will struggle to align insight, timing, and execution quickly enough to take meaningful action. AI can’t magic the problem away. According to the Adobe 2026 AI and digital trends report, fewer than half of organizations say their data foundation is adequate to support AI at scale.

At the initial stages of the modernization journey, the path to personalization can feel daunting. But progress will be easier than you think when you introduce a foundation for a unified customer experience.

The real barrier to personalization: Disconnected journeys

Most brands have plenty of data. It’s cohesion they lack. Your marketing team likely runs email, web, mobile, paid media, support, and even in-person channels. Each collects important signals, but are they sharing context across channels fast enough to shape the next interaction?

If not, the impact is immediate. A customer browses a product online, then receives an email with a different price. Or a subscriber contacts support and has to repeat their story to multiple team members before getting help. Or a loyal customer happily purchases your product—only to see the same ads promoting it in their feed for weeks after.

Even minor bumps along the customer journey chip away at trust. Nearly half of customers say they disengage when promotions feel irrelevant or mistimed.

Delivering a unified customer experience requires continuously updating your understanding of each customer and then immediately sharing that insight across every department and touchpoint.

This can require substantial change. But taking the following steps makes the path ahead more straightforward:

Step 1: Build a unified customer profile

A unified experience starts with a single, living view of the customer.

Instead of keeping separate records for each channel, create a dynamic profile that reflects behavior, preferences, and history across all departments as customer activity happens in real time. Every click, purchase, service interaction, and loyalty update should feed into the same source of truth.

With that information, customer segmentation becomes smarter and messaging becomes more relevant. Customers stop receiving duplicative or contradictory communications. And performance can be more accurately measured across the full lifecycle.

This shift moves your marketing strategy from channel and campaign management to customer-first engagement. With a unified profile in place, teams respond to customers as individuals, not isolated events. 
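As a sketch, the “single source of truth” is just every channel’s events folding into one record per customer. The field names here are illustrative, not a vendor schema:

```python
from collections import defaultdict

# Illustrative sketch: fold per-channel events into one living profile keyed
# by customer ID. Field names are assumptions, not a vendor schema.
events = [
    {"customer_id": "c1", "channel": "web",     "action": "viewed",    "item": "true-crime-doc"},
    {"customer_id": "c1", "channel": "email",   "action": "clicked",   "item": "new-series-promo"},
    {"customer_id": "c1", "channel": "support", "action": "contacted", "item": "billing"},
]

def build_profiles(events: list[dict]) -> dict:
    profiles: dict = defaultdict(lambda: {"channels": set(), "history": []})
    for e in events:
        p = profiles[e["customer_id"]]
        p["channels"].add(e["channel"])     # every touchpoint feeds one record
        p["history"].append((e["action"], e["item"]))
    return dict(profiles)

profiles = build_profiles(events)
print(sorted(profiles["c1"]["channels"]))  # ['email', 'support', 'web']
```

In a real deployment this merge happens in streaming infrastructure with identity resolution, but the shape of the problem is the same: one key, many channels, one record.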

Step 2: Connect insights to activation in real time

Accurate data doesn’t create value on its own. Those behavior signals must trigger action to shape meaningful engagement. Cart abandonment should prompt a quick follow-up (but not too quickly). Product recommendations should reflect recent browsing and past purchases. Irrelevant offers should be removed entirely. Journeys should evolve as preferences change.

Relevance largely depends on timing, and second chances don’t come easily. Results from a Cognition Neuroscience Research project show the brain processes digital advertising in less than 400 milliseconds. Customers decide almost instantly whether a message applies to them. If systems can’t recognize context and activate insight within that window, the moment passes — and so does the opportunity to connect.

AI supports this speed at scale. It identifies patterns in customer data, anticipates purchase intent, flags churn risk, and determines next-best actions within milliseconds. Its effectiveness, however, depends on accurate, unified data. Reliable inputs enable relevant outcomes.

Step 3: Scale securely in the cloud

Privacy expectations are rising, and protecting customer data is a top priority. As organizations unify more signals and activate them in real time, governance can’t be layered on later. It has to be built in from the start.

To sustain a unified customer experience at scale, organizations need a modern cloud foundation that allows teams to process and activate data where it lives, reduce latency, limit unnecessary movement, and strengthen security controls.

In the cloud, data ingestion and activation happen faster. Infrastructure grows alongside customer volume. Compliance frameworks are embedded, not bolted on. And technology teams spend less time maintaining custom connections and more time enabling innovation.

Make every interaction count

Personalization succeeds when brands are prepared for the right moment, not just the right message. When your data foundation is unified, activation happens in real time, infrastructure is more secure, and personalization stops feeling experimental. Instead, it becomes operational. And relevance becomes repeatable.

Adobe Experience Platform on Amazon Web Services (AWS) brings these elements together and simplifies execution for your teams. Adobe Experience Platform creates real-time customer profiles that power segmentation, analytics, and journey orchestration across touchpoints. Deployed natively on AWS, it runs on scalable infrastructure designed for speed, resilience, and security—while reducing technical maintenance and complexity.

Read the eBook, Capturing attention in the age of AI, to learn more about how Adobe and AWS provide the holistic view of your customer that marketers need to deliver personalization, build retention, and increase customer lifetime value.

Or, if you’re ready to see specifically how Adobe and AWS can simplify your unique path to unified customer experiences, reach out and start the conversation today.

The latest jobs in search marketing

24 April 2026 at 22:54

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Landing that perfect SEO role starts long before the interview. It starts with getting past the digital gatekeepers standing between your resume and an actual human. And those gatekeepers are everywhere: research shows 75% to 98% of large employers use Applicant Tracking Systems (ATS) to screen resumes, and up to 75% of qualified candidates never […]
  • Search is changing fast, and not just inside Google. People discover brands on YouTube, TikTok, Reddit, Amazon, and now, of course, through AI tools like ChatGPT and Perplexity. If you want to stay valuable (and hireable) as an SEO in 2026, you need two things: That combination is how you go from “I know SEO” […]
  • The SEO landscape has evolved dramatically over the past decade, with professionals now commanding salaries ranging from $67,000 to $191,000, depending on their expertise and role. What once centered on keyword density and backlink quantity has transformed into a sophisticated discipline requiring technical chops, strategic thinking, and deep understanding of user behavior… and then we […]
  • You do not need a marketing degree or a fancy title (although each can help) to break into SEO. You need proof that you can actually move the needle. If you are willing to learn, ship real work, and show receipts, you can go from “no experience” to “getting paid to do SEO” in a […]
  • The SEO industry has evolved dramatically over the past decade, with specialized skills now commanding premium rates and attracting the most exciting projects. Many professionals find themselves at a crossroads, wondering whether to continue as generalists or focus their expertise on a specific area of search engine optimization. Making the transition from generalist to specialist […]
  • Trying to land an SEO role is not just “submit resume, get interview.” You are up against hundreds of applicants, most of whom can talk about title tags and content briefs. The resume is the filter. If you blow it, you’ll never even make it to a human. Agencies are a different animal. You are […]
  • SEO is one of those careers people either love or eventually run screaming from. On any given day you are trying to understand what Google just changed, why a client’s traffic tanked, and whether that one dev ticket from March is ever going to get shipped. You are expected to be technical, creative, political, and […]
  • How to Seamlessly Integrate AI Skills into Your Resume AI is not a side note anymore. It is baked into how companies work, hire, and grow. If you can use AI to work smarter, that belongs on your resume. In a lot of cases it is the difference between getting an interview and getting filtered […]
  • The competition for top SEO talent has reached unprecedented levels, with companies scrambling to attract professionals who can navigate the ever-changing landscape of search algorithms and AI/LLM visibility. Writing a job description that stands out requires more than listing responsibilities—it demands a strategic approach that speaks directly to qualified candidates while ensuring maximum visibility in […]
  • In the world of digital marketing, careers in SEO (Search Engine Optimization) and PPC (Pay-Per-Click) advertising often converge and overlap. While the skill sets may differ, the ultimate goal remains the same: getting brands in front of the right audience at the right time.  Whether you’re new to the field, looking to switch disciplines, or […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • THE·TEAM operates at the epicenter of sports, music and entertainment, serving talent, brands and properties on a global scale. Our brands and properties division works with iconic brands and rights holders, supporting business growth through all marketing disciplines. We’re a trusted partner to every major league, team and venue, building meaningful connections between brands, properties […]
  • Job Context The North America Director of Growth Marketing is the single-threaded owner of regional growth outcomes, accountable for end-to-end strategy, budget, and delivery of MQLs, pipeline, and efficient growth across assigned business units. This is a general manager–style leadership role that partners closely with Sales, Product, Finance, and shared services to drive scalable, high-performing […]
  • Vice President of Paid Media Overview The VP, Paid Media owns a ~$15M P&L spanning three pillars: Paid Media, Programmatic, and Creative. This role reports to the Managing Director, Performance, and is accountable for client outcomes, retention, and quality of work across all three pillars. The VP leads through Associate Directors who manage billable consultants […]
  • The Opportunity  Adobe is seeking a Group Manager, Growth Marketing (Product-Led Growth & Experimentation) to lead and scale a high-impact team responsible for in-product experimentation across Acrobat. This role sits within the Retention and Value Discovery Product Marketing team and is at the center of driving material ARR & engagement impact through rigorous experimentation, targeted messaging, and deep partnership with Product […]
  • Directive Consulting is the performance marketing agency for SaaS and Tech companies. We use Customer Generation (a marketing methodology developed by us) which focuses on SQLs and Customers instead of traditional metrics like MQLs. We offer Paid Media, SEO/Content, CRO, and Video to our clients by creating comprehensive digital marketing strategies that allow our clients […]

Other roles you may be interested in

Manager, SEO, KINESSO (Hybrid, New York, NY)

  • Salary: $90,000 – $95,000
  • Manage senior analysts and help analysts grow into the next level of their career.
  • Translate clients’ business goals and marketing objectives into successful search engine optimization strategies.

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $115,000 – $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Senior Marketing Manager, Vanguard Renewables (Remote)

  • Salary: $120,000 – $182,000
  • Work closely with CMO and RNG team to develop and execute a strategic marketing roadmap aligned with business priorities.
  • Serve as the primary marketing liaison for RNG team, acting as the connective tissue between the Marketing and Commercial groups.

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Senior SEO Manager, Lunar Solar Group (Remote)

  • Salary: $80,000 – $100,000
  • Lead strategy, execution, and deliverables across 4–6 client accounts independently
  • Own end-to-end SEO strategy and execution across all core deliverables and processes

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Local Search & Listings Manager, TurnPoint Services (Remote)

  • Salary: $80,000 – $90,000
  • Own the strategy and governance for local search visibility across all business locations.
  • Develop optimization frameworks and standards for Google Business Profiles and other listing platforms.

Senior Branding Manager, rednote (Hybrid, New York, US)

  • Salary: $228,000 – $320,000
  • Define and drive rednote’s global brand strategy, shaping its positioning across key international markets
  • Lead integrated marketing initiatives end-to-end, ensuring alignment across creative development and media execution

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.
