
Today — 5 May 2026 · Search Engine Land

ChatGPT ads show strong early CTRs — but scale is still the question

5 May 2026 at 19:24

Initial reports from SimilarWeb indicate ChatGPT ads are outperforming traditional benchmarks on engagement — but with limited inventory and small-scale tests, it’s too early to call this a long-term trend.

What’s happening. According to early analysis, ads appearing in ChatGPT conversations are generating strong click-through rates vs Display and Podcast channels, likely driven by high-intent user queries and the native way ads are integrated into responses.

Unlike traditional search ads, these placements appear directly within conversational answers, making them feel more contextual and less disruptive.

Why we care. If these early CTRs hold at scale, ChatGPT could become a serious performance channel — especially for advertisers looking to reach users at the moment of intent.

But there’s a catch: inventory is still limited, and early performance often looks better before wider rollout introduces more competition and variability.

Between the lines. High CTRs don’t necessarily mean high performance. Conversion quality, cost efficiency and scalability will ultimately determine whether ChatGPT ads can compete with established platforms like Google Ads.

There’s also the novelty factor — users may be more likely to engage simply because the format is new.

Zoom in. Some categories are already showing stronger signals than others.

Mother’s Day-related prompts are about three times more likely than average to trigger ads because they signal strong purchase intent, and brands like Etsy, Nordstrom, and flower retailers are already highly visible.

What to watch:

  • Whether CTRs hold as inventory expands
  • How conversion rates compare to search and social
  • If pricing models evolve beyond early testing phases

Bottom line. ChatGPT ads are off to a strong start on engagement — but until scale, cost and conversion data catch up, advertisers should treat this as a promising test channel, not a proven one.

Dig deeper. Advertising in AI: Insights from Real User Behavior

The 10-gate AI search pipeline: Find where your content fails

5 May 2026 at 18:37

The AI engine pipeline has 10 gates between your content and a recommendation: 

  • Discovered. 
  • Selected. 
  • Crawled. 
  • Rendered. 
  • Indexed. 
  • Annotated. 
  • Recruited. 
  • Grounded. 
  • Displayed.
  • Won. 

Confidence at each gate multiplies, which means your worst gate sets your ceiling, and a single near-zero anywhere in the chain drags the whole result down with it.

That dynamic leads to a simple rule. The “Straight C” principle: in any multiplicative system, the weakest stage sets the ceiling for the entire system, and the highest-leverage fix is always the near-zero, not the near-perfect.
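The arithmetic behind the principle is easy to sketch. In this illustrative Python snippet (the gate scores and the 0.05 "F grade" are invented for the example — real gate confidence isn't directly observable), raising one near-zero gate beats polishing an already-strong one:

```python
# Gate scores are invented for illustration; real scores aren't observable directly.
gates = {
    "discovered": 0.9, "selected": 0.8, "crawled": 0.9,
    "rendered": 0.05,  # the "F grade" in this example
    "indexed": 0.85, "annotated": 0.7, "recruited": 0.75,
    "grounded": 0.6, "displayed": 0.8, "won": 0.7,
}

def pipeline_confidence(scores):
    """Confidence multiplies across gates, so one near-zero caps everything."""
    result = 1.0
    for score in scores.values():
        result *= score
    return result

# Fixing the near-zero gate (F -> C) vs. polishing an already-strong gate:
fixed_f = dict(gates, rendered=0.7)     # raise the worst gate
fixed_a = dict(gates, discovered=0.99)  # raise an already-good gate
```

Raising rendered from 0.05 to 0.7 multiplies the end-to-end confidence by 14; raising discovered from 0.9 to 0.99 multiplies it by only 1.1. That is the "Straight C" principle in one calculation.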

Brent D. Payne nailed it in Sydney in 2019: “better to be a straight C student than three As and an F.” Gary Illyes had been sketching out Google’s multiplicative ranking model, and I scribbled the lot from memory on split beer mats while everyone else went to the bar for another round. The principle stuck with me even though the beer mats didn’t.

Applied to the 10-gate pipeline, the principle makes the work order obvious: find your F grades, fix them first, then find your D grades, and only then worry about pushing your other gates from C to B to A. Below, I’ll walk you through how to identify the weak gates and prioritize them by scope.

The pipeline runs in two phases with different logic

Phase 1 (discovered through indexed) is infrastructure- and bot-centric. It’s mostly pass or fail: either the system has your content, or it doesn’t. The fixes are technical and well-documented: sitemaps, structured data, rendering, and quality signals.

Phase 2 (annotated through won) is competitive and algorithm-centric. Your content is measured against every alternative the system has for the user’s needs.

Passing all five gates in Phase 1 means the system has your content in stock. Winning Phase 2 end to end means the system chooses you over your competition.

Each stall pattern points to its fix

Fix what’s weak. In DSCRI (the Phase 1 gates: discovered, selected, crawled, rendered, and indexed), the fixes are mechanical, and success is relatively easy to measure.

In ARGDW (the Phase 2 gates: annotated, recruited, grounded, displayed, and won), the fixes are less obvious and more indirect, and the cause-and-effect relationship is harder to demonstrate. That’s why so many brands and practitioners focus too much on mechanical fixes and not enough on competitive ones.

Each of the 10 gates is a place where the pipeline can stall. These are some suggestions, absolutely not exhaustive: use the strategies you already know, too.

1. Discovered — Stall: Bots never find the content
   • First-party (Entity Home Website): Sitemaps, IndexNow, internal linking, and inbound links
   • Second-party (semi-controlled): Link from your Entity Home Website with clear anchor text
   • Third-party (independent): Outbound links from owned properties and second-party content

2. Selected — Stall: Found but ignored
   • First-party: Internal links, inbound links, anchor text, content around links, and Publisher and Author N-E-E-A-T-T
   • Second-party: Anchor text, content around the link, and link back to your Entity Home for context
   • Third-party: Outbound links from owned properties and second-party content, anchor text, and content around the link

3. Crawled — Stall: Retrieval fails
   • First-party: Server performance, redirect chains, pruning, and canonicals
   • Second-party: Choose reliable platforms; keep URLs clean and stable
   • Third-party: Prioritize coverage on sites with strong crawl reputation

4. Rendered — Stall: Retrieved, but the system can’t process it
   • First-party: Server-side rendering, reduce external resources, and JavaScript discipline
   • Second-party: Use platform-native formatting; avoid embeds that block render
   • Third-party: Prioritize coverage on properly rendered sites

5. Indexed — Stall: Rendered, but not stored
   • First-party: Site structure, content quality, pruning, and canonicalization
   • Second-party: Content quality and original perspectives
   • Third-party: Prioritize coverage on fully indexed sites

6. Annotated — Stall: Inaccurate, low-confidence annotations
   • First-party: HTML5, structured data, schema markup, site structure, content quality, and unambiguous entity signals
   • Second-party: Unambiguous entity signals, and link to your Entity Home for disambiguation
   • Third-party: Outreach to clarify entity references, clear anchor text from your owned properties and second-party content

7. Recruited — Stall: Missing from one or more layers of the Algorithmic Trinity
   • First-party: Provide what each layer wants: recency, originality, clarity, information gaps, helpful framing, etc.
   • Second-party: Fresh perspectives, original content, and regular updates
   • Third-party: Outreach for coverage and updates from news, trade, and industry sites

8. Grounded — Stall: Not selected as a reference for the topic (not Top of Algorithmic Mind)
   • First-party: Entity identity optimization, Publisher and Author N-E-E-A-T-T, and explicitly connect claims to proof
   • Second-party: Consistency of identity, credibility signals, and link claims to proof
   • Third-party: Outreach for citations from authoritative sources, and build N-E-E-A-T-T through coverage

9. Displayed — Stall: Not chosen as part of relevant answers in the funnel
   • First-party: Close the Framing Gap at each UCD layer, improve brand N-E-E-A-T-T
   • Second-party: Frame content to match each UCD layer
   • Third-party: Outreach for coverage that closes the Framing Gap, improve N-E-E-A-T-T through external corroboration

10. Won — Stall: The page was the recommendation, but didn’t get the click, the citation, or the action
   • First-party: Write copy, titles, and descriptions that are easy for the algorithm to extract intact; frame claims so the algorithm can respect the brand narrative without rewriting it; educate the algorithm on the brand narrative so it doesn’t distort it
   • Second-party: Use platform fields the algorithm will lift verbatim (titles, summaries, intros), and keep brand narrative consistent across every property
   • Third-party: Brief publishers and partners on your brand narrative so coverage frames claims the way you’d frame them yourself, and correct distorted coverage at source

Reading the table: infrastructure fixes (Gates 1 to 5) are specific, technical, and often binary, while competitive fixes (Gates 6 to 9) point at larger bodies of work (graph presence, proof connection, and framing-gap closure) that are strategic rather than technical.

Your direct leverage also drops as ownership drops:

  • On first-party, you can fix anything.
  • On second-party, you control content but not infrastructure.
  • On third-party, your only real moves are outreach and the links you point at the property. 

The further into the pipeline the stall sits, and the further from the entity home website it sits, the more the fix becomes about positioning rather than engineering. 

You can buy your way through DSCRI. You have to earn your way through ARGD. Won is its own case. By the time the algorithm reaches won, it has either understood your brand narrative or it hasn’t. 

If it has, it respects your titles, your descriptions, and your framing, and the click or citation lands the way you wanted. If it hasn’t understood you fully, it rewrites you, and the rewrite won’t be your framing. Assuming your copywriting is top-notch, that’ll lose clients you should have won.

Educating the algorithm on the brand narrative is the work that decides which of those two outcomes you get, and that work happens across your digital footprint, over time, and at every gate.

Work outside-in, because most of what you need already exists

The pipeline runs at three scopes simultaneously — per item, sitewide, and web-wide. Every gate operates at all three. You can’t work on all three at once, which means the order you pick is the single biggest decision in the project, and most brands pick the wrong one because they’re watching their competitors instead of the structure.

Here’s a simple fact most brands miss: most of what you need is already in place. 

  • You already have claims (you own a website, you’ve published positioning, you’ve explained who you are and what you do). 
  • You already have proof (clients have written testimonials, journalists have covered you, partners have referenced you, conferences have programmed you). 

The two layers exist, they’re just not connected. Joining the dots between existing claims and existing proof is the biggest single piece of leverage available to almost any brand. 

Almost nobody is doing it systematically because they’re too busy creating new content from scratch. When I say “join the dots,” that means both bi-directional linking and framing (which I covered in “The framing gap: Why AI can’t position your brand”).

That insight reorders the work. The right sequence is outside-in, and it lines up with claim, prove, and frame at the scope level.

Sitewide first

Get your claims structurally consistent at scale. Templates make it easy for bots to digest your site only if they’re consistent. Get the templates right, and the content taken as a whole reads clearly. 

Make sure the categorization is logical, the schema is uniform, the internal linking pattern is predictable, and the HTML5 is built to help bots perform chunking that produces high-confidence, well-bounded representations of every part of every page. 

Get the templates wrong, and the algorithms annotate everything with low confidence because the chunking was bad, the categorization was illogical, and the structural signals contradicted each other. That’s a sitewide weakness that the content carries through. This is cascading confidence at scope level.

Content is the input, context is what the templates supply, and confidence is what the system produces when context is consistent enough to make sense of the content. Start at the site level because that’s where the cascade either begins clean or collapses before it starts.

Dig deeper: The funnel flip: Why AI forces a bottom-up acquisition strategy

Web-wide second

Connect the dots to the existing proof. Once your owned property is making consistent, machine-legible claims, the second- and third-party footprint is where those claims get corroborated. 

The work here is mostly auditing, not creating: independent journalists who’ve already covered you, client testimonials sitting on client domains, conference programs that name you, partner mentions, and third-party reviews that already exist. 

This is the prove layer, and the leverage is enormous because your competitors are mostly not doing it. They’re watching each other’s websites while the independent layer that actually decides who AI recommends sits unattended on the open web. So, update what you can, and insert bi-directional links strategically to “connect the dots physically.”

Per item last

Frame the connection between claim and proof. Once sitewide claims are clean and web-wide proof is surfaced, it’s time to bring it all together in individual items. 

Per-item work builds the relational bridge between specific claims and the evidence. It’s up to you to provide the interpretive frame that tells the algorithms how to read the connection and closes the framing gap one page at a time. 

Framing only earns its full return once the two layers underneath are solid, because the frame is the connection between things that already exist, and there’s nothing to connect if the claim is incoherent or the proof hasn’t been surfaced.

Fix the earliest broken gate first, or the fix downstream does nothing

The pipeline is sequential. Each gate’s output is the next gate’s input. 

First job: get content flowing through every gate without an absolute fail at any point. If discovery is broken, improving your annotation does nothing because your content never reaches annotation. 

The rule is simple: find your earliest failing gate, fix it, then re-measure everything downstream on the improved signal. Fixing gates out of order wastes budget because the bottleneck hasn’t moved. I filed a patent for the technical implementation of this principle, but the principle itself doesn’t need the patent — it’s how any sequential system works.

Once nothing is absolutely failing, start fixing the weakest gates one by one, from weakest to strongest, to maximize the effect of each fix on the signal that flows through everything downstream. 

If rendering drops 50% of your useful content, every downstream gate inherits the damage, no matter how strong your competitive positioning is. Push that up to 100%, and you’ve doubled the signal for everything that follows.
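The work order described above can be expressed as a small triage helper. In this sketch (gate names follow the article; the scores and the 0.1 "absolute fail" threshold are invented for illustration), hard fails are surfaced first in pipeline order, then everything else weakest-first:

```python
# Gate names follow the article's pipeline; scores and the fail threshold
# are invented for illustration.
GATE_ORDER = [
    "discovered", "selected", "crawled", "rendered", "indexed",
    "annotated", "recruited", "grounded", "displayed", "won",
]

def earliest_failure(scores, fail_below=0.1):
    """Return the first gate (in pipeline order) that is an absolute fail,
    or None if content flows through every gate."""
    for gate in GATE_ORDER:
        if scores.get(gate, 0.0) < fail_below:
            return gate
    return None

def triage(scores, fail_below=0.1):
    """Work order: hard fails in pipeline order, then the rest weakest-first."""
    hard = [g for g in GATE_ORDER if scores.get(g, 0.0) < fail_below]
    soft = sorted(
        (g for g in GATE_ORDER if g not in hard),
        key=lambda g: scores.get(g, 0.0),
    )
    return hard + soft

# Example: crawling is an absolute fail, so it outranks the weak competitive gates.
scores = {
    "discovered": 0.9, "selected": 0.8, "crawled": 0.02, "rendered": 0.7,
    "indexed": 0.85, "annotated": 0.4, "recruited": 0.75, "grounded": 0.3,
    "displayed": 0.8, "won": 0.7,
}
```

In the example, crawled (0.02) comes first even though grounded (0.3) and annotated (0.4) are also weak, because nothing downstream matters until retrieval works.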

Below are potential stalls at each gate (single page) with examples of fixes.

1. Not Discovered
   • Problem: Orphaned article about your brand on Poodle Parlours in Paris Monthly
   • Possible fix: Create a dedicated page on poodleparlour.paris with a TL;DR of the article (use the opportunity to close the Framing Gap), and add the publication name, author, date, and an outbound link to the article

2. Not Selected
   • Problem: The 600th episode of your podcast on your website is ignored by bots despite a link from the pagination
   • Possible fix: Link to it from the homepage, make the anchor text explicit (not “listen here”), and add the link to the YouTube version description

3. Not Crawled
   • Problem: Page load time is slow at peak times
   • Possible fix: Upgrade hosting and use a CDN

4. Not Rendered
   • Problem: Schema isn’t being ingested by the LLM bots
   • Possible fix: Move schema inline, or, if that isn’t possible, add the same data to an HTML table on the page

5. Not Indexed
   • Problem: Rendered, but not stored
   • Possible fix: Site structure, content quality, HTML5, and schema markup

6. Badly Annotated
   • Problem: Inaccurate, low-confidence annotations
   • Possible fix: HTML5, structured data, schema markup, site structure, content quality, and unambiguous entity signals

7. Not Recruited
   • Problem: Missing from one or more layers of the Algorithmic Trinity
   • Possible fix: Provide what each layer wants: recency, originality, clarity, information gaps, helpful framing, etc.

8. Not Grounded
   • Problem: Not selected as a reference for the topic (not Top of Algorithmic Mind)
   • Possible fix: Entity identity optimization, Publisher and Author N-E-E-A-T-T, and explicitly connect claims to proof

9. Not Displayed
   • Problem: Not chosen as part of relevant answers in the funnel
   • Possible fix: Close the Framing Gap at each funnel layer (Understandability, Credibility, Deliverability), and improve brand N-E-E-A-T-T

10. Not Won
   • Problem: The page was the recommendation, but the algorithm rewrote your title and description
   • Possible fix: Improve Understandability of the brand narrative and framing; tighten the title, description, and intro so the algorithm extracts your version intact rather than rewriting it. These remain the most visible elements at the zero-sum moment in AI.

Reading the table: these are gate-by-gate example issues at item level, each with suggested solutions. You’ll see that many of the fixes are actions you’d take at sitewide or web-wide scope, which is the point.

Scope determines whether the fix touches one URL or thousands, but the underlying mechanism at each gate is identical. Per-item work is where the fixes get specific, but the patterns repeat.



The authoritative entity advantage compounds across the competitive gates

One strategy will improve your grade at almost every gate in the AI engine pipeline: entity optimization. 

When your brand entity is fuzzy across the three graphs (document, concept, and entity), actively optimizing the entity identity improves clarity, focus, and confidence at almost every gate.

But the advantage you’ll gain isn’t uniform: at the infrastructure gates it does little, but from annotation onward, it will make a huge competitive difference.

Here’s the authoritative entity advantage at each pipeline gate.

1. Not discovered — Marginal. A recognized entity in an outbound link from a third party is slightly easier to identify and trace, but discovery itself is infrastructure-driven.

2. Not selected — Significant. A recognized, trusted entity in anchor text (or near the link) increases the probability of selection.

3. Not crawled — None. Crawling is purely server, redirect, and rate-limit mechanics.

4. Not rendered — None. Rendering is purely technical processing.

5. Not indexed — Moderate. Entity clarity helps the system make canonicalization and deduplication calls with confidence; fuzzy entities produce fuzzy storage decisions.

6. Badly annotated — Major. Entity confidence is the foundation of accurate annotation. A fuzzy entity produces low-confidence, often inaccurate annotations across every dimension. A clear entity produces clean, high-confidence annotations.

7. Not recruited — Major. Recruitment into the entity graph, document graph, and concept graph is entity-driven. Clear entities get recruited — fuzzy ones get passed over for clearer alternatives.

8. Not grounded — Major. Top of algorithmic mind is entity-driven: topical ownership, N-E-E-A-T-T, knowledge graph presence, and more. The system grounds in references it trusts.

9. Not displayed — Significant. Entity recognition reduces hedging at display. The system speaks confidently about entities it understands well and hedges on the ones it doesn’t.

10. Not won — Major. Entity confidence decides whether the algorithm respects your brand narrative or rewrites it. High confidence means titles, descriptions, and framings get extracted intact. Low confidence means the algorithm fills in the gaps from training data, and that won’t be the narrative you carefully crafted.

Reading the table: entity advantage is zero or marginal at Gates 1 to 5 (infrastructure), then carries the heaviest load through Gates 6 to 9 (the competitive phase). At won, it’s the mechanism that decides whether the algorithm respects your brand narrative or rewrites it.

This is the most underrated insight in the whole diagnostic. Optimizing any single gate gives you one gate’s worth of improvement. Optimizing the entity gives you compounding improvement across all five gates from annotated through won, which is why entity-led optimization outperforms page-led or keyword-led optimization in AI search.

The authoritative entity advantage names that compounding effect, and it’s the structural reason brands whose entities remain fuzzy pay a confidence tax at every competitive gate.

Before you create anything new, audit what you already have

Once you know which gate is failing, the first question to ask yourself isn’t “what do I need to create?” It’s “what do I already have that would fix this?” 

The content on your website already makes most of the claims you need, though rarely as clearly and consistently as it could. And every brand has more existing proof than it’s fully leveraging.

Look at things like conference programs, client case studies, trade publications, podcasts, social media, reviews, and third-party mentions. There might be a lot that you have never explicitly connected back to your brand.

Audit-first beats create-first on every metric that matters. Audit-first is cheap and fast. Create-first is expensive and slow.

The diagnostic tells you which gate needs the work, the audit tells you what you already own that could do the work, and the audit also tells you where the genuine gaps are, so when you do create something new, you’re filling a gap the diagnostic identified rather than guessing.

That principle drives the temporal triad: ROPI, ROI, ROFI.

The temporal triad turns the diagnostic into a working plan: ROPI, ROI, and ROFI

  • Return on past investment (ROPI) is the audit-first work itself: linking existing claims on your website to existing proof scattered across your digital footprint so the assets you’ve already paid for start paying you back. It’s the cheapest, fastest, and almost always the highest-leverage move available, because the asset has already been built and you’re paying only for the connection.
  • Return on investment (ROI) is the present-tense work: expanding on content that’s already live, filling the gaps the audit reveals, and creating new pieces in the short term to support what you’re doing today. This is the layer most brands jump to first, and it’s the most expensive of the three when run in isolation, because new creation without ROPI underneath means you’re paying full price to build assets that are already partially in place.
  • Return on future investment (ROFI) is the planning layer, and it’s where brand strategy and pipeline strategy converge. If you have a clear sense of where the business is going (which categories you’ll own in three years, which positioning you’ll claim, which framings you’ll need supporting evidence for), you can plant seeds today that won’t serve you this quarter but will be load-bearing in 12 or 24 months.

At my company, we plant seeds constantly: claims and framings published now that aren’t doing visible work today but will be the corroborated proof we’ll need when the next phase of our long-term strategy rolls out. The brand that runs ROFI consistently is shaping the frame against which competitors will be measured in the future.

Because you’re educating and training the algorithms, ROFI actually shapes, in your favor, the criteria by which the market will judge you.

Three time horizons for your content (wherever it lives online): ROPI extracts value from what you’ve already built, ROI improves the present, and ROFI engineers the future.

The same diagnostic works across every AI engine

The 10 gates describe what search engines, assistive engines, and assistive agents actually do, in order, every time they decide whether to recommend you. 

Crawl, index, rank was the right model for a 1998 search engine. It hasn’t been the right model for a long time. The brands that are still optimizing for three steps when the systems run on 10 are optimizing for a model that the engines don’t use.

This isn’t my framework. It’s the engines’ framework.

The engines don’t care what you find easy to measure, fun to do, or impressive at the next conference. They care whether your content survives all 10 gates with high confidence at each, and they reward the brands that build for the gates with citations, recommendations, and the actions that follow.

So treat the pipeline as a system and run it like one. Fix your F grades first and your D grades next. Work outside-in because that’s where the leverage already lives, and watch the rest compound on top of work you’ve barely had to pay for.

Follow the system, and AI search pays you back, year on year, engine after engine, long past the lifespan of any acronym fashion.

Web Bot Auth, Google’s new experimental method to validate authentic bots

5 May 2026 at 17:25

Google is trying a new method of bot authentication named Web Bot Auth. Google posted a new help document that explains that Web Bot Auth is a “new cryptographic protocol that helps websites to validate that bots are authentic.”

The goal of Web Bot Auth is to help you automate the process of authenticating which AI agent bots are authentic and which are fraudulent.

Limited test. Google said the company is “testing the protocol with some AI agents hosted on Google infrastructure.” Not all Google user agents are using Web Bot Auth, and Google is not yet signing every request from agents that use the protocol.

Thus, Google recommends that, in addition to Web Bot Auth, you continue relying on IP addresses, reverse DNS, and user-agent strings as it gradually rolls out signed traffic.

What is Web Bot Auth. Google defines it this way: “Web Bot Auth is an experimental cryptographic protocol used to authenticate requests sent by bots. Instead of relying solely on self-reported headers and IP addresses, Web Bot Auth allows agents to cryptographically sign their requests.”

Web Bot Auth can bring the following benefits according to Google:

  • Future-proofing: Help establish a web where agent providers and websites can build mutual trust and make informed access decisions.
  • Cryptographic certainty: Move beyond easily spoofed headers to a verified identity and decouple agent identity from IP addresses.
  • Better observability: Gain clearer insights into how agents interact with your content.
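To make the sign-then-verify flow concrete: per Google’s description, the real protocol uses asymmetric signatures, so a site verifies requests against the agent provider’s published public key and no secret is shared. The sketch below is a deliberate simplification — it substitutes an HMAC shared secret, and the key and message format are invented — purely to illustrate why a cryptographic signature beats a spoofable user-agent header:

```python
import base64
import hashlib
import hmac

# HYPOTHETICAL SIMPLIFICATION: the real protocol uses asymmetric signatures,
# so the site verifies with the agent provider's published public key and no
# shared secret ever changes hands. An HMAC stands in here only to keep the
# sketch dependency-free; the key and message format are invented.
DEMO_KEY = b"demo-shared-key-not-part-of-the-real-protocol"

def sign_request(method, path, body, key=DEMO_KEY):
    """Agent side: sign the request components the site will verify."""
    message = f"{method}\n{path}\n".encode() + body
    digest = hmac.new(key, message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def verify_request(method, path, body, signature, key=DEMO_KEY):
    """Site side: recompute the signature and compare in constant time."""
    expected = sign_request(method, path, body, key)
    return hmac.compare_digest(expected, signature)

# A signature binds the exact request, so replaying it against another path fails.
sig = sign_request("GET", "/robots.txt", b"")
```

The design point carries over to the real protocol: because the signature covers the request itself, a forged bot can’t simply copy a header it saw elsewhere, which is what makes identity verifiable rather than self-reported.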

Why we care. As AI agents become more common across the web, managing which agents can access your site and pages may become a growing challenge. This new authentication method may help you allow authentic AI agents and block inauthentic ones.

Again, this is an “experimental” feature right now, so keep track of its progress.

Query intent vs. conversion intent: Why the difference matters

5 May 2026 at 17:00

One of the major reasons PPC practitioners hold onto syntax-oriented keyword strategies is the disconnect between “query intent” and “conversion intent.” For years, you’ve likely relied on keywords to reveal what your customers want and to prequalify traffic using syntax-oriented signals.

As user behavior shifts to more conversational queries and AI becomes an increasingly relevant part of the user journey, the distinction between these two intents becomes even more critical to understand and act on.

Here, we’ll define query and conversion intent and explore strategies to apply them effectively. This isn’t prescriptive. You should make decisions based on what will serve your business well. However, it provides a framework for analyzing your data and optimizing for the right humans.

Disclosure: I’m a Microsoft employee, and I’ll be sharing some examples that pull from Microsoft tooling. However, most of the strategies reflect platform-agnostic approaches.

What are query and conversion intents?

Query intent is the underlying need driving the text put into a search function. This search function can be on a SERP (search engine results page), video/social/gaming/email/site search bar, or AI surface.

Conversion intent is the human need to achieve some outcome, understood through stated and inferred data points. These range from text entered in various search experiences, content consumed, and tracked actions taken.

Different examples of query and conversion intent will carry higher or lower confidence based on how explicit the text is, as well as patterns in content consumed.

For example, if I search “Microsoft ads login,” both query and conversion intent are clear — I want to log in. It’s easy to match ads and organic content to that query. Video results would focus on logging in, and emails would center on login information.

[Screenshots: Google SERP, Bing SERP, and YouTube results]

The query “Microsoft ads” is more nebulous and, as such, needs to draw from other signals like previously engaged content and search history. While I might get a login page, I’d likely also see blog/sales content, third-party advice on Microsoft ads, and potentially competitor info trying to capitalize on the general nature of the query.

[Screenshots: Google SERP, Bing SERP, and YouTube results]

Let’s look at a non-branded example as well. “Purple hair dye” has a clear transactional intent. While the user might not have a brand in mind, they know they want a specific color. 

We don’t know if the user is looking for a semi-permanent or permanent color. We also don’t know the user’s pronouns, so matching them to a specific demographic to entice a purchase is a gamble. 

[Screenshots: Google SERP, Bing SERP, and YouTube results]

In the query “purple hair dye for long wavy hair,” the transactional intent is maintained. However, the query focuses more on the core needs of the person behind the text. Long, wavy hair means there needs to be enough dye to cover long hair.

Additionally, while some men have long wavy hair, the person behind the query is more likely to identify as female. 

Wavy hair has a different composition than straight or curly hair, so products specifically for wavy hair will be more relevant than those without hair type identifiers.

[Screenshots: Google SERP, Bing SERP, and YouTube results]

In all of these examples, there was clear conversion intent. The human behind the query clearly wanted to achieve something. However, if we relied only on the text (i.e., query intent), we might miss a meaningful opportunity to connect with customers. 

This is why close variants (which have been available on both Google and Microsoft for ~10 years) represent a useful way to unshackle ourselves from syntax alone.

Additionally, by limiting our understanding of queries to SERPs, we ignore critical insights from where our customers connect, work, and play. Microsoft’s internal data from March 2024 shows that brands that use both Audience ads (display, native, and video) and Search see a 6x conversion rate. Part of this is brand recognition, and the power of brand media buys influencing performance.

Yet there’s also the pragmatic piece that some marketers refuse to engage with video and social. By being where your competitors refuse to be, you can shape and capture desire while they fight over a shrinking share of voice.



How to optimize for each intent

Once you understand the difference between query and conversion intent, you can begin mapping out the actions needed to capitalize on both.

Query intent is much easier to read than conversion intent. This is why AI systems typically run queries in the background to understand human input and get at the conversion intent behind the query.

To succeed at shaping queries and capturing conversions, it’s critical to understand the input points for humans and the AI systems that will be serving them results.

Let’s revisit the “purple hair dye for long wavy hair” query:

Copilot surfaces how it arrived at the output by looking up information and finding the best matches. This is similar to the SEO concept of E-E-A-T.

Yet you’ll notice that the results for my personal Copilot are different than the traditional SERP (chiefly that ads aren’t the dominant result — ads serve at the bottom of clearly transactional conversations after organic listings).

This is where the “Details” function comes into play and can help you know where to focus content, feed, and messaging functions:

This product is pretty flat on price, save for some deep summer dips. If I’m desperate for color, I might buy now, or I might wait for what seems like a regular summer sale. I’m also getting insights into why this product is wonderful (hair conditioning, cruelty-free, vibrant, and customizable color, etc.).

These are things I’ve shown interest in through past purchases, conversations with Copilot, and other signals it has access to.

Brands that want to optimize for query intent need to make sure the following are in good order:

  • Feed/landing page clarity
    • It should be incredibly easy to map what the product/service is to the query. While there is value in some 1:1 matching of language, it’s much more important that the core offering be understood as aligned with what the human is looking for.
    • For example, DUI and DWI are technically two different charges, and which one applies varies by geography. However, DUI tends to be the universal term people use for the legal charge and service.
  • Images adding context
    • Visual content is critical to engage humans. However, if the image isn’t clear or is duplicative of another service/product page, you might confuse the user and the machine attempting to understand and position you for queries. This is why it’s critical to add alt text (even on paid landing pages) for images and videos.
    • A good way to test whether your visuals are serving you well is to put the landing page into a PMax campaign creator. If you see the images and they match the correct service text, you’ve done a good job.
  • Invest time in understanding how humans and AI are querying
    • Free tools like Google Trends, Microsoft Clarity, and Bing Webmaster Tools offer insights into search trends, citations, grounding queries, and which AI systems and humans are successfully engaging with your content.

Conversion intent is more straightforward to define, though arguably harder to act on because it requires more creative and critical thinking: 

  • Matching messages to personas
    • The reason one person says yes to you might be completely different from the reason someone else does. Locking in conversion intent includes being mindful of how you’re selling yourself. If you ignore what matters to your customers in reviews, intake from customer success or sales, and other signals, you risk selling yourself badly and losing the customer.
    • This is where AI-powered creative and audience mapping can be helpful, since platforms have access to more insights than a brand does during the auction.
  • Honor the impulse nature of visual content
    • Someone coming to you from a display spot or short video is very different from someone coming from a text-laden SERP. They were inspired to act and need frictionless paths to conversion.
    • One-click checkout (including solutions like Copilot Checkout) ensures humans don’t need to think to do business with you.

Ultimately, both query and conversion intent need brand and performance marketing to be successful, and it’s critical to understand how the success metrics manifest.

The converging roles of brand and performance

For a long time, brand and performance marketing were treated as separate motions, with separate owners, budgets, and success metrics. 

  • Brand was about reach, recall, and long-term connection. 
  • Performance was about efficiency, conversion rate, and immediate return. 

That separation made sense when channels, measurement, and user journeys were cleaner than they are today. It’s much harder to maintain in an environment where AI systems infer intent continuously and across surfaces. 

A user doesn’t experience brand and performance as separate. They experience confidence, familiarity, relevance, and ease. Those signals are created over time through exposure, engagement, and trust, and they often determine whether conversion intent ever materializes, regardless of how “high intent” a query might appear on its own.

From a metrics perspective, this convergence is clear. Brand-oriented activity influences performance outcomes even when it isn’t the final touch. Exposure to display, native, or video doesn’t always produce an immediate click, but it changes how humans and systems interpret future behavior. 

When someone later performs a search, engages with an AI assistant, or compares options on a marketplace, prior brand interactions act as accelerators. They reduce hesitation, shorten decision cycles, and increase the likelihood that a conversion signal will be credited downstream.

From a strategy standpoint, this means brand work should no longer be evaluated solely on isolated upper-funnel KPIs, and performance work can’t be evaluated purely on last-click efficiency. 

Audience-based formats, contextual placements, and visual storytelling directly shape conversion intent by shaping preferences and expectations before a query even occurs. Search and shopping formats then serve as capture mechanisms, translating that latent intent into action.

This is particularly relevant in AI-assisted experiences, where systems synthesize multiple inputs before presenting options or recommendations. Content, feeds, reviews, images, and historical engagement all influence how brands are represented and when they appear.

In these environments, strong brand signals don’t compete with performance outcomes. They enable them by making the brand easier to understand, trust, and choose.

Brand and performance don’t need to use the same tactics, but they must be planned together. Measurement frameworks should account for assistive value, not just final interactions.

Creative strategies should recognize that inspiration and conversion often happen at different moments. Optimization should focus less on forcing intent into rigid buckets and more on supporting the full decision journey.

When we recognize that query intent and conversion intent are related but not identical, the convergence of brand and performance becomes less a philosophical debate and more an operational necessity.

Success comes from designing systems that reflect how humans actually decide, not just how they type.

Key takeaways

  • Query intent describes what is said; conversion intent reflects what the human needs to accomplish. They overlap, but they aren’t interchangeable.
  • Brand activity shapes conversion intent long before a query is expressed and influences how future interactions are interpreted.
  • Performance outcomes improve when brand signals reduce friction, uncertainty, and choice overload.
  • AI-driven experiences amplify this convergence by relying on cumulative signals rather than single actions.
  • Sustainable optimization requires aligning brand and performance strategies, metrics, and expectations around the same human outcomes.

How China’s fragmented search ecosystem is reshaping SEO in 2026

5 May 2026 at 16:00

In February 2025, the world watched as a small group of humanoid robots took the stage at the CCTV Chinese New Year show for the very first time. It was a charming performance, even if the steps were shaky and the movements were mostly limited to the arms.

Just one year later, at the Spring Festival Gala, the shaky steps were gone: the humanoid robots could actually run, do standing somersaults, and perform full kung fu routines with swords and nunchaku. The message was clear: in just one year, we witnessed a decade’s worth of advancement.

The 10-year leap in technology is real and not limited to robotics. Which raises a critical question for every digital marketer eyeing the world’s largest web population: How has search in China progressed in recent years?

A parallel in the Chinese search landscape

The answer is that we’re witnessing the first, calculated tremors of a massive shift. AI models have not yet replaced traditional search. The evolution isn’t happening through a single “big bang,” but through a constant, iterative pulse. 

New LLMs are surfacing every few months, each more specialized than the last. Chinese tech giants are increasingly open-sourcing their models, and even industry leaders are hedging their bets. Baidu, for example, is integrating DeepSeek into its search experience, even as its own Ernie (Wenxin) model remains a formidable powerhouse.

Let’s look at how users actually search in China today — and what this nuanced shift from links to reasoning means for your 2026 SEO strategy.

The great narrative fallacy: Is web search dead in China?

In many marketing circles, a specific narrative has been repeated so often it has become an article of faith: “Traditional search on Baidu is dead — and has been for years. Websites are obsolete. In China, everything is WeChat.”

This narrative is almost always driven by service providers whose business models depend on WeChat, Douyin, Weibo, or Xiaohongshu marketing. To them, the “open web” is a ghost town. But is this actually true?

The social supremacy argument

There’s a grain of truth in the hype. The Chinese web is a mobile-first multiverse. Users access and explore the web through super-apps:

  • RedNote (Xiaohongshu / Little Red Book): This is the de facto engine for lifestyle research and travel planning.
  • Pinduoduo and Douyin: These are the juggernauts of social commerce and impulse buying.
  • WeChat: The absolute center of daily life, where everything from a quick message to a utility bill payment via QR code happens.

In this environment, social media isn’t just a channel. It’s the air people breathe. For B2C brands, social ads can — and often do — exceed website-driven sales by orders of magnitude.

The B2B reality check

For those of us working with B2B companies that need real visibility in China, the “Baidu is dead” narrative falls apart the moment you look at the analytics. Clients who invest in Baidu SEO and Baidu search engine advertising (SEA) continue to see a steady, high-volume stream of real human visitors — in many cases generating more qualified leads and higher conversion rates than their counterparts in the UK or Germany.

Why? Because when a B2B procurement officer or a technical engineer needs a specific industrial solution, they don’t just scroll until they find it on a social media feed. They search for a verified, authoritative source. In other words, they look for a website.

Is the social media narrative a lie? No. But ignoring a channel that — at least in the B2B sector — remains more effective in China than in many search-first Western countries is simply bad business. The goal isn’t to choose one over the other; it’s to understand how they coexist. 

And just as we’ve settled the debate between web marketing versus app marketing, a new challenger — the LLM — has entered the battleground to disrupt both.

Mapping the 2026 landscape: Intent-based specialization

To a Google-first marketer, the idea of searching anywhere but a search engine feels like a detour. In China, it’s the standard operating procedure. Users don’t just “Google it.” Instead, they choose the tool that fits the intent.

As a Baidu specialist living and working in China, I see this daily. While I might be optimizing a B2B landing page for Baidu, my wife is likely on Pinduoduo, finding household deals, or on Xiaohongshu, planning our next weekend trip. 

The “everything app” exists, but the “right app” always wins the click.

1. Traditional web search: The authority tier

Despite the “death of the web” narrative, traditional web search remains the primary battleground for B2B and high-authority research. If a user needs a technical whitepaper, a government regulation, or a verified corporate headquarters, they go here.

  • Baidu: Still the mobile heavyweight, with a ~70% mobile market share. Its structural advantage is massive: The Baidu app is installed on over 724 million monthly active devices (as of early 2026). It has evolved into an AI-first portal, but for SEOs, it remains the place where the open web lives and breathes.
  • Microsoft Bing: The professional’s sanctuary. It has claimed a massive chunk of desktop search for those seeking a cleaner, international, or technical experience.
  • Haosou (360 Search): The enterprise default, often pre-installed on corporate PCs and known for its security focus.
  • Sogou: Deeply integrated with WeChat, it’s the bridge between the walled garden and the web.
  • Google: Yes, Google. Despite the firewall, a significant population of tech-savvy professionals and researchers use it via VPN for global technical data and academic resources.

2. Social discovery: The inspiration tier

This is where search becomes discovery. Users don’t always have a keyword, but they do have an interest. In this context, SEO is about social indexing: ensuring your brand appears when a user looks for proof and not just products.

  • WeChat (Weixin): The internal search for official brand news and private traffic.
  • Xiaohongshu (RED): The ultimate product-discovery engine. If you aren’t on RED, you don’t exist in the lifestyle or luxury sectors.
  • Douyin: Visual, video-first search. Users search Douyin to see how something works.
  • Kuaishou: The powerhouse for lower-tier cities and raw, authentic grassroots content.
  • Weibo: Real-time search — what is happening right now in the public eye.
  • Bilibili: Long-form video search for deep dives, tutorials, and Gen Z subcultures.

3. Ecommerce: The transactional tier

In the West, users often start on Google and end on Amazon. In China, the journey frequently starts and ends in the same place.

  • Taobao / Tmall: The grand bazaar. If you want variety and brand stores, this is the first stop.
  • JD.com: The Amazon of China for logistics and high-end electronics.
  • Pinduoduo: The favorite for daily essentials and group-buy deals. Its search logic is entirely driven by value for money.
  • Douyin Mall: The rising star of “impulse search,” merging entertainment with immediate checkout.
  • Xianyu (Goofish): The go-to for the thriving second-hand market and hobbyist niches.

4. Generative AI (LLMs): The reasoning tier

This is the newest layer of the map — the “thinking” search. These AI models don’t just produce lists of links. They are assistants that synthesize the web for the user.

  • Doubao (ByteDance): Currently the most popular consumer AI assistant, used for casual, conversational queries.
  • DeepSeek (Domestic): The choice for developers and those in need of “deep thinking” logic. It’s the engine currently getting tested inside WeChat’s search bar.
  • Kimi (Moonshot AI): The king of long-context. Users use Kimi to search through 50-page PDFs or complex financial reports.
  • Qwen (Alibaba): Powerfully integrated into the Alibaba ecosystem for business and coding tasks.
  • Tencent Yuanbao: The “AI brain” for WeChat content.
  • Wen Xiaoyan (Baidu): The AI-facing evolution of Baidu search.

5. Hyper-local and logistics: The utility tier

For the physical world, search is about “now” and “near me.”

  • Meituan / Dianping: If you’re hungry or want to see a movie, you don’t use Baidu. You use Dianping for reviews and Meituan for transactions.
  • Amap (Gaode) / Baidu Maps: The “search engines of the real world.” SEO on these platforms is purely about point-of-interest (POI) optimization.
  • Ctrip (Trip.com) / Railway 12306: The specialized gates for the massive domestic travel market.


From mapping to maneuvering: The Baidu specialist’s edge

Baidu SEO isn’t dead; your website just isn’t the sole focus of web search anymore.

The ‘walled garden’ SERP: A decade of distraction

If you’re a Google-centric SEO, there are some notable differences when working with Baidu:

  • The ad-heavy layout: It isn’t uncommon to see ads claiming the top, middle, and bottom of a Baidu search engine results page (SERP), occupying nearly 50% of the visible real estate.
  • The Baidu monopoly: The most coveted organic positions are almost always reserved for Baidu’s own properties. Baidu Baike (the encyclopedia), Baidu Zhidao (the Q&A hub), and Baijiahao (the news/blogging arm) are the permanent residents of Page 1.
  • The portal giants: High-authority giants like Zhihu (China’s Quora), Bilibili, and Sohu take up whatever space is left.

Riding the Chinese SERP dragon

In this environment, ranking a corporate homepage for a high-volume keyword is a fool’s errand. Instead, we’ve mastered the art of the “long-tail dragon.”

In the West, we talk about the long tail of search as a small, niche opportunity. In China, with its linguistic complexity and massive user base, the long tail is a winding, multi-layered beast that is often more lucrative than the head terms. 

And we don’t just rank a website; we piggyback on the authority of the platforms Baidu already trusts. If you can’t beat Baidu Baike, you become the verified entry inside it.

Interestingly, it is these very platforms — the ones we’ve been using to bypass the “blue link problem” — that have now become the primary focus of the next generation of search.

What is changing in Baidu SEO?

In China, there is no brand loyalty toward particular AI models the way Westerners have toward platforms like ChatGPT and Claude.

The AI-switching reality

Chinese users are restless. They don’t stick with one model. They switch — sometimes because a hyped model hits a downtime wall, and sometimes because a new model claims the throne of the “most intelligent AI.” In this cycle of competition and user preference, an SEO can’t just focus on the “big sources.”

If you’re following the Western playbook, you’re likely chasing Reddit, Quora, and YouTube as your “sources of truth” for AI training. But in China, that focus is dangerously narrow. To win the reasoning battle, you must understand the investor-source connection.

Brainstorming the wisdom platforms

If you want to train AIs to see your brand in China, you have to look at the platforms they were built on:

  • Tencent owns Sogou. In 2021, Tencent took Sogou fully private. This means Sogou Baike is no longer just a Baidu alternative — it is now a core training set for Tencent’s Yuanbao. If you ignore Sogou Baike, you’re invisible to the AI search bar inside WeChat.
  • ByteDance owns Baike.com. ByteDance bought Baike.com (formerly Hudong Baike) specifically to fuel its search ambitions. If you want to get cited by Doubao, your content needs to be mirrored here and not just on Baidu.
  • The neutral giants: Keep an eye on Zhihu. Because both Tencent and Baidu are heavy investors in Zhihu, it remains one of the few neutral high-authority sources that almost every Chinese LLM uses for opinionated or expert reasoning.

The new SEO commandment

We’re no longer just optimizing for a search engine. We’re optimizing for a data pedigree.

If your client is B2B, you might still prioritize the Baidu ecosystem. But if your client is in ecommerce and you aren’t feeding the Qwen engine via Alibaba’s ecosystem, or the Doubao engine via Baike.com, you’re limiting your visibility across key AI systems.

The 2026 China SEO/GEO blueprint: From keywords to semantic saturation

If you’re waiting for a “DeepSeek optimization checklist” or a “Doubao ranking guide,” you’ve already missed the point. Because users switch models as often as they switch takeout apps, you can’t afford to be “Baidu-only” or “WeChat-centric.”

Here is what’s actually working for SEO in China in 2026:

Optimize for citations and not just clicks

While SEO in the West is focused on generative engine optimization (GEO), in China, it’s all about fact density. 

  • The logic: When Kimi or DeepSeek performs a reasoning query, the AI looks for verifiable facts.
  • The tactic: Stop writing marketing fluff. Start using the inverted pyramid writing style. Lead with a direct, data-backed answer in your first paragraph. Use hard statistics, expert quotes, and structured lists. If a model can’t extract a fact from your content in 200 milliseconds, it might hallucinate a competitor’s data instead.

Build an entity moat across wisdom platforms

As we brainstormed earlier, every AI has a “parent” with a preferred data source. But since models are now open-sourcing their weights and distilling each other’s intelligence, your brand must achieve entity consistency.

  • The goal: Your brand name, headquarters, and core product claims must be identical across Baidu Baike (Baidu), Sogou Baike (Tencent), and Baike.com (ByteDance).
  • The result: When these models cross-check their reasoning, they find a consensus. In 2026, consensus is the new authority.
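
The consistency check itself can be automated. Here’s a minimal Python sketch — the platform names are real, but the brand records and field names are hypothetical — that flags fields whose values diverge across the wisdom platforms:

```python
# Hypothetical brand records as they might appear on each wisdom platform.
records = {
    "Baidu Baike": {"name": "Acme Ltd", "hq": "Shanghai", "founded": "2010"},
    "Sogou Baike": {"name": "Acme Ltd", "hq": "Shanghai", "founded": "2010"},
    "Baike.com":   {"name": "Acme Limited", "hq": "Shanghai", "founded": "2010"},
}

def inconsistent_fields(records):
    """Return the fields whose values differ across platforms."""
    fields = set().union(*(r.keys() for r in records.values()))
    return sorted(
        f for f in fields
        if len({r.get(f) for r in records.values()}) > 1
    )

print(inconsistent_fields(records))  # → ['name']
```

In this toy example, the brand name drifts (“Acme Ltd” vs. “Acme Limited”) — exactly the kind of mismatch that breaks cross-model consensus.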

Leverage information gain

Chinese AI models have a well-observed recency bias — they prefer sources that are roughly 25% fresher than traditional search results.

  • The tactic: Don’t just regurgitate what’s already on Zhihu. Provide a “unique data slice.” If everyone says “The best time to post on Douyin is 6 PM,” and you publish a case study proving “11 AM is better for B2B industrial leads,” the AI will cite you as the “nuanced exception.” That citation is worth more than ten #1 rankings.

The era of the entity architect

We’ve come a long way from the shaky steps of the 2025 CCTV Gala.

In 2026, China’s search ecosystem is no longer a directory of links. It’s a living, reasoning entity.

For the Western search specialist, the lesson is clear: The “super app” was a distraction. The real story is the fragmentation of intent.

My wife still goes to Pinduoduo for the best price. My colleagues still go to Bing for technical sanctuary. And the “I, Robot” enthusiasts of 2026 are using a rotating door of LLMs to find their answers.

As a Baidu specialist, my job has shifted from “ranking a website” to “architecting an entity.” We no longer build for the bot; we build for the source. If you’re the undeniable source of truth across the platforms that shape China’s information ecosystem, it doesn’t matter which model delivers the answer.

You’ll be the one they’re cheering for.

Unifying the search experience for real growth in 2026 by Level Agency

5 May 2026 at 15:00

In February 2024, Gartner predicted that traditional search volume would drop 25% by 2026. It didn’t. Google’s search revenue accelerated to 17% year-over-year growth, crossing $63 billion in Q4 2025 alone. But clicks per search are falling while query volume explodes. The pie got bigger. The slices got redistributed. And most search teams are still optimizing for the old pie.

Are you still poring over spreadsheets full of organic keyword rankings like it’s 2003? Your customers don’t care where they’re getting their answers. They’re just looking for answers they can trust. And they’re finding those answers across more surfaces than your rank tracker knows exist.

If your organic strategy lives in one spreadsheet, your paid strategy in another, and your AI search strategy in a third (or nowhere), you’re optimizing for a search experience that no longer exists.

What “search” actually looks like now

Google “best tax software” right now. Go ahead, I’ll wait.

Count the surfaces on that single results page. Sponsored ads across the top. An AI Overview with its own recommendations and citations. A Reddit thread (because Google knows people trust other people more than brands). Organic listings from CNET, H&R Block, and others. A video carousel. Discussion forum links. A product carousel with images and prices. More sponsored results at the bottom. And a “People also search for” section feeding the next query.

That is one search. One keyword. And nobody owns it.

Now think about how different people actually use that page. I scroll past everything to find the Reddit thread, because I want to know what real humans recommend. My dad clicks the first sponsored ad because he doesn’t understand paid advertising (sorry, dad!) and just trusts Google to surface the best option up top. Someone else reads the AI Overview, gets a good-enough answer, and never clicks anything at all. A fourth person watches the Smart Family Money video and leaves.

Same query. Four completely different paths. Four different “winners.” And if you’re the brand celebrating a number-three organic ranking on this page, you may be missing that most of the real estate, and most of the user attention, lives somewhere other than those blue links.

This is what I mean by the total SERP experience. Your customer sees the whole page. You should too.

The AI layer changes the math

AI Overviews now appear on roughly 25% to 48% of Google queries, depending on the study. ChatGPT processes 2.5 billion prompts a day. Perplexity is up 239% year over year. These are real numbers from real platforms where real buyers are forming opinions about your brand, or not forming opinions because you’re nowhere to be found.

But before the panic sets in: AI tools still account for less than 1% of U.S. web traffic. Google sends 300x more referral traffic than all AI platforms combined. The sky isn’t falling, but the ground is shifting.

The shift that matters most is behavioral. Wynter’s 2026 research found 68% of B2B buyers now start their research in AI tools before they ever open Google. They ask ChatGPT to narrow the field, then Google the shortlist to validate. AI evaluates, Google verifies, and your website converts. If your brand is missing from that first AI conversation, you’re not even on the shortlist when the Googling starts.

Why the click data is more interesting than scary

A Search Engine Land analysis of 25 million organic impressions across 42 clients found organic CTR drops 61% when an AI Overview appears. In addition, paid CTR drops 68%.

EVERYBODY FREAK OUT!!! Right? Not quite.

Here’s what the panicked LinkedIn posts leave out: brands cited inside AI Overviews see 35% more organic clicks and 91% more paid clicks. Being in the AI Overview doesn’t cannibalize your traffic. If anything, it amplifies it. The AI Overview functions like a trust signal, a stamp of “this brand is relevant to your question” that makes people more likely to click your listing below.
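
A quick worked example helps reconcile the two figures. The 5% baseline CTR is invented, and the assumption that the 35% lift applies relative to uncited listings on the same AI Overview SERPs is one reading of the stat, not the study’s wording:

```python
baseline_ctr = 0.05  # hypothetical 5% organic CTR with no AI Overview

# When an AI Overview appears, average organic CTR drops 61%...
ctr_with_aio = baseline_ctr * (1 - 0.61)

# ...but a brand cited inside the Overview sees roughly 35% more
# organic clicks than an uncited brand on the same SERP.
ctr_cited = ctr_with_aio * 1.35

print(round(ctr_with_aio, 4))  # → 0.0195
print(round(ctr_cited, 4))     # → 0.0263
```

The point of the sketch: the 61% drop and the 35% lift are measured against different baselines, so both statistics can be true at once.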

The real twist, though, is that ranking well in organic doesn’t guarantee you show up in AI. Tom Capper’s research at Moz found 88% of AI Mode citations are NOT in the organic SERP for the same query. Organic and AI are pulling from different source pools. You can be number one in Google and completely invisible in ChatGPT’s answer to the same question.

And the small amount of traffic that does come from AI? It converts at more than quadruple the rate of organic, according to Semrush. These visitors arrive more informed, more intentional, and more ready to buy. Which makes sense, because they’ve already done the evaluation inside the AI interface. By the time they click, they’re just confirming and often converting.

The org chart is the problem

Most companies have SEO reporting to content, PPC reporting to demand gen, and AI search reporting to nobody. BrightEdge found 54% of organizations have handed AI search to the SEO team alone, which is a little like asking your plumber to also handle the electrical work because, hey, it’s all in the same house.

The waste from this setup is real. One branded Performance Max campaign paid roughly $500,000 for clicks that would have come through organic anyway. Google’s own research confirms: when you rank number one organically, only half your paid clicks are truly incremental. The other half? You bought what you already owned.

Meanwhile, McKinsey found that a brand’s own website makes up only 5% to 10% of the sources AI references. AI pulls from Reddit, review sites, affiliates, publishers, and user-generated content. You can have the best SEO program in your category and be completely absent from AI search results because AI is reading what other people say about you, not what you say about yourself.

The unified approach works. Level cut acquisition costs 18% and boosted SEO leads 22% by merging paid and organic for a B2B SaaS client. And we can use tools in our Level Intelligence Suite to connect performance signals across search surfaces. The channels compound each other. Treating them as separate line items on separate P&Ls leaves that compounding on the table.

Three audits you can run Monday morning

You don’t need a six-month transformation to start seeing the gaps. Three lenses, applied to your top 20 keywords, will show you where the opportunities and the waste are hiding.

Lens 1: Where do you actually appear? Check your organic rankings, paid ad coverage, and AI visibility across ChatGPT, Perplexity, and Gemini for the same set of keywords. Semrush has a free AI visibility checker. Most teams have never looked at all three surfaces side by side, and the gaps are almost always larger than they expect.

Lens 2: Where are you paying for traffic you already own? Cross-reference your number-one organic rankings with active PPC bids on the same terms. Start with branded keywords, where the waste is usually largest and the test is cleanest. If you rank first and you’re still bidding, you’re probably buying your own clicks.
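
At its core, Lens 2 is a set intersection. Here’s a minimal Python sketch with invented keyword data (the terms, ranks, and bid list are purely illustrative):

```python
# Hypothetical data: organic rank per keyword, plus the set of terms
# you are actively bidding on in paid search.
organic_rank = {"acme software": 1, "acme pricing": 1, "best crm": 7}
active_bids = {"acme software", "acme pricing", "crm tools"}

# Terms where you rank #1 organically AND still pay for clicks —
# candidates for a pause-and-measure incrementality test.
overlap = sorted(
    kw for kw, rank in organic_rank.items()
    if rank == 1 and kw in active_bids
)
print(overlap)  # → ['acme pricing', 'acme software']
```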

Lens 3: Where is AI ignoring you? Compare your organic rankings with your AI citation presence. Only 11% of domains get cited by both ChatGPT and Perplexity, so strength in one guarantees nothing in the other. And check your robots.txt while you’re at it. If you’re blocking AI crawlers like OAI-SearchBot or PerplexityBot, you’ve pulled yourself off those shelves entirely.
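
The robots.txt part of Lens 3 is easy to script. Here’s a minimal sketch using Python’s standard `urllib.robotparser`; the robots.txt content and URL are hypothetical, while OAI-SearchBot, PerplexityBot, and Google-Extended are the published crawler tokens:

```python
from urllib import robotparser

# Hypothetical robots.txt for a site that blocks two AI crawlers.
ROBOTS_TXT = """\
User-agent: OAI-SearchBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

AI_CRAWLERS = ["OAI-SearchBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_crawlers(robots_txt, url="https://example.com/"):
    """Return the AI crawler tokens that may not fetch the given URL."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not parser.can_fetch(bot, url)]

print(blocked_ai_crawlers(ROBOTS_TXT))  # → ['OAI-SearchBot', 'PerplexityBot']
```

Run this against your own site’s robots.txt and any crawler that appears in the output is a shelf you’ve pulled yourself off of.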

This diagnostic shows you the full picture. What to do about it, the actual unification framework, is what I’m laying out at SMX Advanced.

The window won’t stay open

Generative Engine Optimization (GEO) keyword difficulty currently averages 15 to 20, compared to 45 to 60 for equivalent SEO terms. That gap will close. Once an LLM selects a trusted source, it reinforces that choice across related prompts. The brands getting cited now are training the models to keep citing them. Winner-takes-most dynamics are being baked into the weights.

Many companies are seeing search traffic drop significantly. Those same brands, the ones that get it right, are seeing the inverse when it comes to business growth. Rankings and revenue have decoupled. The brands that win from here are the ones that stopped measuring channels in isolation and started measuring the search experience their customers actually have.

We’re presenting a search unification framework at SMX Advanced in our session, “Organic, paid, and AI search: one strategy to rule them all.” If you want to stop optimizing for three separate channels and start compounding performance across every search surface, join us for the session or come find the Level team at Booth #9.

Remember: The search experience that existed in 2023 is gone. The strategy should be too.


SMX Now: The automation drift and how to correct course

4 May 2026 at 21:19

Automation doesn’t fail on its own — it does exactly what it’s trained to do. The problem is that when Google Ads is fed incomplete, misaligned, or overly broad signals, it can optimize toward the wrong outcome faster than most advertisers realize.

In our second installment of SMX Now, our new monthly series, Ameet Khabra of Hop Skip Media will break down a real account where a 417% jump in conversions turned out to be the wrong kind of success. She’ll use that case study to explain the four key ways automation drift enters an account: signal drift, query drift, inventory drift, and creative drift.

You’ll leave with a practical framework for diagnosing drift early, understanding where human oversight matters most, and managing automation more deliberately so it works toward real business goals — not just platform-reported wins.

Join us May 6 at noon ET.

Save your spot

Google fixes Search Console’s year-long data logging issue – well, kind of…

4 May 2026 at 19:14

Google said it has “resolved” an issue with logging data within Google Search Console reporting. The logging issue ran from May 13, 2025, through April 27, 2026 — about 50 weeks. The resolution did not fix the past data, but it did fix the issue going forward.

What Google said. Here is what Google posted:

“A logging error prevented Search Console from accurately reporting impressions from May 13, 2025 until April 27, 2026. This issue has been resolved. As a result, you may notice a decrease in impressions in the Search Console Performance report. Only impressions and related metrics – CTR and average position – were affected; clicks were not affected by the error, and this issue affected data logging only.”

What was fixed. To be clear, Google has not corrected the data from May 13, 2025, through April 27, 2026; it has only fixed logging going forward. So keep this in mind when reviewing the data in that date range.

John Mueller from Google confirmed on Bluesky that this is only fixed going forward and the old data will not be fixed.

Why we care. When reviewing your Search Console data, note that for about 50 weeks, almost a year, the reports may be off: you may now see a decrease in impressions, which also affects click-through rate and average position data for that period.

Why brand authority beats topical authority in AI search

4 May 2026 at 18:00

There’s a fundamental battle happening in search right now.

  • On one side is topical authority — the darling phrase of every SEO consultant who needs to sell more content.
  • On the other is brand authority — something marketers have talked about for decades, while much of search treated it as optional, vague, or something the brand team could handle after the sitemap was fixed.

Now AI has walked into the room, kicked over the furniture, eaten half the traffic, and exposed the real problem.

Search still matters. The global economy runs on people looking, comparing, buying, and solving problems through it. But the industry has a marketing problem.

And it shows. Too many SEOs have lost the plot on why people choose, remember, trust, search for, recommend, and buy from brands. AI search is making that ignorance harder to hide. That’s why brand authority wins — but not in the way most SEO dashboards suggest.

Topical authority was never supposed to mean content landfill

Before we get to AI, we need to define what topical authority was meant to be. At its best, it’s simple. 

You publish useful work, create evidence, and share expertise. Others cite you, journalists mention you, communities discuss you, and customers search for you. Over time, your brand becomes associated with the topic. That’s authority. It’s also brand building.

The problem is that much of the SEO industry hasn’t sold it that way. In practice, topical authority became a convenient commercial wrapper for content production.

SEO retainers were built around three pillars: technical, content, and links. Technical SEO became more specialized. Links were outsourced, packaged, renamed, earned through digital PR, or bought in one way or another. 

Content, meanwhile, remained the dependable agency engine — easy to sell, scope, and report. Think 4-8 blog posts a month, a topical map, a content hub, a cluster, a pillar page, and another 2,000 words on something nobody asked to read.

This wasn’t always wrong. In the pre-AI search world, content had real labor behind it. A decent article required research, writing, editing, optimization, internal linking, and promotion. That work had value. Good content could rank, attract links, build email lists, support commercial pages, and create some advertising effect through exposure.

Back in the day, we built what were often called power pages — strategic assets designed to earn links, rank, get shared, and pass equity to commercial pages. They had a purpose. They weren’t created just because the spreadsheet had another empty cell.

Topical authority changed that logic. It turned “let’s create something worth citing” into “let’s cover every possible keyword in the topic map and hope Google mistakes volume for expertise.” That was the original sin.

Authority is what others say about you

Authority isn’t created by what you publish on your own site. It’s created when you become a recognized source.

Former Google engineer Jun Wu described this in terms of “mention information” — how search engines analyze natural language, identify topic phrases and sources, cluster related terms, and map associations between sources and topics. 

In plain English, they can recognize when certain brands, people, domains, and entities are repeatedly mentioned in relation to specific topics.

Today, SEOs call that brand co-occurrence. The idea isn’t new. When authoritative sites, journalists, communities, reviewers, experts, and customers consistently mention your brand in relation to a topic, you become associated with it — not because you published hundreds of near-identical articles, but because the wider web treats you as relevant.

Topical coverage is what you say about yourself. Authority is what the market says about you. AI search makes that difference hard to ignore.

The smash burger test

Suppose you want to become an authority in the smash burger industry. You probably don’t, but some topical authority consultant calling themselves a “semantic SEO” is likely pitching it to a fast food brand right now.

An SEO version of topical authority would probably begin with a map:

  • What is a smash burger?
  • Best meat for smash burgers.
  • History of smash burgers.
  • Smash burger recipes.
  • Smash burger toppings.
  • Smash burger glossary.
  • Best smash burger restaurants.
  • How to make a smash burger at home.

There’s nothing inherently wrong with that. If you run a serious smash burger publication, restaurant group, food brand, or equipment business, some of it might be useful. But authority doesn’t come from publishing those pages.

Real authority looks different. You create original data on the fastest-growing smash burger chains. You publish an index of the best-rated smash burger restaurants in the U.S. and U.K. You interview chefs, test meat blends, and produce videos people actually watch. 

You become a source journalists use when covering the category. Food creators reference your data. Restaurant owners subscribe to your newsletter. People search for your brand plus “smash burger report.”

That’s topical authority. It’s also brand authority.

The thin SEO version is publishing thousands of keyword pages and internally linking them until your CMS starts begging for death. The real version is becoming known.

AI has broken the old content economics

The old commercial defense of topical authority was traffic.

Brands didn’t hire search marketers because they had a deep spiritual yearning to become encyclopedias. They hired them for organic revenue growth — to appear when customers searched, and to drive clicks, leads, and sales.

Informational content was sold, in part, as advertising. Someone searches a question, lands on your article, and sees your brand. Maybe they join your email list, return later, or buy.

That model was always more fragile than the industry admitted. Most users don’t sit around thinking about your B2B SaaS platform, your dog food brand, or your running shoe category page. 

Ask someone to name 10 toothpaste brands, and they’ll struggle, despite a lifetime of exposure. Ask them to recall the last ten TikToks they watched, and watch their face collapse.

Advertising works through memory structures, distinctive assets, repeated exposure, and relevance. A single accidental visit to a generic “what is” article was never the brand-building miracle some content marketers claimed.

Now AI has made the economics worse. For many informational searches, answers are increasingly synthesized before the click. From the user’s point of view, that’s often a better experience.

My dad is in his 70s. He loves AI Overviews. He doesn’t want to click through three ad-infested recipe pages, dodge newsletter popups, reject cookies, scroll past a life story, and finally find how long to boil an egg. He wants the answer.

Users aren’t mourning your lost organic session. They’re getting on with their lives. That’s the uncomfortable truth.

If the click disappears, much of the supposed advertising effect of informational content disappears with it — no logo exposure, no distinctive assets, no remarketing pixel, no email capture, and no carefully designed journey. Just your content absorbed into a synthesized answer, and maybe a small source link on the side.

AI citations aren’t the same as human citations

This brings us to another emerging industry obsession: AI citations. 

The small source boxes in ChatGPT, Gemini, Perplexity, AI Overviews, and other AI search experiences are being treated as the new holy metric. Agencies, tools, and consultants are already building around it.

The SEO industry loves a single metric — domain authority, traffic, keyword positions, share of voice, and now AI visibility. The problem is that an AI citation isn’t the same as a human citation.

An AI citation is often a helpful link — a reference, a retrieval artifact. It’s directionally useful. It can show what sources a system uses to support an answer, and whether your content is accessible, relevant, and being surfaced in certain contexts.

But it’s not the same as:

  • A journalist choosing to cite your research. 
  • A customer recommending you in a forum.
  • A creator reviewing your product.
  • A trade publication naming your brand as an expert source.

Human citations are evidence of market recognition. AI citations are evidence of machine retrieval. Don’t confuse the two.

The goal isn’t to be scraped. It’s to be recommended.

Brand search is the cleaner signal

If you want a better proxy for whether your authority is growing, look at brand search.

People search for brands they know, are considering, have bought from, or were recommended. Brand search isn’t perfect, but it’s much closer to commercial reality than counting how often a chatbot footnotes your blog post.

That’s why share of search matters. It gives you a directional view of market demand and mental availability. If more people are searching for your brand relative to competitors, something is happening. Your advertising, PR, product, reviews, word of mouth, content, partnerships, social presence, and customer experience are creating demand.
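Share of search is simple arithmetic. A minimal sketch, assuming you already have monthly brand-query volumes from a keyword tool (the brand names and numbers below are invented for illustration):

```python
def share_of_search(brand_volumes):
    """Each brand's share of total category brand-search volume."""
    total = sum(brand_volumes.values())
    return {brand: vol / total for brand, vol in brand_volumes.items()}

# Hypothetical monthly brand-query volumes for one category.
volumes = {"acme": 6000, "globex": 3000, "initech": 1000}
shares = share_of_search(volumes)  # acme holds 0.6 of category brand search
```

Track this monthly; the trend over time matters more than any single reading.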

This is where the “but this is just SEO” crowd starts clearing its throat.

It’s not “just SEO.” Or rather, it’s only SEO if you define it so broadly that it includes every activity that might influence a search result. That’s strategic ambiguity. It lets everyone claim they were doing the future all along.

Most SEO retainers weren’t building brand fame. They were producing content, fixing technical issues, buying or earning links, and reporting rankings. Sometimes it worked — sometimes very well. But the average topical authority strategy wasn’t a sophisticated brand visibility program.

Traditional SEO still matters

None of this means you abandon traditional SEO. Buyer-intent rankings, category pages, product pages, local pages, technical SEO, internal linking, structured data, reviews, and crawlability matter. 

Search still works as a shelf. Many brands are discovered for the first time in supermarkets. The same is true in Google. If someone searches “emergency locksmith near me,” “best trail running shoes,” or “meeting intelligence software,” you want to appear.

Being found still matters, but it’s not the same as being recommended. Traditional SEO helps you get found, while brand authority drives recommendation. 

AI search shifts the balance toward the latter, synthesizing options, reducing uncertainty, and often naming brands, products, and solutions directly.

The new job is meaningful visibility

Semrush accidentally said the quiet part out loud with its April Fools’ “Brand Visibility Expert” stunt, where employees changed their titles on LinkedIn. It was a joke, but not entirely. 

The company later described AI visibility tools that track brand visibility, mentions, prompts, perception, and competitor presence in AI search. That’s where the market is going.

The future of search marketing isn’t just search engine optimization. It’s brand visibility across the network.

That means increasing meaningful visibility in the places where humans and AI systems encounter information: 

  • Search engines.
  • AI answers.
  • Review sites.
  • Communities.
  • YouTube. 
  • Reddit.
  • Trade media.
  • News sites.
  • Podcasts.
  • Influencers.
  • Comparison pages.
  • Customer reviews.
  • Social platforms.
  • Partner ecosystems.
  • Your own site.

The web is now the surface, and your website is just one part of it. This is the shift many SEOs don’t want to face. Many are used to optimizing owned pages for search engines. 

The next era is about optimizing a brand’s presence across the web. That requires different work.

Start with positioning

If you want to build brand authority in AI, start with positioning.

  • Who are you for?
  • What problem do you solve?
  • How do you solve it better?
  • What should the market associate with you?
  • What proof supports that claim?

These aren’t fluffy brand questions. They’re search questions now.

  • A locksmith isn’t only an emergency locksmith. They may install commercial locks, repair window locks, replace garage locks, secure doors, and provide security advice. 
  • A running shoe retailer may want to be known for trail running expertise, fast delivery, wide range, gait analysis, competitive pricing, or specialist advice. 
  • A SaaS platform may want to be known for extracting meeting intelligence that helps sales teams improve conversion.

These are performance attributes — the reasons people choose you. Your search strategy should reinforce them.

If your pet food brand specializes in sensitive stomachs, you need to be visible around dog dietary problems — not just on your blog, but in vet commentary, buyer guides, reviews, creator content, journalist coverage, customer stories, comparison pages, and data studies. 

These are the places where humans and AI systems learn what’s credible. That’s brand authority.

Create things worth being cited by humans

The rule for AI-era content is simple. Every piece of content should have real-world marketing value at publish.

If one person encounters it, they should understand your brand better, feel more positively about it, remember something useful, or be more likely to trust you.

If content only makes sense as an SEO asset after it ranks, it’s probably weak.

This means you stop creating “dead” content. Instead:

  • Create original research. 
  • Publish category data. 
  • Build useful tools. 
  • Share expert commentary. 
  • Produce strong product comparisons. 
  • Release reports journalists can cite. 
  • Create opinionated guides. 
  • Review products properly. 
  • Explain problems better than competitors. 
  • Make videos people want to watch. 
  • Turn internal data into public insight. 
  • Build assets that earn links and mentions.

Do fewer things. Make them better. Promote them harder.

Brands have limited budgets — smaller ones have even less room for waste. Spending thousands on a content library that repeats known information may be less effective than using the same budget to create one excellent data study, seed it with journalists, get creators talking, earn reviews, improve product pages, and run ads that make people search for your brand.

Ask yourself, “What use of this budget is most likely to increase brand search, links, mentions, reviews, and recommendations?”

Fitness times visibility equals success

A useful idea from network science applies here: success is driven by fitness multiplied by visibility.

  • Fitness is your ability to outperform alternatives — product, service, price, expertise, speed, range, design, convenience, proof, reviews, and customer experience.
  • Visibility is how often and how meaningfully the market encounters those signals.

Fitness without visibility is a brilliant brand nobody knows. Visibility without fitness is hype — and it usually collapses. 
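The multiplicative framing is easy to make concrete. A toy sketch, with invented 0-1 scores for two hypothetical brand situations:

```python
def success_score(fitness, visibility):
    """Toy multiplicative model: if either factor is near zero,
    the product collapses no matter how strong the other is."""
    return fitness * visibility

# Hypothetical brands: a great product nobody knows vs. well-promoted hype.
unknown_gem = success_score(fitness=0.9, visibility=0.1)  # 0.09
hyped_dud = success_score(fitness=0.2, visibility=0.9)    # 0.18
```

Note that the hyped brand scores higher in the short term, which is exactly why hype can outrun fitness before it collapses.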

That’s how preferential attachment starts. Brands that are talked about get talked about more. Brands that are searched get searched more. Brands that earn links earn more links. Brands that become default sources are cited more often. Brands that sell more get more reviews, more mentions, more data, and more presence.

AI accelerates this dynamic, consuming the web faster than humans and reinforcing those signals at scale. If your brand has dense, consistent, and credible associations with the problems you solve, you reduce uncertainty that you’re a good recommendation.

What actually wins in AI search

Brand authority wins in AI — because real topical authority was always brand authority.

The version of topical authority that deserves to survive is the one where a brand becomes a genuine source in its category — creating useful information, earning mentions, building demand, getting searched, getting cited, and becoming associated with the problems it solves.

The version that deserves to die is the one where a brand publishes endless keyword-targeted sludge and calls the result authority.

AI hasn’t killed SEO. It’s killed the illusion that mediocrity deserves traffic.

The search marketers who win next won’t be the ones who publish the most. They’ll be the ones who make brands more meaningfully visible across the internet. They’ll understand positioning, PR, content, technical SEO, reviews, creators, category demand, links, mentions, and brand search as one connected system.

The goal isn’t to optimize for search engines, but for the network they use to understand the world.

Build the brand. Make it visible. Make it worth recommending. Everything else is just content with delusions of grandeur.

7 tools for doing AEO right now

4 May 2026 at 17:00

The other day, I was putting together my version of a Lumascape of answer engine optimization (AEO) tools — I’m kidding, my computer doesn’t have that kind of bandwidth.

Instead of mapping every tool — which would be outdated in minutes — I’m focusing on the ones I actually use to grow clients’ AI search presence.

This is a deliberately short list: four tools I rely on, plus three I’m testing before adding them to my team’s stack.

1. AI assistants (ChatGPT, Claude, Perplexity)

Used thoughtfully, large language model (LLM) assistants are research and analysis tools in their own right. For AEO work specifically, they serve several distinct purposes: 

  • Competitive landscape research.
  • Content gap analysis.
  • Prompt testing.
  • Entity and topical coverage audits.
  • Structured content drafting. 

The key distinction from passive use is intentionality — using these tools with a defined AEO research methodology rather than ad hoc prompting.

Why they’re essential

AEO requires a fundamental understanding of how AI systems process and represent information. The most direct way to develop that understanding is to work regularly and analytically within those systems. 

Querying AI assistants with the same prompts your target audience uses — and carefully analyzing what they return, what sources they cite, what entities they associate, and how they structure answers — gives you peerless ground-level intelligence.

Competitive strengths

Each platform has its own strengths worth noting:

  • ChatGPT is widely used and offers broad general knowledge synthesis, making it useful for understanding how mainstream AI handles queries in your category.
  • Claude tends toward more nuanced, caveated responses and is strong for analytical tasks.
  • Perplexity is citation-heavy by design and particularly valuable for AEO research precisely because it surfaces its sources explicitly. You can see in real time which domains are being pulled and why.

What you can’t do without them

Firsthand research on your brand’s current AEO status, which includes:

  • Manual prompt testing: See how your brand and content are being represented.
  • Competitive research: Query AI systems with category-level questions to see which competitors appear and how they are framed.
  • Topical gap analysis: Identify questions AI systems answer where your brand is absent.
  • Structural content analysis: Understanding the answer formats (lists, definitions, comparisons, how-tos) that AI systems prefer for your query types.

Caveats

AI assistant outputs are non-deterministic and vary by platform, model version, session context, and even time of day. Manual prompt testing is qualitative and difficult to scale. These tools are best used to build intuition and generate hypotheses, which should then be validated with quantitative data from platforms like Profound. 

Also worth noting: querying AI systems for competitive research can quickly become a rabbit hole, so before you truly dig in, build a structured testing framework and stick to it.
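One way to make prompt testing structured rather than ad hoc is to re-run each prompt several times and track how often your brand appears. A minimal sketch; the sample answers below are invented, and in practice you would paste in logged AI responses:

```python
def mention_rate(responses, brand):
    """Fraction of sampled AI answers that mention the brand, case-insensitively."""
    if not responses:
        return 0.0
    hits = sum(brand.lower() in answer.lower() for answer in responses)
    return hits / len(responses)

# Hypothetical answers collected by re-running one prompt three times.
samples = [
    "Top picks in this category include Acme and Globex.",
    "Many users recommend Globex for this use case.",
    "Acme is a popular choice for sensitive stomachs.",
]
rate = mention_rate(samples, "Acme")  # 2 of 3 answers mention Acme
```

Because outputs are non-deterministic, a rate computed over repeated runs is a steadier signal than any single answer.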

2. Profound 

Profound is purpose-built AEO intelligence that monitors how AI platforms (ChatGPT, Perplexity, Google AI Overviews, Claude, etc.) discover, surface, and cite your brand and content. 

It also tracks brand mention frequency and sentiment, competitors’ share of voice, and the specific prompts or query types that trigger your content to appear in AI-generated answers.

Why it’s essential

If you want to understand where your brand stands in the AI answer ecosystem, it’s currently the most direct way to get that data. It shifts the question from “where do we rank?” to “when AI answers a question in our category, are we in the answer?”

Competitive strengths

The cross-platform coverage is the tool’s most distinctive feature. Rather than measuring a single AI engine in isolation, it provides a comparative view across the major platforms simultaneously. The competitive benchmarking functionality is particularly useful: you can see both your own AI citation share and how it stacks up against named competitors. It’s the kind of context that transforms data into strategy.

What you can’t do without it

Some fundamental capabilities, like:

  • Quantifying your brand’s presence in AI-generated answers at scale.
  • Tracking citation share over time and across platforms.
  • Identifying which content types and topics drive AI mentions — and which competitors are winning the queries you’re losing.

It’s a pretty expensive tool. If you want to justify the expense to your C-suite, tell them, “This will show us exactly where we’re losing to {most hated competitor}.”

Caveats

The tool is evolving quickly, which it needs to do as the AEO landscape morphs in real time. The data it surfaces reflects AI outputs at the time the query is made. Outputs are inherently variable because AI systems don’t return the same answer to the same prompt every time.

Treat metrics as directional signals and trend data rather than precise, static rankings. It also won’t tell you why you’re being cited or not. That’s on you and your team to analyze.

3. Google Trends and Google Keyword Planner

Google Trends tracks the relative search interest for queries over time, across geographies, and in comparison to related terms. Google Keyword Planner provides search volume estimates and demand forecasting, originally designed for paid search planning but equally useful for organic and AEO strategy.

Why they’re essential

AEO strategy lives and dies by understanding demand signals. Before optimizing content to appear in AI answers, you need to know what questions people are actually asking, how that demand is trending, and whether the topic has enough volume to warrant investment. 

Google’s tools remain the most reliable source of this data at scale — and crucially, they reflect the same underlying search behavior that feeds into AI engine training data and query patterns.

Competitive strengths

Google Trends is uniquely powerful for directional trend analysis. It doesn’t give you absolute volume, but it gives you relative momentum — which is often more strategically valuable when you’re trying to anticipate where audience interest is heading rather than just where it has been.

For AEO specifically, rising query trends can signal emerging answer opportunities for you to address before your competitors do. 

In my experience, Keyword Planner’s forecasting features are underused. They can help you prioritize content investment based on projected demand rather than historical data alone.

What you can’t do without them

Build a truly dynamic AEO strategy in which you:

  • Understand whether demand for a topic is growing, stable, or declining before building content around it.
  • Identify seasonal patterns that should shape content publishing calendars.
  • Surface related queries and rising breakout terms that expand your AEO content coverage.
  • Validate whether a topic has enough search demand to justify the content investment.

Caveats

One caveat you may have already spotted: neither tool reflects AI-native query behavior directly. They measure traditional search, not prompts submitted to ChatGPT or Perplexity.

As information-seeking behavior shifts toward AI interfaces, these tools will increasingly undercount true demand. Use them as a strong proxy and directional guide, not as a complete picture.

Worth noting: Keyword Planner also requires an active Google Ads account, and volume estimates in low-competition or niche categories can be imprecise.

4. Google Search Console and Google Analytics

Google Search Console (GSC) provides direct data on how your site performs in Google Search: which queries trigger impressions, click-through rates, average positions, and indexing status. 

Google Analytics 4 (GA4) tracks on-site behavior — how users arrive, what they do, how long they stay, and where they exit — including referral traffic sources that reveal whether visitors are arriving from AI-adjacent platforms.

Why they’re essential

For AEO practitioners, these tools serve critical diagnostic functions.

GSC tells you whether the content you’re optimizing for AI citation is also performing in traditional search, which matters because Google AI Overviews and traditional organic results draw from overlapping content pools.

GA4’s referral traffic data is increasingly important for detecting direct traffic from AI platforms: as users click through citations in tools like Perplexity or ChatGPT’s browsing mode, that activity shows up as referral or direct traffic. That’s worth segmenting and monitoring, even if, given the scorching rise of zero-click activity, it paints a very incomplete picture of your AEO impact.
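Segmenting that traffic usually starts with a referrer allowlist. A minimal sketch; the hostname list is an assumption you would maintain yourself as new assistants appear:

```python
from urllib.parse import urlparse

# Assumed AI-platform referrer hostnames; extend as the ecosystem changes.
AI_REFERRERS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_ai_referral(referrer_url):
    """True when a session's referrer host matches a known AI platform."""
    host = urlparse(referrer_url).netloc.lower()
    return host in AI_REFERRERS

# Hypothetical referrer URLs pulled from an analytics export.
sessions = [
    "https://chatgpt.com/",
    "https://www.google.com/search?q=test",
    "https://perplexity.ai/search/abc",
]
ai_sessions = [s for s in sessions if is_ai_referral(s)]  # 2 of 3 match
```

In GA4 itself, the equivalent is a custom channel group or segment built on the same session source hostnames.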

Competitive strengths 

GSC’s query data is irreplaceable. No third-party tool has access to the same level of Google-sourced search performance data. The ability to see exactly which queries are driving impressions (even without clicks) is foundational for identifying content that has topical authority but may not be converting visibility into AI citations. 

GA4’s cross-channel attribution and audience analysis capabilities help you understand where AEO-driven traffic comes from and what that traffic does when it arrives — which is the commercial case for the discipline.

What you can’t do without them

Develop a true understanding of AEO business impact — and AEO blockers — by:

  • Measuring whether your AEO content investments translate into actual traffic and engagement.
  • Identifying content with high impression share but low CTR — a common signal of AI Overview cannibalization.
  • Monitoring referral traffic from AI platforms as that ecosystem matures.
  • Diagnosing indexing or crawlability issues that prevent AI systems from accessing your content.
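The impression-versus-CTR check above is easy to script against a GSC performance export. A minimal sketch; the thresholds and rows are invented and should be tuned per site:

```python
def flag_low_ctr(rows, min_impressions=1000, max_ctr=0.01):
    """Flag pages with many impressions but few clicks, one possible
    symptom of queries being answered before the click."""
    flagged = []
    for page, impressions, clicks in rows:
        ctr = clicks / impressions if impressions else 0.0
        if impressions >= min_impressions and ctr <= max_ctr:
            flagged.append((page, impressions, round(ctr, 4)))
    return flagged

# Hypothetical GSC export rows: (page, impressions, clicks).
rows = [
    ("/guide/how-long-to-boil-an-egg", 50000, 120),  # 0.24% CTR
    ("/pricing", 4000, 300),                         # 7.5% CTR, fine
]
suspects = flag_low_ctr(rows)  # only the guide page is flagged
```

Flagged pages are candidates for investigation, not proof of cannibalization; compare against historical CTR before acting.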

Caveats

GSC data has well-documented limitations: it samples at scale, attribution can be murky, and data is typically available with a 48-72 hour lag. Critically, it only reflects Google. It tells you nothing about how you perform in Bing-powered AI search or standalone AI platforms. 

GA4 still has UX rough edges, so you’ll need to confirm that your event tracking and conversion configuration is solid before drawing strategic conclusions from the data.

Rapid-fire roundup 

That shortlist still leaves, oh, thousands of tools to consider. I recommend putting these on your radar and testing them to gauge their value as the AEO ecosystem develops.

5. AI Trust Signals

AI Trust Signals focuses on the credibility and trustworthiness signals that influence whether AI systems choose to cite a source.

This is an emerging and underexplored dimension of AEO: it goes upstream from content relevance and helps brands understand whether an AI system “trusts” a domain enough to surface it as an authoritative reference. It’s worth monitoring as the understanding of AI citation mechanics matures.

6. Ahrefs

Ahrefs is a mature SEO platform with deep backlink analysis, content gap tooling, site auditing, and keyword research capabilities. 

Its relevance to AEO is primarily indirect, but it’s significant: authority signals, including referring domain quality and topical authority depth, are widely believed to influence AI citation likelihood. Ahrefs is a benchmark tool for understanding and building that authority infrastructure.

Its Content Explorer is also a practical tool for identifying high-performing content in your category that AI systems are likely to draw from.

7. Roadway AI

Roadway AI positions itself as an AI-native platform focused on scaling growth marketing activities. Its value for AEO lies in building agents that help attribute AEO signals to revenue, so you can better understand impact.

As a newer entrant, it’s worth evaluating as part of a toolkit audit, especially if you’re looking for tooling built specifically for AEO use cases. The category is moving fast, and platforms like Roadway AI may gain significant mindshare within 12 months, which also means more competitors are coming soon.

The reality of AEO tools: Fast-moving and imperfect

AEO tooling is still catching up to AEO as a discipline, which will likely be the dynamic for the next few years, at least.

Everything is changing so fast, and AI-driven discovery is evolving as users adopt new behaviors that vary by vertical. What matters is consistently applied measurement, strong analysis, and testing that lead to actionable insights.

You won’t get your setup perfect. Like much of marketing, solidly directional is probably as good as you’re going to get. With any tool, if you can explain and measure how it improves your AEO efforts, that’s a great start. 

Before you sign any contracts, see if you can find an industry colleague with real-life experience using the tool, and ask them for their take. Unless they’re staunch advocates, chances are you can either find an alternative that does the same thing better or cheaper, or you can wait another month for one to emerge.

Why AI visibility starts before search and ends with citations

4 May 2026 at 16:00

The conversation has shifted. We’re spending less time optimizing for clicks and more time trying to fix the AI ROI story. AI now sits at the center of discovery, shaping what gets seen, summarized, and cited.

Here’s what’s working right now, what your peers are doing, and why SMX Advanced will feel different this year.

The SparkToro wake-up call: Influence happens everywhere

The foundation of any serious 2026 content strategy has to start with Rand Fishkin’s landmark March 2026 study, “Influence Happens Everywhere,” an analysis of the 5,000 most-visited sites on both mobile and desktop.

The finding that rattled the industry: while Google still commands 73% of search traffic, search itself is merely a response to influence created elsewhere.

People don’t wake up and search for a brand in a vacuum. They read, watch, and listen across a fragmented web of news, social media, and niche communities before they ever hit a search bar.

AI tools, despite their rapid growth, still account for a fraction of total web visits compared to the “big incumbents.” But the trajectory is unmistakable.

The fundamental problem with attribution in 2026 is that search gets over-credited because it captures demand at the finish line, while the fragmented channels — email, news, specialized content — get under-credited for creating that demand in the first place. 

When creating content, your job is to win the influence phase so thoroughly that when a user eventually turns to an AI assistant or a search bar, your brand is the only logical answer.

That framing is the strategic backbone behind sessions at the upcoming SMX Advanced in Boston, June 3-5, and the lens through which your entire editorial calendar should be rewritten.

Your customers search everywhere. Make sure your brand shows up.

The SEO toolkit you know, plus the AI visibility data you need.

Start Free Trial
Get started with
Semrush One Logo

What your Search Engine Land colleagues are already doing

Before we discuss tactics, it’s worth pausing to note that this publication’s own contributor base has been sounding the alarm in complementary ways. Read them together and a clear picture emerges.

Dave Davies, principal SEO manager at Weights & Biases and a regular SMX Advanced speaker, published a rigorous piece in December 2025, “Mentions, citations, and clicks: Your 2026 content strategy.” 

Drawing on Siege Media’s two-year content performance study covering more than 7.2 million sessions, Grow and Convert’s conversion research, and Seer Interactive’s AI Overview findings, Davies made the case that the metrics we’ve lived by — impressions, sessions, CTR — “no longer tell the full story.” 

Mentions, citations, and structured visibility signals, he argued, are becoming the new levers of trust and the path to revenue.

Carolyn Shelby, who appeared in a recent SMX Munich 2026 recap for her session “Inside Google’s Head,” crystallized what many of us have only half-articulated: AI doesn’t discover new brands — it selects from known entities. 

The implications are stark. If you haven’t built entity recognition across the web’s key reference points — Wikipedia, Reddit, LinkedIn, authoritative press coverage — you don’t get selected. 

My own October 2025 piece for this publication compared how ChatGPT, Perplexity, Gemini, Claude, and DeepSeek differ in their data sources, live web use, and citation rules. The conclusion I reached then is truer today: a single-platform AI strategy isn’t a strategy. Each model has different retrieval logic, different trust signals, and different recency weighting. 

Jordan Koene made the same point in January 2026, noting that different LLMs win different jobs. This heterogeneity is the fundamental reason why “write good content” is both correct and insufficient as advice.

What ‘full-stack content’ actually means

In 2024, we were impressed if an AI tool could write a decent 500-word blog post. Today, writing is the least interesting thing AI does.

Jasper’s 2026 Enterprise Suite is a useful illustration. It doesn’t just draft text; it:

  • Pulls real-time performance data from Google Search Console.
  • Identifies content gaps where competitors are gaining ground.
  • Generates a multimodal package: a 1,500-word deep dive, three vertical videos for YouTube Shorts, and custom infographics, all calibrated to a brand-voice model trained on your last five years of successful campaigns.

We have moved from “Help me write this” to “Help me dominate this topic.”

But tools don’t solve strategy problems. The harder question is “what should the content actually say?” AI can’t produce the original research, the proprietary case study, or the hard-won perspective that makes an LLM choose you over a dozen lookalike alternatives.

This is why the most interesting SMX Advanced session on content this year may be the one by Purna Virji of LinkedIn, who opens the conference with a keynote on fixing the broken AI ROI story before budgets get cut.

Her argument — that AI investment must generate measurable business outcomes “at the P&L level,” not just activity, efficiency, or content volume — is a direct challenge to teams that have been celebrating output metrics while their revenue dashboards flatline.

Google Vids and the democratization of video: A genuine inflection point

Perhaps the most significant platform shift for content creators in 2026 was Google moving Google Vids out of its Workspace-only silo. You can now create, edit, and share videos at no cost directly within the Google ecosystem, powered by the Veo 3 generative model.

For years, video production was protected by a high barrier to entry: expensive tools, specialist skills, and days of editing time. Google Vids collapses that barrier. Drop a Google Doc or a URL into the “Help me create” prompt, and you get a full-motion storyboard with AI-generated voiceovers, licensed music, and transitions in minutes.

The practical consequences are arriving fast:

  • Small agencies are now producing video-first content calendars that previously required five-figure budgets. The “if only we had video” excuse has expired.
  • Hyper-localization is becoming a baseline expectation. Using Vids’ automated dubbing and visual swapping, a single “hero” video can be localized for 20 different markets in an afternoon.
  • AI-generated summaries are already threatening video metadata. YouTube recently tested swapping video titles for AI-generated summaries. Brands that have not invested in clear entity signals and structured descriptions may soon find their video content renamed by an algorithm — not a person.

The strategic implication is the same as it was for text: AI tools lower the floor but raise the bar. Every competitor now has access to cheap video. But who has something worth saying in that video?

GEO, AEO, and the language problem

Depending on which Search Engine Land article you read in the past few weeks, the dominant framework for surviving this shift is either generative engine optimization (GEO) or answer engine optimization (AEO).

A growing number of contributors argue these terms are marketing noise for what is, at bottom, just good search everywhere optimization plus structured data plus earned media.

That debate is genuinely worth having, and it will be had at SMX Advanced. But for the practitioner who needs to make decisions next week, here’s what the evidence actually supports:

  • eMarketer’s Nate Elliott put it plainly in a recent FAQ: “Almost every GEO response is different from every other GEO response.” Between 40% and 60% of cited sources change month-to-month across Google AI Mode and ChatGPT, making AI visibility far less stable than organic search rankings. That volatility is the real risk, not the terminology debate.
  • Similarweb’s 2026 GenAI Brand Visibility Index, reported by Digiday, found that major publishers like Reuters and The Guardian receive less than 1% of referral traffic from AI platforms despite being frequently cited. Yet, The Washington Post found that visitors arriving from AI platforms convert to subscriptions at four to five times the rate of traditional search visitors. The volume-versus-value tension has never been more acute.

The practical translation of all of this:

  • In 2006, we optimized press releases for keyword density. In 2026, optimize for entity association: linking your brand to specific solutions in the AI’s knowledge graph.
  • Long-form blogs become modular content: Snippets, FAQs, and data tables designed for “chunk-level” ingestion by fetcher bots.
  • Gated white papers become open data: Making unique research crawlable so AI credits you as the source in an overview, not a competitor who summarized your findings.
  • Your robots.txt file now has strategic consequences: Allowing OAI-SearchBot but blocking GPTBot is a choice — one that determines whether you show up in real-time AI search citations versus model training data.
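In robots.txt terms, that choice can be expressed in a few lines. A sketch only — verify the current crawler names and behavior in OpenAI's documentation before relying on it:

```
# Allow OpenAI's live search crawler so pages can be cited in real time
User-agent: OAI-SearchBot
Allow: /

# Block OpenAI's training-data crawler
User-agent: GPTBot
Disallow: /
```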

Get the newsletter search marketers rely on.


The human premium isn’t a platitude

As AI-generated content reaches its peak volume, the value of the human voice has skyrocketed — but not for the reasons most think-piece writers suggest.

The standard argument runs like this: 

  • Audiences can smell AI slop.
  • Authentic human writing wins. 

That’s partially true, but it understates the mechanism. The deeper reason human-authored content is winning in AI-mediated search is structural. 

Human authors who’ve built genuine reputations across years of bylined, cited, and cross-referenced work have, in effect, built entity graphs that AI systems can navigate. That isn’t something a prompt can replicate.

The classic example: an AI-generated 2026 review of a new electric vehicle might be factually flawless, listing every spec and battery range. But it loses to a human-authored piece that says, “I drove this through a New England blizzard and the door handle froze shut.” 

AI can’t freeze. It can’t feel frustration. It can’t have a bad morning. Those human frictions are now genuinely valuable SEO assets — not because they’re charming, but because no language model can fabricate them with any credibility.

Readers, trained by years of exposure to AI content, have developed a reliable instinct for the difference.

The Siege Media data Davies cited adds a quantitative dimension: across 7.2 million sessions, the content that earned sustained citations and conversions shared a consistent profile — original data, expert voice, and clear structure that an AI system could extract and attribute. Volume without those properties is, as the headline puts it, just noise.

What to watch at SMX Advanced 2026 — and what it tells us about where this is going

The SMX Advanced agenda is the clearest available signal of where the practitioner community thinks the critical problems are right now. A few sessions deserve particular attention from anyone focused on content creation.

Virji’s keynote, “Your AI ROI story is broken: How to fix it before budgets get cut,” opens Day 2. Virji isn’t arguing that AI investment is wrong. She’s arguing that almost every organization is measuring it incorrectly — and that the correction required is organizational, not tactical.

Davies’ session, “Predicting and influencing AI citations with retrieval signals,” on June 4, is the direct technical counterpart to the strategic framing above. If Virji is asking “what does success mean,” Davies is asking “how do you engineer it.” 

SMX Master Classes ran in April, and SMX Next follows in November. If there’s a throughline across the entire 2026 SMX calendar, it’s this: the search marketing community has collectively decided that the era of isolated channel optimization is over. Content, paid, technical, and brand are now one discipline, or they are failing disciplines.

What you need to actually do in the second half of 2026

Broad strategic advice is easy to nod at and ignore. Here is the specific and uncomfortable version:

  • Audit your AI visibility before you touch your content: Query ChatGPT, Claude, Copilot, Gemini, and Perplexity with the prompts your customers actually use. Note which brands appear. Note which sources get cited. If you’re not among them, adding more content isn’t the first fix — fixing your entity signals is.
  • Stop treating your unique research as a lead-generation gate: Crawlable, citable original data earns AI attribution. A PDF behind a form wall earns nothing except a diminishing number of direct downloads as discovery migrates to AI interfaces.
  • Invest in community platforms as a first-party strategy, not an afterthought: LLMs pull heavily from Reddit, YouTube, and Wikipedia. eMarketer’s Max Willens has noted that Reddit alone has 100 million daily active users generating brand conversations. Your brand’s absence from those conversations isn’t neutral. It creates a vacuum that your competitors or your critics will fill.
  • Optimize for citability, not just rankability: The new KPI isn’t the visit — it’s the attribution. If an AI Overview uses your data but doesn’t name your brand, you’ve been mined, not cited. Use clear entity markup, structured FAQ sections, and “quotable” conclusions that make it easy for an LLM to attribute rather than anonymize.
  • Diversify your robots.txt strategy intentionally: Different bots serve different purposes. Allowing OAI-SearchBot (real-time citation) while blocking GPTBot (model training) is a legitimate strategic choice. Most organizations have not made it deliberately. Make it deliberately.
  • Measure differently: The eMarketer-recommended framework allocates 40% of your optimization budget to core SEO fundamentals, 25% to digital PR, 20% to data and reporting, 10% to training, and 5% to experimentation. If your current allocation looks nothing like that, the gap explains more about your AI visibility struggles than any content audit will. So, combining SEO and PR is even more important today than it was back in the old days when I started speaking and writing about search.

The bots are crawling: Are you worth citing?

The age of the proxy is over. You can no longer hide behind a ghostwriter or a simple prompt and expect to build a brand. But the deeper truth — the one that doesn’t make it into most AI content trend pieces — is that this transformation benefits people who’ve been doing the hard work all along.

If you’ve been building genuine expertise, publishing original data, earning bylines in authoritative publications, and cultivating real presence in the communities where your customers actually talk — then you already have most of what you need. The AI infrastructure of 2026 is, in many ways, a system that rewards exactly the things good content has always required.

The difference is that the competition is now generating plausible-sounding content on a scale that would have been impossible to imagine four years ago. Being good isn’t enough to stand out. 

You have to be citable, structured, and present in all the right places at precisely the right time — which is a harder, more interesting, and ultimately more durable strategic problem than keyword density ever was.

See you in Boston.

Ask.com shuts down after over 25 years

4 May 2026 at 00:27

Ask.com, formerly Ask Jeeves, launched on June 3, 1996, before Google existed, and shut down on May 1, 2026, nearly 30 years later.

Ask.com now displays a farewell page that reads:

Every great search
must come to an end.
As IAC continues to sharpen its focus, we have made the decision to discontinue our search business, which includes Ask.com. After 25 years of answering the world’s questions, Ask.com officially closed on May 1, 2026.

“To the millions who asked…”
We are deeply grateful to the brilliant engineers, designers, and teams who built and supported Ask over the decades. And to you—the millions of users who turned to us for answers in a rapidly changing world—thank you for your endless curiosity, your loyalty, and your trust.

Jeeves’ spirit endures.

Ask.com was long known as an answer engine; in the early days, you addressed your questions to the Jeeves butler character. With AI spawning a new wave of answer engines, Ask.com could have brought its own unique flavor to the category. But with intense competition and a tougher market, IAC, Ask.com’s parent company, decided to shut it down.

Ask.com will always have a place in the search marketing industry, and Ask, including Jeeves, will be missed.

Before yesterdaySearch Engine Land

Microsoft Ads adds deeper reporting to Performance Max placements

1 May 2026 at 22:36

Microsoft Advertising is expanding its Performance Max reporting with publisher-level conversion and spend data — giving advertisers more visibility into where results are actually coming from.

What’s happening. According to Microsoft Ads product liaison Navah Hopkins, the PMax Website Publisher URL report now includes conversion and spend metrics, moving beyond basic placement visibility into actionable performance data.

This gives advertisers clearer insight into which placements are driving real outcomes — not just impressions or clicks.

Why we care. This update gives advertisers visibility into which placements are actually driving conversions and spend — not just impressions. That means better optimization decisions, from scaling winning inventory to cutting wasted spend. It also makes it easier to trust and justify Performance Max performance with concrete data, rather than relying on aggregated reporting.

How advertisers can use it. The update opens up several practical use cases. High-performing placements can now inform Audience Ads strategies, such as building remarketing campaigns or impression-based audiences from winning inventory.

At the same time, advertisers can identify poor-fit placements and exclude them using account-level URL exclusion lists, helping protect brand safety and improve efficiency.

Between the lines. This is another step toward making automated campaigns more transparent. Rather than replacing control entirely, platforms are starting to give advertisers clearer signals on what’s working — and where to act.

What to watch:

  • Whether this level of transparency expands further across PMax reporting
  • How advertisers balance automation with manual optimization
  • If similar reporting features roll out across other platforms

Bottom line. With conversion and spend data now visible at the placement level, Microsoft is making Performance Max a little less of a black box — and a lot more actionable.

Google Ads API v20 sunset set for June 10

1 May 2026 at 20:34

Google is enforcing a hard cutoff for older API versions, meaning advertisers and developers who don’t upgrade risk losing access to critical campaign management tools.

What’s happening. Google Ads API v20 will officially sunset on June 10, 2026. From that date onward, all requests to v20 will fail, requiring migration to a newer version to maintain uninterrupted API access.

Why we care. If you rely on the Google Ads API and don’t upgrade in time, automated workflows — including reporting, bidding and campaign management — could suddenly stop working. This could lead to data gaps, performance issues and operational disruption. Migrating early ensures continuity and avoids last-minute fixes that can impact campaign performance.

What to do. Google is urging users to upgrade as soon as possible and provides resources like release notes and upgrade guides to support the transition. Developers can also use the Google Cloud Console to review recent API activity, including which methods and versions their projects are calling.

Between the lines. API sunsets are routine, but the impact can be significant for advertisers relying on custom scripts, tools or third-party platforms. Missing the deadline could disrupt reporting, bidding or campaign automation workflows.

The bottom line. This is a firm deadline with real consequences: upgrade to a newer Google Ads API version before June 10 or risk losing access entirely.

How to build SEO agent skills that actually work

1 May 2026 at 18:00

I’ve built 10+ SEO agent skills in 34 days. Six worked on the first try. The other four taught me everything I’m about to show you, starting with the folder structure most LinkedIn posts about AI SEO skills gloss over.

What makes these agents reliable isn’t better prompts. It’s the architecture behind them. Here’s how to build an agent from scratch, test it, fix it, and ship it with confidence.

Why most AI SEO skills fail

Here’s what a typical “AI SEO prompt” looks like on LinkedIn:

You are an SEO expert. Analyze the following website and provide a comprehensive audit with recommendations.

That’s it. One prompt. Maybe some formatting instructions. The person posts a screenshot of the output, gets 500 likes, and moves on. The output looks professional. It reads well. It’s also 40% wrong.

I know because I tried this exact approach. Early in the build, I pointed an agent at a website and said, “find SEO issues.” It came back with 20 findings. Eight didn’t exist. The agent had never visited some of the URLs it was reporting on.

Three problems kill single-prompt skills:

  • No tools: The agent has no way to actually check the website. It’s working from training data and guessing. When you ask, “Does this site have canonical tags?” the agent imagines what the site probably looks like rather than fetching the HTML and parsing it.
  • No verification: Nobody checks if the output is true. The agent says, “missing meta descriptions on 15 pages.” Which 15? Are those pages even indexed? Are they noindexed on purpose? No one asks. No one verifies.
  • No memory: Run the same skill twice, you get different output. Different structure. Different severity labels. Sometimes different findings entirely. There’s no consistency because there’s no template, no schema, no record of past runs.

If your skill is a prompt in a single file, you don’t have a skill. You have a coin flip.

Build SEO agent skills as workspaces

Every agent in our system has a workspace. Think of it like a new hire’s desk, stocked with everything they need. Here’s what the workspace looks like for the agent that crawls websites and maps their architecture:

agent-workspace/
  AGENTS.md          instructions, rules, output format
  SOUL.md            personality, principles, quality bar
  scripts/
    crawl_site.js    tool the agent calls to crawl
    parse_sitemap.sh tool to read XML sitemaps
  references/
    criteria.md      what counts as an issue vs noise
    gotchas.md       known false positives to watch for
  memory/
    runs.log         past execution history
  templates/
    output.md        expected output structure

Six components. One prompt file would cover maybe 20% of this.

AGENTS.md is the instruction manual 

I wrote thousands of words of methodology into AGENTS.md. Instead of “crawl the site,” I laid out the steps: “Start with the sitemap. If no sitemap exists, check /sitemap.xml, /sitemap_index.xml, and robots.txt for sitemap references.

Respect crawl-delay. Use a browser user-agent string, never a bare request. If you get 403s, note the pattern and try with different headers before reporting it as a block.”
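The crawl-delay rule above is the kind of thing worth encoding as a tool rather than re-deriving every run. A minimal Node sketch — the function name and the simplified group handling are mine, not the article's actual script:

```javascript
// Hypothetical helper: extract the Crawl-delay (in seconds) that applies to
// our user agent from raw robots.txt text. Returns null when none is set.
// Simplified: treats each User-agent line as starting a new group.
function crawlDelayFor(robotsTxt, userAgent) {
  let applies = false;
  let delay = null;
  for (const raw of robotsTxt.split("\n")) {
    const line = raw.split("#")[0].trim(); // strip comments
    if (line === "") continue;
    const [field, ...rest] = line.split(":");
    const value = rest.join(":").trim();
    switch (field.trim().toLowerCase()) {
      case "user-agent":
        // A group applies if it names us or uses the wildcard agent.
        applies = value === "*" ||
          userAgent.toLowerCase().includes(value.toLowerCase());
        break;
      case "crawl-delay":
        if (applies && !Number.isNaN(Number(value))) delay = Number(value);
        break;
    }
  }
  return delay;
}
```

The crawler can then translate the returned seconds into its throttling interval before the first real request.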

Scripts are the agent’s tools

The agent calls node crawl_site.js --url to analyze website data. It doesn’t write curl commands from scratch every time. That’s the difference between giving someone a toolbox and telling them to forge their own wrench.

References are the judgment calls

This contains criteria for what counts as an issue. Known false positives to watch for. Edge cases that took me 20 years to learn. The agent reads these when it encounters something ambiguous.

Memory is institutional knowledge

Here I keep a log of past runs:

  • What it found last time. 
  • How long the crawl took. 
  • What broke. 

The next execution benefits from the last.

Templates enforce consistency 

This is where I get specific about the output I want: “Use this exact structure. These exact fields. This severity scale.” Output templates are the difference between getting the same quality in run 14 as you did in run 1.

Walkthrough: Building the crawler from scratch

Let me show you exactly how I built the crawler. It maps a site’s architecture, discovers every page, and reports what it finds.

Version 1: The naive approach

I provided the instruction: “Crawl this website and list all pages.”

The agent wrote its own HTTP requests, used bare curl, and got blocked by the first site it touched. Every modern CDN blocks requests without a browser user-agent string, so it was dead on arrival.

Version 2: Added a script

I built crawl_site.js using Playwright. This version used a headless browser and a real user-agent. The agent calls the script instead of writing its own requests.

This worked on small sites, but it crashed on anything over 200 pages. Because there was no rate limiting and no resume capability, it hammered servers until they blocked us.

Version 3: Introducing rate limiting and resume

I added throttling, defaulting to two requests per second and slowing further for CDN-protected sites. The agent reads robots.txt and adjusts its speed without asking permission. I also added checkpoint files so a crashed crawl can resume from where it stopped.

This worked on most sites, but it failed on sites that require JavaScript rendering.

Version 4: JavaScript rendering

This time, I added a browser rendering mode. The agent detects whether a site is a single-page app (React, Next.js, Angular) and automatically switches to full browser rendering.

It also compares rendered HTML against source HTML, and I found real issues this way: sites where the source HTML was an empty shell but the rendered page was full of content. Google might or might not render it properly. Now we check both.
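One way to sketch that source-vs-rendered comparison — the thresholds and function names here are hypothetical starting points, not the article's actual logic:

```javascript
// Strip scripts, styles, and tags, then measure how much visible text is left.
function visibleTextLength(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim().length;
}

// Flag pages whose raw source carries almost no visible text compared to the
// rendered DOM — a sign of a client-side-rendered shell.
function looksLikeEmptyShell(sourceHtml, renderedHtml) {
  const src = visibleTextLength(sourceHtml);
  const rendered = visibleTextLength(renderedHtml);
  // Arbitrary starting thresholds; tune against real sites.
  return rendered > 200 && src < rendered * 0.1;
}
```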

This version worked on everything, but the output was inconsistent between runs.

Version 5: Time for templates and memory

For this version, I added templates/output.md with exact fields: URL count, sitemap coverage, blocked paths, response code distribution, render mode used, and issues found. This way every run produces the same structure.

I also added memory/runs.log. The agent appends a summary after every execution. Next time it runs, it reads the log and can compare results, like “Last crawl found 485 pages. This crawl found 487. Two new pages added.”

Version 5 is what we run today. Five iterations in one day of building.

THE CRAWLER'S EVOLUTION

  v1: Raw curl           → blocked everywhere
  v2: Playwright script  → crashed on large sites
  v3: Rate limiting      → couldn't handle JS sites
  v4: Browser rendering  → inconsistent output
  v5: Templates + memory → stable, consistent, reliable

  Time: 1 day. Lesson: the first version never works.

The pattern is always the same: Start small, hit a wall, fix the wall, hit the next wall.

Five versions in one day doesn’t mean five failures. It means five lessons that are now permanently encoded. I’ve rebuilt delivery systems four times over 20 years. The process doesn’t change. You start with what’s elegant, then reality hits, and you end up with what works.

Tip: Don’t try to build the perfect skill on the first attempt. Build the simplest thing that could possibly work. Run it on real data and watch it fail. The failures tell you exactly what to add next. Every version of our crawler was a direct response to a specific failure. Not a feature we imagined. A problem we hit.

Equip agents with the right tools

This is the most important architectural decision I made.

When you write “use curl to fetch the sitemap” in your instructions, the agent generates a curl command from scratch every time. Sometimes it adds the right headers. Sometimes it doesn’t. Sometimes it follows redirects. Sometimes it forgets.

When you give the agent a script called parse_sitemap.sh, it calls the script. The script always has the right headers, always follows redirects, and always handles edge cases. The agent’s judgment goes into WHEN to call the tool and WHAT to do with the results. The tool handles HOW.

Our agents have tools for everything:

  • crawl_site.js: Playwright-based crawler with rate limiting, resume, and rendering
  • parse_sitemap.sh: Fetches and parses XML sitemaps, counts URLs, detects nested indexes
  • check_status.sh: Tests HTTP response codes with proper user-agent strings
  • extract_links.sh: Pulls internal and external links from page HTML

The agent decides which tools to use and what parameters to set. The crawler chooses its own crawl speed based on what it encounters. It reads robots.txt and adjusts. It has judgment within guardrails.

Think of it this way: You give a new hire a CRM, not instructions on how to build a database. The tools are the CRM. The instructions are the process for using them.

Progressive disclosure: Don’t dump everything at once

Here’s a mistake I made early: I put everything in AGENTS.md. Every rule. Every edge case. Every gotcha. Thousands of words.

The agent got confused. It had too much context and it started prioritizing obscure edge cases over common tasks. It would spend time checking for hash routing issues on a WordPress blog.

The fix: progressive disclosure.

Core rules that affect the 80% case go in AGENTS.md. This is what the agent needs to know for every single run.

Edge cases go in references/gotchas.md. The agent reads this file when it encounters something ambiguous. Not before every task. Only when it needs it.

Criteria for severity scoring go in references/criteria.md. The agent checks this when it finds an issue and needs to decide how bad it is. Not upfront.

This is the same way a skilled employee operates. They know the core process by heart. They check the handbook when something weird comes up. They don’t re-read the entire handbook before answering every email.

If your agent output is inconsistent but your instructions are detailed, the problem is usually too much context. Agents, like new hires, perform better with clear priorities and a reference shelf than with a 50-page manual they have to digest before every task.

The 10 gotchas: Failure modes that will burn you

Every one of these lessons cost me hours. They’re now encoded in our agents’ references/gotchas.md files so they can’t happen again.

Agents hallucinate data they can’t verify 

I asked the research agent to find law firms and count their attorneys. It made every number up. It had never visited any of their websites.

Only ask agents to produce data they can actually fetch and verify. Separate what they know (training data) from what they can prove (fetched data).

Knowledge doesn’t transfer between agents

A fix I figured out on day one (use a browser user-agent string to avoid CDN blocks) had to be re-taught to every new agent. On day 34, a brand new agent hit the exact same problem.

Agents don’t share memories. Encode shared lessons in a common gotchas file that multiple agents can reference.

Output format drifts between runs

The same prompt can produce different field names: “note” vs. “assessment.” “lead_score” vs. “qualification_rating.” Run it twice and you get two different schemas.

The fix: Create strict output templates with exact field names. Not “write a report.” “Use this exact template with these exact fields.”
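Enforcing "these exact fields" can be done with a small validator that rejects drifted output before it reaches a report. A sketch with hypothetical field names and severity scale:

```javascript
// Illustrative schema — not the article's actual template fields.
const REQUIRED_FIELDS = ["url", "issue", "severity", "evidence", "recommendation"];
const SEVERITY_SCALE = ["critical", "high", "medium", "low"];

// Reject any finding whose keys or severity drift from the template.
function validateFinding(finding) {
  const keys = Object.keys(finding).sort();
  const expected = [...REQUIRED_FIELDS].sort();
  if (JSON.stringify(keys) !== JSON.stringify(expected)) {
    return { ok: false, reason: `fields must be exactly: ${REQUIRED_FIELDS.join(", ")}` };
  }
  if (!SEVERITY_SCALE.includes(finding.severity)) {
    return { ok: false, reason: `severity must be one of: ${SEVERITY_SCALE.join(", ")}` };
  }
  return { ok: true };
}
```

Run every agent finding through the validator and bounce failures back with the reason; the agent corrects its output instead of the schema silently drifting.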

Agents confidently report issues that don’t exist

The first three audits delivered false positives with total confidence.

The fix wasn’t a better prompt. It was a better boss. A dedicated reviewer agent whose only job is to verify everyone else’s work. The same reason code review exists for human developers.

Bare HTTP requests get blocked everywhere

Every modern CDN blocks requests without a browser user-agent string. The crawler learned this on audit number two when an entire site returned 403s.

All it required was a one-line fix, and now it’s in the gotchas file. Every new agent reads it on day one.
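The class of fix looks something like this, using only the standard library. The exact user-agent string below is an illustrative assumption, not the one in our gotchas file.

```python
# Send a browser-like User-Agent instead of the library default.
# Without it, many CDNs answer the default "Python-urllib/3.x"
# agent string with a 403. The UA value here is illustrative.
import urllib.request

BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/120.0 Safari/537.36")

def make_request(url: str) -> urllib.request.Request:
    return urllib.request.Request(url, headers={"User-Agent": BROWSER_UA})

req = make_request("https://example.com/")
print(req.get_header("User-agent"))
```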

Don’t guess URL paths

Agents love to construct URLs they think should exist: /about-us, /blog, /contact. Half the time, those URLs 404.

My rule: fetch the homepage first, read the navigation, and follow real links. Never guess.
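A stdlib-only sketch of “read the navigation, follow real links.” The sample HTML is invented for illustration; in practice the markup comes from the fetched homepage.

```python
# Collect only URLs that actually appear in the page's markup,
# instead of guessing paths like /about-us that may 404.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# Invented sample markup standing in for a fetched homepage:
homepage_html = """
<nav>
  <a href="/about">About</a>
  <a href="/services/seo">SEO Services</a>
  <a href="/contact-us">Contact</a>
</nav>
"""

collector = LinkCollector()
collector.feed(homepage_html)
print(collector.links)  # → ['/about', '/services/seo', '/contact-us']
```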

‘Done’ vs. ‘in review’ matters 

Agents marked tasks as “done” when posting their findings. Wrong. “Done” means approved. “In review” means waiting for human verification.

This small distinction has a huge impact on workflow clarity when you have 10 agents posting work simultaneously.

Categories must be hyper-specific

“Fintech” is useless for prospecting because it’s too broad. “PI law firms in Houston” works. Every company in a category should directly compete with every other company.

My first attempt at sales categories was “Personal finance & fintech.” A crypto exchange doesn’t compete with a budgeting app. Lesson learned in 20 minutes.

Never ask an LLM to compile data

Unless you want fabricated results. I asked an agent to summarize findings from five separate reports into one document. It invented findings that weren’t in any of the source reports.

Always build data compilations programmatically. Script it. Never prompt it.
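“Script it, never prompt it” can be as simple as a deterministic merge: a script can only ever output what the source reports contain. The report contents below are invented for illustration.

```python
# Deterministic compilation of findings from multiple agent reports.
# Unlike an LLM summary, this cannot invent a finding that is not
# present in the inputs. Report contents are invented examples.
reports = [
    {"agent": "crawler", "findings": ["redirect chain on /old-blog"]},
    {"agent": "schema", "findings": ["missing Product markup on /shop"]},
    {"agent": "links", "findings": ["redirect chain on /old-blog",
                                    "orphan page /press-2019"]},
]

def compile_findings(reports: list[dict]) -> list[str]:
    """Merge and de-duplicate findings, preserving first-seen order."""
    seen, merged = set(), []
    for report in reports:
        for finding in report["findings"]:
            if finding not in seen:
                seen.add(finding)
                merged.append(finding)
    return merged

print(compile_findings(reports))
```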

Agents will try things you never planned

The research agent tried to call an API we never set up. It assumed we had access because it knew the API existed.

The fix: Be explicit about what tools are available. If a script doesn’t exist in the scripts folder, the agent can’t use it. Boundaries prevent creative failures.

Build the reviewer first

This is counterintuitive. When you’re excited about building, you want to build the workers. The crawler. The analyzers. The fun parts.

Build the reviewer first. Without a review layer, you have no way to measure quality. You ship the first audit and it looks great. But 40% of the findings are wrong. You don’t know that until a client or a colleague spots it.

Our review agent reads every finding from every specialist agent. It checks:

  • Does the evidence support the claim?
  • Is the severity appropriate for the actual impact?
  • Are there duplicates across different specialists?
  • Did the agent check what it says it checked?
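The duplicates check above is mechanical enough to sketch in code. Keying on (url, check) is an assumed convention for illustration, not our reviewer’s actual schema.

```python
# Flag findings reported by more than one specialist for the same
# page and check. The (url, check) key is an assumed convention.
from collections import defaultdict

findings = [
    {"agent": "crawler", "url": "/pricing", "check": "missing-canonical"},
    {"agent": "schema",  "url": "/pricing", "check": "missing-canonical"},
    {"agent": "links",   "url": "/blog",    "check": "orphan-page"},
]

def find_duplicates(findings):
    groups = defaultdict(list)
    for f in findings:
        groups[(f["url"], f["check"])].append(f["agent"])
    return {key: agents for key, agents in groups.items() if len(agents) > 1}

print(find_duplicates(findings))
```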

That single agent was the biggest quality improvement I made. Bigger than any prompt tweak. Bigger than any new tool.

The human approval rate across 270 internal linking recommendations: 99.6%. That number exists because a reviewer verifies every single one.

I’ve seen the same pattern with human SEO teams for 20 years. The teams that produce great work aren’t the ones with the best analysts. They’re the ones with the best review process. The analysis is table stakes. The review is the product.

BUILD ORDER (WHAT I LEARNED THE HARD WAY)

  What I did first:     Build workers → Ship output → Discover quality problems → Build reviewer
  What I should have done: Build reviewer → Build workers → Ship reviewed output → Iterate both

  The reviewer defines quality. Build it first. Everything else gets measured against it.

Tip: If you’re building multiple agents, the reviewer should be the first agent you build. Define what “good output” looks like before you build the thing that produces output. Otherwise, you’re shipping hallucinations with formatting. I learned this across three audits that were embarrassing in hindsight.

The validation standard (Our unfair advantage)

The reviewer catches technical errors. But there’s a higher bar than “technically correct.”

We have a real SEO agency with real clients and a team with 50 years of combined experience. Every agent finding gets validated against one question: “Would we stake our reputation on this?”

Would we actually send this to a client, put our name on the report, and tell the developer to build it?

Below are four tests we use for every finding:

  • The Google engineer test: If this client’s cousin works at Google, would they read this finding and nod? Would they say, “Yes, this is a real issue, this makes sense”? If the answer is no, it doesn’t ship.
  • The developer test: Can a developer reproduce this without asking a single follow-up question? “Fix your canonicals” fails. “Change CANONICAL_BASE_URL from http to https in your production .env” passes.
  • The agency reputation test: Would we defend this finding in a client meeting? If I’d be embarrassed explaining it to a technical CMO, it gets cut.
  • The implementation test: Is this specific enough to actually fix? Not “improve your page speed” but “your hero video is 3.4MB, which is 72% of total page weight. Serve a compressed version to mobile. Here’s the file.”

This is our unfair advantage. We’re not building agents in a vacuum. Most people building AI SEO tools have never run a real audit. They don’t know what “good” looks like. We do. We’ve been delivering it for 20 years with real clients. That’s why our approval rate is 99.6%.

Sandbox testing: Train on planted bugs

You don’t train an agent on real client sites. You build a test environment where you KNOW the answers. We built two sandbox websites with SEO issues we planted on purpose:

  • A WordPress-style site with 27+ planted issues: missing canonicals, redirect chains, orphan pages, duplicate content, broken schema markup.
  • A Node.js site simulating React/Next.js/Angular patterns with ~90 planted issues: empty SPA shells, hash routing, stale cached pages, hydration mismatches, cloaking.

The training loop:

  • Run agent against sandbox.
  • Compare agent’s findings to known issues.
  • Agent missed something? Fix the instructions.
  • Agent reported a false positive? Add it to gotchas.md.
  • Re-run. Compare again.
  • Only when it passes the sandbox consistently does it touch real data.
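The compare step of that loop is just set arithmetic once planted and reported issues share stable IDs. The issue IDs below are invented for illustration.

```python
# Compare an agent's sandbox run against the planted ground truth.
# Issue IDs are invented; any stable "check:path" key works.
planted = {"missing-canonical:/about", "redirect-chain:/old", "orphan:/press"}
reported = {"missing-canonical:/about", "orphan:/press", "thin-content:/faq"}

missed = planted - reported           # fix the instructions
false_positives = reported - planted  # add to gotchas.md
caught = planted & reported

print(sorted(missed))           # → ['redirect-chain:/old']
print(sorted(false_positives))  # → ['thin-content:/faq']
print(f"recall: {len(caught) / len(planted):.0%}")
```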

Think of it like a driving test course. Every accident on real roads becomes a new obstacle on the course. New drivers face every known challenge before they hit the highway.

The sandbox is a living test suite. Every verified issue from a real audit gets baked back in. It only gets harder. The agents only get better.

Consistency: The unsexy secret

Nobody writes about this because it’s boring. But consistency is what separates a demo from a product.

Three things that make output consistent:

  • Templates: Every agent has an output template in templates/output.md: Exact fields, structure, and severity scale. If the output looks different every run, you don’t need a better prompt. You need a template file.
  • Run logs: After every execution, the agent appends a summary to memory/runs.log. Timestamp, site, pages crawled, issues found, duration. The next run reads this log. It knows what happened last time. It can compare and provide outputs like, “Found 14 issues last run. Found 16 this run. 2 new issues identified.”
  • Schema enforcement: Field names are locked: “severity” not “priority,” “url” not “page_url,” “description” not “summary.” When you let field names drift, downstream tooling breaks. Templates solve this permanently.
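One way to enforce locked field names is to map known drift back to the canonical schema and reject anything unknown. The alias table below reflects the drift examples above; it is illustrative, not exhaustive.

```python
# Normalize drifted field names to the locked schema; fail loudly on
# anything unrecognized. Alias table is illustrative, not exhaustive.
LOCKED_FIELDS = {"severity", "url", "description"}
ALIASES = {"priority": "severity", "page_url": "url", "summary": "description"}

def enforce_schema(record: dict) -> dict:
    normalized = {}
    for key, value in record.items():
        canonical = ALIASES.get(key, key)
        if canonical not in LOCKED_FIELDS:
            raise ValueError(f"unknown field: {key!r}")
        normalized[canonical] = value
    return normalized

drifted = {"priority": "high", "page_url": "/pricing", "summary": "thin page"}
print(enforce_schema(drifted))
```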

If your agent output looks different every run, you need a template file, not a better prompt. I cannot stress this enough. The single fastest way to improve quality for any agent is a strict output template.

The stack that makes it work

A quick note on infrastructure, because the tools matter.

Our agents run on OpenClaw. It’s the runtime that handles wake-ups, sessions, memory, and tool routing. Think of it as the operating system the agents run on. When an agent finishes one task and needs to pick up the next, OpenClaw handles that transition. When an agent needs to remember what it did last session, OpenClaw provides that memory.

Paperclip is the company OS. Org charts, goals, issue tracking, task assignments. It’s where agents coordinate. When the crawler finishes mapping a site and needs to hand off to the specialist agents, Paperclip manages that handoff through its issue system. Agents create tasks for each other. Auto-wake on assignment.

Claude Code is the builder. Every script, every agent instruction file, every tool was built with Claude Code running Opus 4.6. I’m a vibe coder with 20 years of SEO expertise and zero traditional programming training. Claude Code turns domain knowledge into working software.

The combination: OpenClaw runs the agents. Paperclip coordinates them. Claude Code builds everything.

See the complete picture of your search visibility.

Track, optimize, and win in Google and AI search from one platform.

Start Free Trial
Get started with
Semrush One Logo

The result

This process resulted in 14+ audits completed with 12 to 20 developer-ready tickets per audit, including exact URLs and fix instructions. All produced in hours, not weeks.

We have a 99.6% approval rate on internal linking recommendations on 270 links across two sites, verified by a dedicated review process. 

We completed more than 80 SEO checks mapped across seven specialist agents. Each check has expected outcomes, evidence requirements, and false positive rules. Every finding is specific (i.e., “the main app JavaScript bundle is 78% unused. Here are the exact files to fix”).

That level of specificity comes from the skill architecture. The folder structure. The tools. The references. The templates. The review layer. Not the prompt.

If you want to build SEO agent skills that actually work, stop writing prompts and start building workspaces. Give your agents tools, not instructions. Test on sandboxes, not clients.

Build the reviewer first. Enforce templates. Log everything. The first version will fail. The fifth version will surprise you.

This is how you turn agent output into something repeatable. The same system produces the same quality — whether it’s the first audit or the 14th — because every step is structured, verified, and encoded.

Not because the AI is smarter. Because the architecture is.

Performance Max for B2B: 5 best practices

1 May 2026 at 17:45

Over the past few years, Performance Max has gone from an opaque experiment to a more capable — though still imperfect — campaign type for B2B marketers.

The fundamentals haven’t changed: skepticism still matters, first-party data is critical, experimentation is non-negotiable, and actionable reporting drives optimization. What has changed is how much better Google has gotten at operationalizing those inputs.

That means your Performance Max strategy needs to adapt. Here are five best practices for running more effective PMax campaigns for B2B today.

1. Guide AI with the right inputs

In 2022, given the automated nature of PMax campaigns and the aggressive way Google reps were pushing them, I predicted we’d see an accelerated move toward AI integration. That’s certainly played out, probably in part because of competitive pressures introduced by ChatGPT and the like. 

AI Max for Search (launched in 2025) and PMax are both being prioritized by Google, and that’s not necessarily a bad thing: Google hasn’t deprecated standard Search campaigns for B2B, and it has shipped a slew of helpful updates that make PMax more viable for B2B. 

Three updates worth using include: 

  • Search themes, which are useful for more precise targeting.
  • Brand exclusions, which help minimize CPC inflation and over-investment on less-incremental queries.
  • Account-level channel reporting, which gives you a single dashboard look at performance across campaigns. For this feature, segment by conversion metrics to drill down on ROI by channel. You’ll quickly see overperformers where you can increase investment and underperformers that cry out for further optimization or reduced budget.  
Your customers search everywhere. Make sure your brand shows up.

The SEO toolkit you know, plus the AI visibility data you need.

Start Free Trial
Get started with
Semrush One Logo

2. Address persistent lead quality issues

B2B lead quality in search campaigns has always been a challenge, and PMax’s relative lack of advertiser control makes that challenge tougher. I’ve pushed offline conversion tracking (OCT) since we’ve had that capability, but it’s an absolute non-negotiable for B2B campaigns.

Along with OCT, leverage a relatively new functionality, enhanced conversions for leads, and work around the edges by incorporating reCAPTCHA and testing other mechanisms to reduce PMax spam leads.

Dig deeper: The parts of Performance Max you can actually control

3. Build stronger audience signals

Citing the phase-out of third-party cookies that still hasn’t happened (!), Google officially sunsetted Similar Audiences in 2023, which was a big loss for advertisers.

To compensate, understand and adapt according to the nature of PMax targeting, which is based on audience signals. Feed the AI high-quality first-party data (CRM lists) and let the algorithm find “lookalikes” through its own internal signals.

CRM lists are obviously critical for B2B, which should give you even more incentive to clean up and segment your CRM data. Audience lists closest to the point of revenue (e.g., SQLs, or revenue if you don’t have enough closed-won data to send strong signals) are especially valuable for finding high-value new users.

Get the newsletter search marketers rely on.


4. Make creative a performance lever

Creative is an important part of the puzzle for PMax. Good creative can prompt the right audience to engage, and great creative can deter the wrong audience from engaging.

Because YouTube is now a massive part of PMax campaigns, video — which has never been a B2B strength — should be prioritized more than ever for performance marketing.

Google has made this easier by adding the ability to build AI-generated assets right in the Google Ads interface. Just recently, it launched an important complementary feature in beta: PMax A/B creative testing, which helps advertisers understand which creatives are actually driving performance and uses test-and-control structures to surface winning (and losing) elements.

Dig deeper: Is Google Ads Asset Studio a game changer? Not so fast

5. Use reporting to drive decisions

A major source of frustration with PMax has been a lack of transparency into results. Over the last few years, Google has introduced reporting updates to address some of those concerns.

Search term insights and auction insights in the Insights tab provide more visibility into performance. Search term insights show how your ads perform for the queries users actually type, including how those ads are being matched and served. This added nuance makes optimization more precise.

Auction insights add competitive context, showing how your campaigns perform against others in the same auctions through metrics like impression share and outranking share.

Finally, asset-level reporting brings visibility to creative performance, with data on impressions, clicks, cost, and conversions for each asset.

Together, these updates give you a clearer view into what’s driving performance — and where to focus optimization efforts.


Make Performance Max work for you

Taken together, recent updates make PMax more viable for B2B marketers than it used to be, especially for those with strong first-party data to train bidding algorithms and a need to find new customer pockets.

After more than 10 years in marketing, I still prefer having controllable levers — and I’m not willing to fully trust Google to act more in my (or my clients’) best interests than its own. Use everything at your disposal to make PMax campaigns work for you, and keep an eye out for new features Google releases that can give you more visibility and control over your account performance.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

A blueprint for semantic programmatic SEO

1 May 2026 at 16:00
A blueprint for semantic programmatic SEO

Programmatic SEO (pSEO) has been viewed with suspicion by the market. For many SEOs, the term is synonymous with low-quality pages, duplicate content, and the old tactic of “find and replace” city names in static templates.

Google’s spam policies on scaled content abuse are clear: generating vast amounts of unoriginal content primarily to manipulate search rankings is a violation.

Modern pSEO replaces mass page generation with an infrastructure that answers thousands of specific search intents with local nuance and semantic depth at a scale that isn’t possible manually.

This blueprint shows how to evolve from syntax-based pSEO (swapping keywords) to semantics-based pSEO (meaning and context), using a methodology we’ve applied to major players in Brazil.

The fallacy of the static template vs. semantic granularity

The most common mistake in a pSEO project is starting with the template, not the data. The old mindset said: “I have a template for ‘Best Hotel in [City].’ I’ll replicate this for 500 cities.”

The problem? The search intent for “Best Hotel in [Las Vegas]” (focused on nightlife, casinos, and luxury) can be radically different from the intent for “Best Hotel in [Orlando]” (focused on family suites, park shuttles, and pools). The user priorities, amenities sought, and decision-making criteria change completely.

The semantic approach requires us to use AI to granularize content. Instead of just swapping the {{City}} variable, we use LLMs to rewrite entire sections of the page based on the specific travel intent of that destination.

We don’t want to create 1,000 pages that say the same thing. We want 1,000 pages that answer 1,000 unique travel needs while maintaining a scalable technical structure.


Strategy before scale: The authority map

Before writing a single line of content, you must answer a critical question: Where do I have permission to rank?

Many pSEO projects fail because they try to cover topics where the domain lacks historical authority. The solution we developed involves a deep analysis of topic clusters based on real Google Search Console (GSC) data, not just third-party search volume.

The authority map methodology works in three stages:

  • Cluster audit: Identify which topics the domain already dominates, which are opportunities, and where semantic gaps exist.
  • Priority definition: pSEO should be used surgically to fill these gaps and strengthen topical authority, not to shoot in all directions.
  • Connection with the calendar: The pSEO strategy must be born from this data. If GSC shows you have growing authority in a topic like “Mortgage Credit,” that is where scale should be applied first.

From there, AI suggests themes and direction, taking into account seasonality and brand guide specifications. This approach transforms pSEO from a “gamble” into a tactic of territorial defense and expansion based on proprietary data.

Solving ‘brand hallucination’: Context governance

The biggest barrier to AI adoption in enterprise companies is brand consistency. How do you ensure that 500 AI-generated articles don’t sound generic or, even worse, hallucinate information outside the company’s tone of voice?

The answer lies in context governance. Instead of relying on isolated prompts, the pSEO architecture must include a brand guidelines layer that acts as a guardian before text generation. This means systematically injecting:

  • Brand persona: (e.g., “We are technical, but accessible”).
  • Negative constraints: (e.g., “Never use the word ‘cheap,’ use ‘affordable’”).
  • Proprietary data: Institutional information that AI doesn’t have in its training data.
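The injection itself can be a thin assembly layer that builds every generation prompt from the central brand guide. The guide contents below are invented placeholders, not any client’s actual guidelines.

```python
# Assemble a generation prompt from a central brand guide instead of
# ad-hoc prompting. Guide contents are invented placeholders.
BRAND_GUIDE = {
    "persona": "We are technical, but accessible.",
    "banned_terms": {"cheap": "affordable"},
    "facts": ["18 higher-education institutions in the group."],
}

def build_prompt(task: str, guide: dict) -> str:
    constraints = "; ".join(
        f"never use '{bad}', use '{good}'"
        for bad, good in guide["banned_terms"].items()
    )
    return (
        f"Persona: {guide['persona']}\n"
        f"Constraints: {constraints}\n"
        f"Proprietary facts: {' '.join(guide['facts'])}\n"
        f"Task: {task}"
    )

prompt = build_prompt("Write a Black Friday landing section.", BRAND_GUIDE)
print(prompt)
```

Because every agent reads the same guide, two sites in the same group can cover the same topic without converging on the same voice.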

By centralizing these guidelines in a digital brand guide that feeds all AI agents, we ensure that multiple sites within the same corporate group (such as a retail conglomerate) maintain their distinct verbal identities, even when producing content on the same topic (like Black Friday) simultaneously. 

The AI stops being a “junior copywriter” and starts acting as a specialist trained in the company’s culture.


The architecture: The semantic mesh (internal linking)

You’ve created 1,000 excellent pages. How do you ensure Google finds and values all of them? The answer isn’t using “related posts” plugins that only look for matching tags. You need to create a strategy based on real data.

The end of the ‘dead end’

You don’t want the user to land on a page and leave. You want to offer the next logical step. Cross-reference search intent with the destination:

  • The practical example: If a user lands on the site searching for “What is a CRM,” they are in the discovery phase. If that page doesn’t link semantically to “Advantages of [your company’s] CRM,” the user journey “dies” there. The semantic mesh connects the question to the solution.

Strategic reasoning in practice

Instead of randomness, our analysis works based on semantic meaning. The AI identifies: 

  • “I noticed you are about to write about ‘customer retention.’ We have an older article about ‘churn rate’ that complements this topic perfectly. Insert a link to it.”

The tool suggests links between these pages because the context is relevant, strengthening the site’s Topical Mesh.

In programmatic SEO projects, where site depth can grow rapidly, this automation via vectors is the only way to ensure no good page gets forgotten at the bottom of the index.

This closes the loop of topical authority, ensuring no page generated at scale becomes an orphan page.
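A toy sketch of vector-based link suggestion: embed each page (here with made-up three-dimensional vectors), then link to the most similar neighbor above a threshold. Real pipelines use learned embeddings; everything below is an illustrative assumption.

```python
# Suggest an internal link to the semantically closest page, using
# cosine similarity. Page vectors here are made up for illustration;
# production systems would use real embeddings.
import math

pages = {
    "customer-retention": [0.9, 0.1, 0.2],
    "churn-rate":         [0.85, 0.15, 0.25],
    "crm-pricing":        [0.1, 0.9, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def suggest_link(source: str, threshold: float = 0.8):
    candidates = [
        (cosine(pages[source], vec), slug)
        for slug, vec in pages.items() if slug != source
    ]
    score, slug = max(candidates)
    return slug if score >= threshold else None

print(suggest_link("customer-retention"))  # → churn-rate
```

The threshold matters: a page with no sufficiently close neighbor gets no forced link, which is exactly how you avoid the random “related posts” behavior described above.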

Case study: Regionalization and seasonality at scale

Theory is nice, but seeing it in practice is even better. Let’s analyze the case of Ânima Educação, one of the largest private education players in Brazil, with about 310,000 students and 18 higher education institutions.

The challenge

The National High School Exam (ENEM) is the “Black Friday” of Brazilian education. Search volume explodes in a short period, competition is brutal, and search intents shift rapidly (from “how to study” to “what is my score good for”). Furthermore, Brazil has continental dimensions; the questions of a student in the Northeast are different from those of a student in the extreme South.

The execution

Using the semantic pSEO methodology and the brand governance mentioned above, it was possible to structure complete coverage of the candidate journey — from exam preparation to the release of grades. 

We ensured that all 18 brands were positioned to answer student questions at the exact moment of the search, respecting local nuances.

The results

  • Scale with precision: Over five months, hundreds of undergraduate course pages and articles were optimized or created with granular local relevance.
  • Business impact: Surpassed the organic revenue target by 110% during the critical ENEM season.
  • Omnichannel dominance: Visibility across Google Search, Google Discover, AI Overviews, and LLMs like Gemini and ChatGPT.
  • Strategic shift: The SEO team transitioned from repetitive manual tasks to high-level strategic oversight.

The technical guardian: Conversational monitoring

Scaling content without scaling technical monitoring is a recipe for disaster. Publishing 500 pages that result in 404 errors, redirect loops, or poor Core Web Vitals (CWV) can destroy the site’s crawl budget.

Modern pSEO requires a layer of real-time technical SEO. It isn’t enough to wait for the monthly report. You need to connect data to the workflow. 

The trend now is the use of technical SEO agents — conversational interfaces that allow the professional to ask the data: “Of the 200 pages published today, which ones have indexing issues?” or “Which clusters are suffering from high LCP?”

This closes the cycle:

  • Planning (authority map).
  • Execution (pSEO with brand governance and semantic linking).
  • Monitoring (technical agent).

Putting semantic pSEO into practice

Programmatic SEO has ceased to be about volume to become about relevance. Success won’t come from publishing 10,000 pages tomorrow, but from building an infrastructure that delivers genuine value at scale.

You can use this semantic pSEO roadmap to start your transformation:

  • Start with data, not templates: Use your authority map (GSC) to identify where you already have permission to grow. Don’t waste resources attacking territories where your brand has no history.
  • Implement context governance: Before scaling, create the “rules of the game.” Inject your brand guidelines and proprietary data into prompts to avoid generic content and hallucinations. The AI should sound like your best expert.
  • Build bridges, not islands: Ensure every new page is integrated into a robust semantic mesh. Use internal linking to transfer authority and guide the user toward conversion, avoiding dead ends.
  • Monitor with AI: Abandon sporadic manual audits. Adopt technical agents that monitor your site’s health in real time as you scale.

The future of SEO isn’t about who creates the most content. It’s about who can unite the scale of the machine with the sensitivity of the human to deliver the best answer, at the right moment, for each individual user.

Inside ChatGPT ads: What the data tells us and what’s coming next by Adthena

1 May 2026 at 15:00

The trial is live, limited to the U.S. for now, and moving faster than you likely expected. ChatGPT ads launched Feb. 9 for logged-in users on Free and Go tiers, with 600+ advertisers already in. 

With 800 million weekly active users, a global rollout of ChatGPT ads is a matter of when, not if. 

OpenAI has confirmed the next expansion to Australia, New Zealand, and Canada. The latest update from Adthena trialists suggests the UK could see ads as early as mid-May.

We’ve tracked ChatGPT ad placements since rollout. With an index of 50,000+ daily placements across B2B software, ecommerce, fintech, and consumer verticals, we’ve had a front-row view of how this format is evolving. Here’s what we’ve found.

What ChatGPT ads actually look like

ChatGPT ads appear inline within conversation responses. When you ask something with commercial intent like “best weekend getaway” or “top running shoes under $100,” a sponsored result can appear alongside the AI’s answer, clearly labeled “Sponsored.”

This isn’t a search bar. It’s a conversation. Users arrive already engaged, already researching, often close to a decision. 

The format is tighter than traditional search: no sitelinks or extensions — just a headline, short body copy, and a destination.

But here’s what we didn’t expect. Our data shows what we’re calling the Adthena “Double Parked” phenomenon: a single brand appearing twice in the same response.

We spotted New Balance with two separate sponsored placements in one ChatGPT answer. This raises a key question around visibility, frequency, and what it means to own a conversation on this platform.

10 things we’ve learned from 50,000+ daily placements

If you move fast, this is a rare moment: a new format, an uncontested landscape, and data most competitors don’t have yet. Here’s what it shows.

  1. Headlines follow a “Brand: Benefit” formula. A name, a colon, a value claim. Think “Betterment: 5.25% APY Cash Account.” Dominant across top performers.
  2. Almost every ad leads with the brand name. Awareness thinking for a format where users are already deep in a conversation, not just entering a search bar.
  3. Headlines average just 30 characters, with a ceiling around 36. The constraint forces hyper-concise messaging and every word earns its place.
  4. Body copy runs around 19 words, structured as two tight sentences. One lead proof point, one offer or nudge. One reason to click.
  5. Context mirroring is a defining feature. The strongest ads echo the user’s query directly. A running shoe ad referencing “the transition from 5k to 21.1k” isn’t a coincidence.
  6. The $ symbol drives conversion. Specific dollar figures, precise APY rates, credit amounts. Concrete claims consistently outperform vague promises in intent-heavy environments.
  7. Numbers dominate body copy. Specs, trial lengths, rates. Hard numbers feel more native and trustworthy than soft superlatives in a research-led environment.
  8. “Free” is the most common conversion lever. It removes friction for users already in research mode and close to a decision.
  9. CTAs are action-specific, and generic “Learn More” is virtually absent. “Open Account,” “Shop Cell Phones,” “Claim Credits.” Every CTA names the brand, offer, or next step.
  10. Tone is confident and measured. Exclamation marks are rare. The best ads mirror ChatGPT’s calm register—hype punctuation kills trust here.
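The observations above can double as a pre-flight lint for your own ChatGPT ad copy. The limits here (36-character headline ceiling, body length, no exclamation marks) come from the averages reported above; treating them as hard rules is an assumption for illustration, not an official spec.

```python
# Lint ChatGPT ad copy against the observed patterns. Thresholds are
# derived from the article's reported averages and are assumptions,
# not platform rules.
def lint_chatgpt_ad(headline: str, body: str, cta: str) -> list[str]:
    warnings = []
    if len(headline) > 36:
        warnings.append("headline exceeds the observed 36-character ceiling")
    if ":" not in headline:
        warnings.append("headline does not follow the 'Brand: Benefit' formula")
    if len(body.split()) > 25:
        warnings.append("body copy runs long; top ads average ~19 words")
    if "!" in headline + body:
        warnings.append("exclamation marks are rare in top performers")
    if cta.strip().lower() == "learn more":
        warnings.append("generic CTA; name the offer or next step")
    return warnings

print(lint_chatgpt_ad(
    headline="Betterment: 5.25% APY Cash Account",
    body="FDIC-insured through partner banks. Open an account in minutes.",
    cta="Open Account",
))  # → []
```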

What this means for your paid search strategy

Top-performing brands in ChatGPT don’t repurpose Google ad copy and hope for the best. They write for a conversational, intent-rich environment where users are already halfway through a decision before the ad appears.

Lead with your brand name. Anchor value in specifics. Make low-friction offers central to your creative. If you’re not thinking about context mirroring, you’re leaving performance on the table.

The bigger question is visibility. If your competitors show up in ChatGPT conversations and you don’t, you’re not just missing clicks — you’re missing the conversation.

See exactly what’s happening with Adthena’s ChatGPT Ads Intelligence

Knowing the trends is one thing. Knowing what your competitors are doing on your exact prompts is another. That’s the problem we set out to solve.

Right now, ChatGPT ads give you impressions and clicks — nothing more. No competitive context, no prompt-level visibility, no insight into who else appears in the same conversations or where you’re missing coverage. You’re optimizing blind.

Adthena’s ChatGPT Ads Intelligence changes that. Here’s what you get.

Your performance, in context

The Ads Performance tab gives you a live snapshot of your ChatGPT activity: ad presence rate, top-performing intent group, total impressions, average CTR, and unique competitors detected. The trend chart shows your presence over time so you can clearly see whether you’re gaining or losing momentum.

Know which topics you’re winning and where to close the gap

The Topics and Keywords Analysis view breaks down performance by intent group, showing your ad presence rate against the competitor average. Each group includes a built-in tactical recommendation, so you always know your next move.

See your own ads as users see them

The Ads Sampling tab shows all your ChatGPT creatives with their headline, description, image, and format. The insight panel highlights your top-performing creative and surfaces optimization opportunities, like pairing a price anchor with a time-limited offer.

Understand exactly what competitors are running

The Competitor Creative Analysis panel breaks down rival ads across your tracked prompts: the images they use, the dominant copy themes, and their format mix. No more guessing what your competition is doing.

Never miss a shift in the competitive landscape

The Ads Benchmarking tab shows who’s advertising on your prompts and how their presence changes week to week. The “What changed this week?” feed flags new entrants and share shifts in plain language before your next campaign review.

Find the gaps before your competitors do

The Competitor Gap Analysis table shows every prompt where competitors have presence and you don’t, flagged by intent group and competitor count. A clear, prioritized view of where to expand your ChatGPT coverage.

The first prompt is the new first click

We’re tracking early-stage data from a platform still in limited rollout. As OpenAI expands to new countries and the advertiser base grows, the competitive landscape will shift fast. Brands building their ChatGPT presence now — learning the format, testing creative, mapping competitive gaps — will have a meaningful head start over those who wait.

Don’t let competitors win the first prompt. Join the product waitlist to uncover your ChatGPT ads landscape. 

In the meantime, get your ads ready with Adthena’s free ChatGPT AdBridge. Connect your Google Ads account and we’ll build your ChatGPT ads setup with AI-enriched campaigns and smarter negative keywords — delivered to your inbox, ready to import.

Google Analytics introduces Task Assistant

30 April 2026 at 21:41

Google is trying to simplify one of its most complex products, helping advertisers and analysts get more value from Google Analytics without deep technical expertise.

What’s new. Google Analytics is rolling out Task Assistant, a guided workflow tool that surfaces tailored recommendations to improve property setup, data collection and reporting.

How it works. Available in the left-hand navigation, Task Assistant organizes recommendations into clear categories like connecting accounts, enhancing reporting and fixing data issues. Users can mark tasks as complete as they go or skip items that don’t align with their business goals, creating a more flexible setup experience.

Why we care. Google is making it easier to identify gaps in tracking and fix them quickly, which leads to more reliable data and better decision-making. Task Assistant helps ensure Analytics is properly configured without requiring deep expertise, reducing the risk of missed insights or inaccurate reporting. Ultimately, better data setup means more confident optimization of campaigns and budgets.

Between the lines. Analytics platforms are powerful but often underutilized due to poor configuration. Task Assistant is Google’s attempt to reduce that friction by turning setup into a step-by-step process rather than a manual audit.

The bottom line. Task Assistant aims to make Google Analytics more actionable, guiding users toward better data quality and more effective measurement with less guesswork.

Google Ads adds “Association” metric to Brand Lift Studies

30 April 2026 at 19:35

Google is filling a key measurement gap between awareness and consideration, giving advertisers a clearer view of how their brand is actually perceived — not just remembered.

What’s new. Google Ads has introduced a new “Association” metric within Brand Lift Studies. Advertisers can define a concept, category or attribute, and Google will ask users a survey-style question: which brands they associate with that specific idea.

How it works. Instead of measuring simple recall, the metric evaluates whether audiences connect your brand to a desired positioning. That could mean “premium,” “sustainable,” or even a product category — offering a more nuanced read on brand perception.

Why we care. Google is giving you a way to measure brand positioning, not just awareness or recall. The new Association metric helps determine whether campaigns are actually shaping how consumers perceive a brand — a critical step between being known and being chosen. It also enables more strategic optimization of creative and messaging, especially for brands trying to own specific attributes or categories.

Between the lines. Brand Lift has traditionally focused on awareness, recall and consideration. Association sits in between, helping advertisers understand whether their messaging is shaping how people think about the brand, not just whether they recognize it.

The catch. There’s still a constraint: advertisers can only select three Brand Lift metrics per study, so adding Association means making trade-offs with existing KPIs.

The bottom line. Association gives advertisers a more strategic lens on brand building — measuring not just visibility, but whether campaigns are landing the intended message.

First seen. This update was first spotted by Google Ads expert Thomas Eccel, who shared it on LinkedIn.

Reddit marketing for SaaS: Insights from 117 brands

30 April 2026 at 19:00
Reddit marketing

Reddit is quickly becoming a powerful platform shaping how people discover and perceive brands. As AI search engines increasingly surface Reddit threads and comments, these conversations now influence visibility.

To understand this shift, I analyzed 117 SaaS brands on Reddit. People reveal what they really think there, which doesn’t always match polished marketing.

As communities shape brand perception, Reddit is no longer optional.

Here’s my analysis, plus how you can use Reddit to your advantage.

How I analyzed 117 SaaS brands: The methodology

My analysis of 117 brands across the SaaS industry started with identifying the verticals to address:

  • Project management and productivity (15 brands)
  • Customer relationship management (CRM) (10 brands)
  • Marketing automation (14 brands)
  • SEO and marketing intelligence (8 brands)
  • Design and creative (8 brands)
  • Development and IT operations (DevOps) (12 brands)
  • AI (12 brands)
  • Customer support and engagement (10 brands)
  • Analytics and data (10 brands)
  • Sales and revenue (8 brands)
  • Collaboration and communication (10 brands)

From there, I created a Google Spreadsheet with the brand names for each vertical. Then, I mapped out the following details for each brand:

  • Link: A direct link to the brand’s subreddit.
  • Brand subreddit: When the brand’s subreddit was created, the number of weekly visitors and the number of weekly contributors.
  • Subreddit features: The number of moderators and whether they were branded moderators.
  • Topics: Common topics in the subreddit, including tips, use cases, compliments, criticisms, and subscription cost.

Across all 117 brands, I analyzed over 300 Reddit threads, including brand mentions, sentiment, community engagement, and brand participation. 

Let’s dive into the key findings.

1. Reddit rewards authentic brands

One thing became clear early on: people respond to people, not corporate brands.

Brands run by moderators who were helpful, honest, and non-promotional were received more favorably than those using a polished, corporate tone. Redditors tended to ignore or downvote obvious marketing copy.

In general, redditors don’t want to be marketed to. They want real opinions and real experiences.

As a result, peer recommendations felt more credible than brand messaging. When redditors asked questions or shared frustrations, the most authentic answers came from other users.

When brands stepped in with scripted or promotional responses, they often struggled to gain traction.

However, when brands answered directly, acknowledged limitations, and used conversational language, responses improved. In some cases, brand moderators even earned upvotes and thanks.

2. Brands not on Reddit are missing out

Redditors talk about brands, whether or not they’re present on the platform. In many cases, brands simply aren’t there.

Thirty of the 117 brands I analyzed have no Reddit presence. Another 23 are on Reddit, but their subreddits are abandoned.

In several instances, users asked direct questions like: 

  • “Anyone here used this?”
  • “What should I use instead of X?”
  • “Best alternative to X?”

They received responses from other redditors sharing experiences, opinions, recommendations, and problems.

When brands aren’t there, the conversation continues without them. Over time, their reputation on Reddit exists outside the brand’s control.

Other negative outcomes can follow. When brands aren’t present, others can take their place.

In one instance, I found a community using a popular brand name that had nothing to do with the brand — its pinned message simply read, “DM if you want to buy this Community!” This shows how easily brand presence can be shaped or misrepresented.

Redditors are already discussing your brand. The only question is whether you’re part of that conversation.

3. Reddit is a customer research goldmine

Reddit is an incredible source of unfiltered customer insights.

If you want to know what drives people away, what people value, and how people compare tools, you’ll find the answers on Reddit.

Here are some ways Reddit helps with customer research.

Reddit captures feedback that traditional methods miss

On Reddit, you’ll find people asking questions and sharing:

  • Onboarding struggles.
  • Integration challenges.
  • Complaints about mobile usability.
  • Frustrations with AI features.
  • Confusion around updates.
  • Users building alternative tools.

Reddit users tend to say exactly what they think. This kind of honesty is hard to find anywhere else.

These insights are critical for improving SaaS products. Traditional feedback methods don’t always capture these comments — but Reddit does.

Reddit supports brand advocates

Your Reddit community is a good place for happy customers to advocate for your brand. For example, this Reddit post by Monday shares a brand ambassador program.

In the comments, some brand advocates share insights into their experience, helping elevate the post.

Some brands have self-sustaining Reddit communities

When discussing some community-led brands, redditors often highlight solutions to problems and help fill brand gaps. For example, I noticed users helped each other with troubleshooting, sharing fixes, and recommending integrations.

In some cases, these communities were almost fully self-sustaining, requiring little brand involvement.

Redditors highlight preferred competitor features and pricing frustrations

Across the topics I reviewed, redditors often expressed negative sentiment about pricing and suggested alternatives, especially for enterprise SaaS tools.

As a result, SaaS brands are often associated with soaring costs and limited pricing transparency, which can hurt perception. When users highlight competitor features, they surface gaps and alternative tools to consider.

Redditors share their actual use cases

Reddit attracts people who discuss how they use software. In my analysis, I observed that users shared:

  • Workflows
  • Screenshots and builds
  • Tutorials and guides

These posts and comments give brands insight into real use cases they can use to improve products.

Reddit is essential for brand visibility and perception

Reddit is no longer a side conversation. It’s where brand perception is shaped in real time.

Across the 117 brands I analyzed, conversations are happening on Reddit — even when the brand isn’t present. Increasingly, those conversations feed into AI search, influencing what people see, trust, and choose.

Smart brands shouldn’t ignore Reddit. They should track mentions, listen closely, show up where it matters, and treat Reddit as both a reputation channel and a product insight engine.

Google Preferred Sources now works for all languages

30 April 2026 at 18:06

Google’s Preferred Sources now supports all languages, not just English. “Preferred Sources is now rolling out globally in all supported languages,” Google wrote on its blog this morning.

“This feature gives you more control over the news you see on Search by letting you choose the outlets and sites you want to appear more often in Top Stories,” Google added.

In December, Google rolled out Preferred Sources globally, but it only supported English. Now it supports all languages worldwide.

Stats. Google added some interesting data including:

  • “Readers are twice as likely to click through to a site after marking it as a Preferred Source”
  • “People have already selected over 200,000 unique sites — from niche local blogs to global news desks”

Preferred Sources. Preferred Sources let searchers star publications in the Top Stories section of Google Search, and Google uses that signal to show more stories from those starred outlets. The feature entered beta in June, rolled out in the U.S. and India in August, and is now expanding globally.

How it works. You click the star icon to the right of the Top Stories header in search results. After that, you can choose your preferred sources – assuming the site is publishing fresh content.

Google will then start to show you more of the latest updates from your selected sites in Top Stories “when they have new articles or posts that are relevant to your search,” Google added.


Why we care. Earning traffic from Google Search is hard, and if you can get loyal readers to make your site a preferred source, that can help. Google said those users are twice as likely to click, which can drive more traffic.

So add the preferred source icon to your site and encourage users to sign up. You can make Search Engine Land a preferred source by clicking here.

From paid clicks to answer equity: Your new 2026 search strategy

30 April 2026 at 18:00
Atomic sandwich

The difference between a 2% margin and a 20% margin increasingly comes down to whether you’re renting attention or owning the answer.

For years, search rewarded the ability to buy visibility. That model is weakening.

As AI systems increasingly resolve queries without a click, the value shifts from traffic acquisition to answer formation.

When you move from buying clicks to engineering answers (i.e., structuring content so it can be surfaced, cited, and trusted by AI systems), you change what you own. Instead of renting placement, you build answer equity: durable inclusion in the outputs that shape decisions.

The goal isn’t to turn off paid search. It’s to stop relying on it as your primary source of demand. Over time, this can lower acquisition costs and reduce volatility, because you’re not competing for every impression.

An atomic sandwich

To operationalize this shift, you need a content structure that maximizes what AI systems can extract. Think of it as an “atomic sandwich.”

An atomic sandwich content structure shifts the focus from chasing traffic to maximizing intent density. Here’s how:

The atomic fact (top bun)

Most organizations treat their search budget like a high-interest payday loan.

You keep pouring cash into the paid bucket for that immediate hit of traffic, and it feels like you’re winning.

But the moment you stop feeding the meter, your brand disappears.

The forensic proof (the meat)

For many organizations, this isn’t just marketing inefficiency — it’s an organizational risk.

In the emerging Answer Economy, your rented audience is evaporating. Data from Seer Interactive (Sept 2025) shows paid CTR on informational queries has dropped 68% when Google’s AI Overviews are present.

You’re not just paying for clicks. In many cases, your paid traffic contributes to awareness that AI systems can later satisfy without requiring a click.

The structural directive (bottom bun)

The “box” has changed.

Here’s the structural leak in your balance sheet: to survive 2026, you must stop buying a crowd and start engineering the answer.

If your brand isn’t among the trusted sources behind the machine’s answer, your visibility — and influence — shrinks significantly.

The new “box”: From librarian to forensic auditor

We’ve moved from a search engine that directs users to a generative engine that validates information. Every dollar you spend on ads to cover a lack of E-E-A-T is money you’re burning.

The data is clear: appearing in search results is no longer a viable model on its own.

  • The organic collapse: A SISTRIX (March 2026) analysis found that when an AI Overview is present, position 1 CTR drops from 27% to 11% — a 59% decline.
  • The global impact: Ahrefs (Dec 2025) found AI Overviews correlate with a 58% lower average CTR for the top-ranking page.
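
As a quick sanity check on those figures (a minimal sketch, not part of either study), the SISTRIX drop from 27% to 11% works out to roughly 59%:

```python
def relative_decline_pct(before: float, after: float) -> float:
    """Percentage drop from `before` to `after`."""
    return (before - after) / before * 100

# SISTRIX: position 1 CTR falling from 27% to 11% when an AI Overview is present.
print(round(relative_decline_pct(27, 11)))  # 59
```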

The goal is no longer just to rank in search, but to be consistently included among the sources AI systems rely on.

Without trust, you’re paying for ghost impressions.

In the old box, you could survive by being loud. In the new box, you survive by being certain.

The search addiction cycle (why your org can’t quit)

Most companies are in organizational denial.

You see the cost of rented clicks rising and quality falling, but you’re too afraid to stop because you’ve neglected your information architecture and have no foundation. That’s a balance sheet liability.

  • Stage 1 — the vanity hit: Early paid search wins made you feel like a genius. You mistook traffic volume for business health.
  • Stage 2 — tolerance building: As the Answer Economy evolved, keywords got more expensive. Instead of fixing structural integrity, you upped the dose.
  • Stage 3 — the context-debt overdose: You’re paying for zombie facts — content an AI can summarize in seconds. Zero-click searches have surged to 69%. Your expensive awareness is consumed for free by AI.
  • Stage 4 — total dependency: Your marketing manager becomes a budget operator rather than a builder of durable demand. They aren’t building answer equity; they’re managing cash transfer to Google.

The forensic intervention: The 7-point organizational health check

Use this checklist in your next review to find where your Answer Equity is leaking.

  • The Information Gain test: Ask Gemini to summarize your page. If the summary adds nothing beyond what common results already say, you fail the information-gain bar described in Google’s patent. You have a zombie fact with zero value.
  • The entity audit: Does your brand have a verified Google Knowledge Graph ID? Without it, you’re not an asset — you’re just text.
  • Source of ground truth: Are you cited in AI Overviews? BrightEdge (Sept 2025) shows that without a citation, your visibility is effectively zero.
  • The faucet test: If you cut PPC spend by 20%, does lead volume drop 20%? If so, you have no foundation — you’re renting revenue.
  • Schema and provenance: Are you using Schema.org/Person to link experts to your brand? Unverified content is untrusted noise to a retriever.
  • The “meat” ratio: Review your top 10 posts. Do they include primary research? If not, they’re fodder for the AI’s top bun with no reason to click.
  • Machine-readable graph adoption: Is your team moving toward W3C RDF-star (RDF 1.2) or ISO/IEC GQL standards? These are the 2026 blueprints for verifying Answer Equity.
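
For the schema-and-provenance item, a minimal JSON-LD payload linking an author to a brand might look like the following. All names and URLs here are placeholders; the sketch emits the payload for a page template’s `<script type="application/ld+json">` tag.

```python
import json

# Hypothetical JSON-LD: a Schema.org Person entity tied to the publishing Organization.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",                      # placeholder author
    "jobTitle": "Head of Research",
    "worksFor": {
        "@type": "Organization",
        "name": "Example Brand",             # placeholder brand
        "url": "https://example.com",
    },
    "sameAs": [                              # provenance links a retriever can verify
        "https://www.linkedin.com/in/janedoe",
    ],
}

print(json.dumps(author_schema, indent=2))
```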

The recovery plan: From rented clicks to owned authority

1. Purge the zombie facts (the information gain protocol)

Stop rewarding word count. Every piece of content must deliver a “meat” layer — information gain a retriever can’t synthesize from the rest of the web. That’s how you reclaim your margins.

Dig deeper: Information gain in SEO: What it is and why it matters.

2. Build your “E-E-A-T engine” (the trust infrastructure)

Stop treating schema as a technical extra. It’s your trust score on the digital exchange. Ensure your authors have strong provenance so AI retrievers can instantly crawl and confirm your expertise.

Dig deeper: Decoding Google’s E-E-A-T: A comprehensive guide to quality assessment signals.

3. Measure ‘intent density’ (the scoreboard shift)

If your traffic drops but lead quality holds, you’re winning. Focus on users who bypass the summary because they need the deep, forensic expertise only you provide.

Dig deeper: Measuring zero-click search: Visibility-first SEO for AI results.

The final shift: Building your answer equity

The shift from renting an audience to owning the answer is the most significant strategic pivot your organization will make this decade. It moves you from a marketing expense to a balance sheet asset.

The paid trap offers a temporary high but leads to a fiscal dead end. Every dollar spent there is consumable — used once and gone when the auction ends.

When you move that capital into your information infrastructure, you stop paying for the privilege of being ignored. You start building a digital entity that owns its facts, earns trust, and controls its future in the Answer Economy.

Your first step: don’t boil the ocean.

Take your top-performing paid landing page and run the seven-point health check. If it’s a “zombie fact” environment, engineer information gain back into the page.

Stop asking for a ranking report; start asking for an entity audit.

The 2026 organization isn’t defined by how much it spends to rent an audience, but by how much it proves it owns the answer.

You have the blueprints. You have the data. Now stop funding the payday loan and start building answer equity.

What blog posts should you write to be mentioned in ChatGPT?

30 April 2026 at 17:30
Query expansion

Across 90 prompts we tested in ChatGPT, commercial prompts triggered web searches 78.3% of the time. Informational prompts did so just 3.1% of the time.

That gap changes what you should write if you want to appear in a ChatGPT answer.

ChatGPT doesn’t pull every response from the same place. Some answers come from training data; others use live web search — a behavior called query fan-out. The model expands your prompt into multiple background searches, then retrieves and synthesizes across those subtopics. If your page isn’t on those branches, it won’t be pulled in.

So the question is no longer just how to rank. It’s which pages open the fan-out door in the first place.

In our sample, informational pages didn’t. Read on to discover where the system went instead.

We tested 90 prompts across three industries: beauty, legaltech/regtech, and IT. We analyzed prompt intent, downstream query expansion, and the intent those expansions reflected.

Here’s the breakdown and the core finding: fan-out overwhelmingly aligned with commercial intent, even though purely informational prompts dominated the sample.

Why this question matters now and how query fan-outs come into play

Query fan-outs change the content game because the system isn’t limited to the literal prompt.

It expands the request into multiple background searches, then retrieves and synthesizes across those subtopics.

Fan-outs trigger parallel web searches tied to the initial prompt, creating opportunities for retrieval, mention, and link citation.

Multi-query expansion is a core design pattern in modern generative search systems. Google describes AI Mode this way: it breaks a question into subtopics, searches them in parallel across multiple sources, then combines the results into a single response.

That raises a strategic SEO question: should you invest more in top-of-funnel educational content, or in lower-funnel comparison, shortlist, and recommendation content?

This experiment framed that problem.

The objective was to test, across selected industries, where fan-out appears by intent category: informational, commercial, transactional, or branded.

The initial hypothesis was direct: informational prompts wouldn’t trigger fan-out, while commercial prompts would, and those fan-outs would stay at the same funnel level or move lower.

We found that ChatGPT-generated fan-outs are overwhelmingly associated with commercial intent.

Disclaimer: This experiment measures observed prompt expansion behavior in ChatGPT. Google AI Mode is cited only as context to show multi-query expansion as a broader pattern in generative search, not as proof of ChatGPT’s internal architecture.

The setup: what we tested

The core sample includes 90 numbered prompts, heavily weighted toward informational intent.

Prompt intent    Prompts    Share of sample    Prompts with fan-out    Fan-out rate
Informational    65         72.2%              2                       3.1%
Commercial       23         25.6%              18                      78.3%
Branded          1          1.1%               0                       0.0%
Transactional    1          1.1%               0                       0.0%

The sample skews heavily toward informational prompts, with some commercial ones and minimal branded and transactional queries.
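
The table above can be recomputed from a prompt-level log with a few lines of Python; the tuples below simply restate the table’s own counts:

```python
# (intent, total prompts, prompts that triggered fan-out) — the table's own numbers.
rows = [
    ("Informational", 65, 2),
    ("Commercial", 23, 18),
    ("Branded", 1, 0),
    ("Transactional", 1, 0),
]

total = sum(n for _, n, _ in rows)  # 90 prompts overall
for intent, n, fanout in rows:
    share = n / total * 100   # share of sample
    rate = fanout / n * 100   # fan-out rate within the intent group
    print(f"{intent}: {share:.1f}% of sample, {rate:.1f}% fan-out rate")
```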

We structured the experiment around the sectors in the brief: beauty/personal care, legaltech/regtech, and IT/tech.

The result: commercial prompts triggered almost everything

The main finding is clear.

Out of 90 prompts, 20 triggered fan-out. Of those, 18 were commercial and 2 informational.

Informational prompts made up about 10% of fan-out triggers (2 of 20). When they did trigger expansion, they were rewritten into more evaluative, solution-seeking subqueries.

In other words, 90% of fan-out-triggering prompts in the core sample came from commercial intent.

The contrast is stronger than the raw totals suggest. Commercial prompts triggered fan-out 78.3% of the time; informational prompts did so just 3.1%.

This supports the working hypothesis: in this sample, fan-out was overwhelmingly a commercial phenomenon.

Those 20 prompts produced 42 fan-out queries — an average of 2.1 per triggered prompt.

Of those 42 fan-out queries:

  • 39 were commercial.
  • 2 were branded.
  • 1 was informational.

Even when a prompt triggered expansion, the system usually shifted toward comparison, product evaluation, feature filtering, shortlist creation, or brand-specific exploration — not broad educational discovery.
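
Those figures check out with a quick tally (the counts are taken straight from the text above):

```python
triggered_prompts = 20  # prompts that produced at least one fan-out query

# Intent labels of the 42 observed fan-out queries.
fanout_intents = ["commercial"] * 39 + ["branded"] * 2 + ["informational"] * 1

print(len(fanout_intents))                      # 42 fan-out queries
print(len(fanout_intents) / triggered_prompts)  # 2.1 per triggered prompt
```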

Methodology: how we performed the analysis

The experiment used 90 prompts across three industries, mostly informational, with a smaller set of commercial prompts and minimal branded and transactional queries.

In the analysis, we have:

  • Selected a representative battery of prompts.
  • Identified the fan-outs.
  • Classified each fan-out by intent.
  • Observed distribution by prompt metadata.

The analysis then followed three steps:

  1. Each prompt was classified according to prompt-intent labels.
  2. We counted the prompts triggering fan-out (at least one).
  3. We inspected the observed expansion queries and their assigned fan-out intent labels.

That produced two distinct but complementary views:

  • A prompt-level view, asking whether a given prompt triggered fan-out at all.
  • A fan-out-query view, asking what kind of intent the downstream expansion actually took.

That distinction matters: the first shows which prompts open the fan-out path, while the second shows where the system goes once it opens.

Interpreting the results: fan-out tends to move down-funnel

The cleanest interpretation is that, in this sample, fan-outs behave less like open-ended topic expansion and more like assisted decision support.

Commercial prompts almost always opened the door.

Once they did, fan-outs usually stayed commercial.

The system expanded into comparisons, feature-based filtering, product lists, pricing-adjacent queries, and brand-specific evaluations.

A few examples make that concrete.

  • “Suggest the best accounting software for small business and explain why” expanded into a commercial comparison query around features.
  • “What are the top AI document management systems for lawyers?” expanded into multiple product-oriented legaltech queries.
  • “What are the best products for skin care?” expanded into a shortlist-style query around product categories and reviews.

The two informational exceptions are even more revealing than the rule.

  • “I need an open-source document management system. What can you suggest?” was labeled informational at prompt level, but the resulting fan-out moved into solution recommendation.
  • “AI tools for legal research and document automation” also moved into a clearly commercial/evaluative downstream query.

So, even when the prompt starts broad, fan-out often translates that breadth into a lower-funnel retrieval path.

What this means for content strategy

The takeaway isn’t to stop writing informational content.

It’s this: informational content alone is unlikely to align consistently with fan-out expansion, at least in this dataset.

If your goal is visibility in AI answers tied to product selection, vendor discovery, or option narrowing, you need stronger coverage of pages and passages that match those downstream commercial branches.

That may include:

  • best-of and shortlist pages
  • comparison pages
  • “which tool should I choose” pages
  • feature-led category explainers
  • alternatives pages
  • evaluation FAQs
  • recommendation-oriented paragraphs embedded inside broader educational pages

In practical terms, your content model shouldn’t be just ToFU (top of funnel) or BoFU (bottom of funnel). It should be ToFU with commercial bridges.

A broad article can still help, but it should include passages the system can easily reformulate into decision-support subqueries.

A purely educational piece that explains a category without naming products, tradeoffs, features, use cases, pricing logic, or selection criteria is much less likely to align with the fan-out paths seen here.

Put simply: Don’t just answer the obvious question — anticipate the next evaluative step the system is likely to generate in the background.

Limitations

This result is directional, not universal.

  • 90 prompts reveal a pattern, but not a stable law of AI retrieval behavior.
  • The prompt mix is uneven. Informational prompts dominate the sample, while branded and transactional prompts are barely represented. That means those findings aren’t proof of absence.
  • The dataset spans industries but isn’t normalized by brand, wording style, or use case. Some sectors may be easier to express in product-discovery language.
  • This is an observational analysis of recorded fan-outs, not a controlled platform-level test. It shows what happened in this prompt set, not how ChatGPT always behaves.
  • Google’s description of fan-out provides context, but this isn’t a Google AI Mode test. It’s a ChatGPT-focused prompt and fan-out dataset. The takeaway is strategic, not architectural.

What to test next

The next version of this experiment should isolate the question more aggressively and expand the dataset.

A follow-up should map triggered fan-outs back to specific content formats.

The goal isn’t just to confirm that commercial intent wins. It’s to identify which page templates and passage structures best cover the fan-out branches AI systems prefer.

How AI models ‘understand’ your brand

30 April 2026 at 17:05
AI brand

I keep hearing people say AI understands their brand. It doesn’t. Let’s get that out of the way first.

What it does is pattern-match at scale. It compresses your positioning, product, proof, and tone into a bundle of signals it can retrieve and remix at speed.

Those patterns come from two places:

  • Training: What the model absorbed historically.
  • Retrieval: What it can fetch at answer time from the live web and other sources.

So “AI SEO” isn’t a new channel. It’s a new representation problem: which version of your brand gets encoded, retrieved, and repeated.

Most brands are already in the game. They’re just not playing with purpose.

The internet is no longer a library

Classic SEO was a library problem. You published a URL. Google indexed it. A human searched and found it.

AI search is a conversation that stretches out the demand curve. Head terms still drive the majority of visibility, but, ever so slowly, more volume is moving into context-heavy prompts.

  • “With these constraints”
  • “Like this competitor but cheaper”
  • “Which tool fits a team like mine with these requirements?”
  • “Given what you know about me, recommend…”

Your job is to be the most relevant match inside a model’s memory and retrieval pipeline.

Not by being ranked. But by being represented.

AI doesn’t run on opinions. It runs on associations.

From keywords to entities to embeddings

Classic SEO competed for keywords. Then it shifted to entities. AI systems go one layer deeper. They turn entities into vectors.

Your brand becomes a coordinate in dimensional space. Close to some concepts. Distant from others. Pulled by whatever your content and mentions repeatedly associate you with.

If your brand is consistently associated with “enterprise analytics”, “real-time dashboards” and “data governance”, your vector lives near those clusters.

If your messaging sprawls into adjacent territory because someone got bored of writing about the same things, the vector spreads. Precision drops. The model still has a position for you. It’s just fuzzier, less confident, and easier to swap for a competitor with cleaner signals.
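
To make the vector idea concrete, here is a toy sketch: hand-set three-dimensional "meaning" vectors compared with cosine similarity, the standard proximity measure in embedding space. The numbers, axis labels, and brand positions are invented for illustration; real models use hundreds of learned dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity: near 1.0 = same direction in meaning space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-D meaning space; axes roughly stand in for
# [enterprise analytics, real-time dashboards, data governance].
focused_brand = [0.9, 0.8, 0.85]    # consistent messaging: tight position
sprawling_brand = [0.9, 0.1, 0.2]   # messaging drifted into other topics
query = [0.85, 0.9, 0.8]            # "enterprise analytics with live dashboards"

print(round(cosine(focused_brand, query), 3))    # close to 1.0
print(round(cosine(sprawling_brand, query), 3))  # noticeably lower
```

The fuzzier the position, the easier the swap: a competitor whose vector sits closer to the query takes the slot.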

Three layers of AI brand visibility

Before you “fix AI SEO,” identify which layer your brand is failing on. The same tactics don’t work everywhere.

Training layer

Your historical footprint. Press, blogs, documentation, reviews, every old thread on a forum you forgot existed.

You can’t fully control it.

But you can reduce fragmentation by finding and editing all possible past mentions (social profiles, directory listings, wikis, etc.) to create a consistent identity across the internet.

Understand the training layer by asking an AI chatbot to describe your brand with web search turned off.

Retrieval layer

Your live surface area. Indexed pages, product feeds, APIs. This is where the traditional technical SEO work of crawling, indexing, and rendering matters most. It defines what the AI system can access for citations.

Understand the retrieval layer by running branded-intent and market-category prompts daily in an LLM tracker and reviewing which sources are consistently cited.

Generation layer

That is the output: AI Overviews, AI Mode, ChatGPT, or wherever else your brand gets reassembled in front of an actual customer. Your brand will be written into the answer only if the model judges it essential.

So ask yourself, what unique, quotable, additive content forces the LLM to mention you?

Understand the generation layer by using the same LLM tracker data, but reviewing brand mentions within responses and their semantic associations.

Four mechanics that decide what AI says

Think of these as the forces quietly shaping your representation across the layers.

1. Consolidation (identity resolution)

AI systems merge different references to the same brand if it’s obvious they belong together.

Most brands don’t have one clear identity. They often have:

  • A brand name (spaced or cased inconsistently).
  • A legal name.
  • A domain name.
  • An abbreviation.
  • A legacy name.

Humans merge that automatically. Models don’t. They consolidate by pattern, not intent. Every inconsistent self-reference is a vote for fragmentation.

Allow your brand to be written five different ways, and you split your visibility signals five ways.
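
As a sketch of the mechanic (the brand name and normalization rules here are hypothetical), a crude consolidation pass shows how variant spellings can be resolved to one identity, and why every stray inconsistency makes that merge less certain:

```python
import re

CANONICAL = "Acme Analytics"  # hypothetical brand used for illustration

def normalize(mention):
    """Collapse casing, punctuation, and legal suffixes before matching."""
    m = mention.lower()
    m = re.sub(r"[,.\-]", " ", m)             # strip punctuation
    m = re.sub(r"\b(inc|llc|ltd)\b", "", m)   # strip legal suffixes
    return re.sub(r"\s+", "", m)              # collapse whitespace

def consolidate(mention):
    """Map a variant spelling to the canonical identity, or None."""
    return CANONICAL if normalize(mention) == normalize(CANONICAL) else None

for variant in ["ACME Analytics", "acme-analytics",
                "Acme Analytics, Inc.", "AcmeAnalytics"]:
    print(variant, "->", consolidate(variant))
print("Acme Dashboards ->", consolidate("Acme Dashboards"))  # different entity
```

A model consolidating by pattern has no such rulebook; it has to infer the merge, which is exactly why consistent self-reference matters.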

2. Co-occurrence (association formation)

Models learn what appears together:

  • Brand + category
  • Brand + use case
  • Brand + audience
  • Brand + competitor

Repeat the right pairings, and the association strengthens. Be inconsistent, and it weakens. It’s genuinely that simple.
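
A toy sketch of how those pairings accumulate (brand, terms, and snippets all invented for illustration): count which terms appear together across text. Real systems learn associations statistically at vastly larger scale, but the mechanic is the same.

```python
from collections import Counter
from itertools import combinations

# Invented corpus snippets mentioning a hypothetical brand.
snippets = [
    "Acme Analytics is an enterprise analytics platform for data teams",
    "For enterprise analytics, data teams often compare Acme Analytics and RivalCo",
    "Acme Analytics adds real-time dashboards for data teams",
]
TERMS = ["acme analytics", "enterprise analytics", "data teams",
         "real-time dashboards", "rivalco"]

pairs = Counter()
for text in snippets:
    present = [t for t in TERMS if t in text.lower()]
    for a, b in combinations(sorted(present), 2):
        pairs[(a, b)] += 1

# The pairing repeated in every snippet becomes the strongest association.
print(pairs.most_common(3))
```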

3. Attribution (who says it, where)

Models track who is being described, by whom, in what context.

Your own site is one layer. Third-party mentions are another. High-trust sources carry more weight.

Not because of “authority” in the classic SEO sense, but because they appear frequently inside reliable contexts in the training data and retrieval corpora. Similar outcome. Different mechanisms.

4. Retrieval weighting (what gets used in AI answers)

When generating answers, AI systems decide which information to use. That decision depends on clarity, relevance, uniqueness, and ease of extraction.

If key facts are buried in narrative copy, implied through metaphor, or scattered across sections, the model will simply pull from somewhere else.

On the other hand, if you repeat them, structure them, and make them explicit, you are more likely to be chosen by the model.

You’re not writing poetry, you’re building a graph

In your content, on-page and off-page, make the core entities unmissable. Your brand. Your products. Your categories. Your audience. Your differentiators.

Craft a clear, consistent, canonical positioning that the machine can’t misread by creating a canonical brand bio:

[Brand] is a [market category] for [audience] who need [use case], differentiated by [proof].

Then, honestly ask yourself if your answer could also describe your competition. Or better, ask AI that question. If the answer is yes, rewrite it until it’s unmistakably you.

Then roll out that positioning everywhere. On-page with “retrieval-ready” chunks, in structured data, in “sameAs” references, industry publications, partner sites, user reviews, community discussions, social posts. 

Repeat key associations deliberately across pages until it feels excessive. Reduce unnecessary variation in terminology. Then the associations strengthen. Are reinforced. Compound.

Beware brand drift, where inconsistencies allow misrepresentations, and a lack of information allows hallucination to creep in. Police all the edges. Consolidate or kill the pages that introduce conflicting descriptions of your brand.

This is not about gaming AI. It is about reducing entropy.

If that sounds boring, good. The brands that win the AI era are not going to win it with cleverness. They are going to win it with discipline.

Because if answers are inconsistent across sources, your brand won’t be cleanly encoded. And the version of you that AI systems are quietly passing along to customers won’t be the one you intended.

First 5 steps to AI brand visibility

  • Write your canonical brand bio: Lock in spacing, casing, and abbreviation rules for the brand name, plus clear positioning.
  • Implement graph-based schema: Define relationships between your brand (consolidated by sameAs) and other key entities.
  • Make proof easy to quote: Ensure awards, benchmarks, customer numbers, policies, and all other notable brand information are explicit and extractable.
  • Fix historical identity fragmentation: Clean up past mentions and enforce canonical positioning everywhere possible.
  • Repeat key associations with intention: Brand + category, use case, audience, vs competitor. Not only on your own site, but also build coverage on high-trust third parties.
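
The schema step above can be sketched as a JSON-LD Organization block, generated here from Python. Every name, URL, and profile in it is a placeholder to replace with your own canonical details:

```python
import json

# Hypothetical values throughout: substitute your canonical bio and profiles.
brand_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",        # one spelling and casing, everywhere
    "legalName": "Acme Analytics, Inc.",
    "url": "https://www.example.com",
    "description": ("Acme Analytics is an enterprise analytics platform "
                    "for data teams that need real-time dashboards, "
                    "differentiated by built-in data governance."),
    "sameAs": [                      # consolidates scattered profiles into one entity
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Example",
        "https://github.com/example",
    ],
    "knowsAbout": ["enterprise analytics", "real-time dashboards",
                   "data governance"],  # key associations, repeated on purpose
}

print(json.dumps(brand_schema, indent=2))
```

Note how the `description` is the canonical bio template filled in, and `knowsAbout` repeats the same associations your on-page content should reinforce.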

It’s not about you

If AI systems can’t confidently represent your brand, they will default to a safer option. Usually, it’s a competitor with cleaner signals. Not because that competitor is “better”. Because that competitor is easier for the machine to use.

AI doesn’t need to understand your brand perfectly. It needs to approximate it well enough to recommend you. Your job is to control that approximation through consistency, structure, and distribution.

Not by publishing more. By making your brand impossible to misunderstand.

Google AI Max gets new controls, Shopping rollout and travel consolidation

30 April 2026 at 17:00
What 23 tests reveal about AI Max performance in Google Ads

Google is doubling down on AI-driven ads just as search behavior shifts toward conversational queries, giving advertisers more automation while trying to preserve control.

What’s new.

AI Max expands beyond Search: Now rolling out to Shopping campaigns and travel-specific formats, broadening reach across more advertiser types.

AI Brief (powered by Gemini): A new interface that lets advertisers steer AI using natural language inputs.

Text disclaimers + URL automation: Compliance-friendly updates to pair with automated landing page selection.

Why we care. Google is making AI Max a core layer across Search, Shopping and Travel, meaning automation will increasingly determine how ads are matched to user intent. This update expands reach into more conversational, high-intent queries that traditional keyword strategies miss, helping brands capture demand earlier in the journey.

At the same time, tools like AI Brief and new compliance features give advertisers more control over messaging and targeting, reducing the risk of fully automated campaigns feeling like a “black box.”

Shopping gets smarter. For retailers, AI Max for Shopping uses Merchant Center data to generate more adaptive ads that can respond to long-tail and exploratory queries, helping brands appear earlier in the discovery phase rather than only at the point of purchase. The rollout is positioned as a simple upgrade for existing Shopping campaigns, suggesting Google wants rapid adoption.

Travel gets consolidated. Travel advertisers get a consolidation play. Search Campaigns for Travel bring previously fragmented formats into a single interface with unified reporting and integrated AI Max capabilities. The move reduces operational complexity while reinforcing Google’s push toward centralized, AI-driven campaign management.

More control with AI Brief. The most notable addition is AI Brief, which attempts to solve a long-standing advertiser concern: lack of compliance control in automated systems. Advertisers can define messaging rules, specify which queries to prioritize or avoid, and shape how different audiences are addressed. The system then generates previews, allowing feedback before campaigns go live.

Automation meets compliance. Google is refining how traffic is directed to websites. Final URL expansion uses AI to select the most relevant landing page for each query, and the new text disclaimer feature ensures required legal messaging remains intact even when automation is active. This signals a push to make AI usable in more regulated industries without sacrificing compliance.

The bottom line. AI Max is evolving from a Search add-on into a foundational layer across Google Ads, combining automation, cross-format reach and advertiser input to adapt to a more AI-driven, conversational search landscape.


AI sees your brand as math, not messaging

30 April 2026 at 16:30
AI brand math

AI may not see your brand the way you think it does, according to Scott Stouffer, co-founder and CTO at Market Brew.

Brands still publish content, optimize pages, build authority, and follow SEO best practices. But that may not be enough anymore.

Search has moved away from a simple battle over keywords, links, and page-level signals. It’s now shaped by meaning, intent, embeddings, and retrieval, Stouffer said during his SEO Week presentation.

In legacy SEO, a page could rank lower and still exist in the search results. In AI-driven systems, the first question isn’t whether you rank. It’s whether you’re ever retrieved.

“If you’re not retrieved, you do not exist to AI,” Stouffer said.

Your brand already exists inside AI systems as a mathematical object. You may call yourself one thing. Your homepage may say another. Your brand guidelines may promise a clear position. But AI systems build their own view of your brand from the content you have published.

That computed version of your brand may be different from the one you intended to build.

Retrieval now matters before ranking

AI visibility begins before ranking, Stouffer said.

In traditional SEO, marketers focus on positions — first, third, or tenth. But AI systems apply a filter earlier. Before anything is ranked, the system determines which content is eligible for consideration.

That is retrieval.

When a user asks a question, the system pulls a limited set of passages or chunks that best match the query. Those passages define the answer space.

If your content isn’t included, you get no impressions, no clicks, and no visibility at all, Stouffer said.

The real shift is moving from exclusion to inclusion.

“You don’t lose. You just never entered the game,” Stouffer said.

AI does not see pages the way SEOs do

AI systems don’t treat a webpage as one clean unit, Stouffer said. They don’t evaluate pages as whole objects or prioritize layout, structure, or formatting.

Content is broken apart. A page becomes chunks: passages, sections, and individual ideas.

Each chunk is evaluated independently. A paragraph deep in a guide can compete on its own. A single sentence can be selected if it aligns closely with the query.

This shifts competition from page versus page to passage versus passage.

Most of a page may never be considered. Only the most aligned chunks are evaluated.
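
A minimal sketch of that splitting step, assuming a simple overlapping word-window strategy (production systems vary, splitting on headings, paragraphs, or token counts instead):

```python
def chunk(text, max_words=40, overlap=10):
    """Split text into overlapping word windows: the rough unit a
    retrieval system scores, instead of the whole page."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

page = " ".join(f"word{i}" for i in range(100))  # stand-in for a long guide
chunks = chunk(page)
print(len(chunks), "chunks, each competing on its own")
```

The overlap keeps an idea that straddles a window boundary intact in at least one chunk.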

Meaning becomes math

Each chunk is converted into a vector, Stouffer explained.

This vector represents meaning as a position in a high-dimensional space. It captures context and intent rather than exact wording.

Two pieces of content can use different words but sit close together if they express the same idea. Others can share keywords, but sit far apart if they represent different meanings.

“It’s comparing meaning, not wording, measuring distance, not keyword overlap,” Stouffer said.

Relevance is determined by proximity. The closer a chunk is to a query in this space, the more likely it is to be retrieved.

Your content forms clusters

As chunks are mapped into this space, they group together.

Content with similar meaning forms clusters, even across different pages. These clusters reflect how AI systems understand topics.

This understanding comes from how content naturally groups by meaning, not by site structure or labels, Stouffer said.

If content is consistent, clusters become dense and clear. If content is scattered, clusters become fragmented.

What matters is not what a brand intends to say, but what its content actually communicates.

The centroid is your brand to AI

Within these clusters, there is a center point — the centroid, Stouffer said.

The centroid represents the average position of all related content. It reflects the site’s core meaning.

Every page and paragraph influences that position. Consistent content creates a clear, stable centroid. Inconsistent content dilutes it.

That centroid is how AI understands your brand.

Not your homepage. Not your messaging. Not your brand guidelines.

Your centroid is the combined signal of everything you have published, Stouffer said.

“Your centroid doesn’t care about intent. It reflects the math of everything you’ve ever published,” Stouffer said.
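
The centroid itself is just an average. A toy 2-D sketch (real embeddings have hundreds of dimensions; all numbers here are invented) shows how consistent content keeps every chunk near that average, while scattered content pulls it apart:

```python
import math

def centroid(vectors):
    """Average position of all chunk vectors: the site's core meaning."""
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

def distance(a, b):
    """Euclidean distance between two points in meaning space."""
    return math.dist(a, b)

# Toy 2-D meaning space.
consistent_site = [[0.9, 0.1], [0.88, 0.12], [0.92, 0.08]]
scattered_site = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]

for name, site in [("consistent", consistent_site), ("scattered", scattered_site)]:
    c = centroid(site)
    spread = max(distance(v, c) for v in site)
    print(name, "centroid:", [round(x, 2) for x in c], "max drift:", round(spread, 2))
```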

Alignment beats isolated optimization

This changes how content should be evaluated.

The key question isn’t whether a page is optimized in isolation. It’s whether it aligns with the rest of the site.

Each page either strengthens the centroid or pulls it in a different direction.

“Optimization without alignment creates drift, and drift is what breaks consistency,” Stouffer said.

As drift increases, the site becomes harder for AI systems to interpret and retrieve.

“You don’t write pages, you project meaning,” Stouffer said.

Retrieval starts with proximity

When a query is entered, the system converts it into a vector, Stouffer said.

It then searches for the closest matches in meaning space.

This includes both individual chunks and the centroids that represent broader content clusters.

If your content is close enough, it enters the candidate set. If it is too far away, it is excluded.

Only after this stage do traditional ranking signals apply.

Content quality, links, and structure matter — but only if the content is first retrieved.

If not, those signals are never evaluated, he said.
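
The two stages can be sketched with toy vectors and a made-up proximity threshold. The chunk with the strongest traditional signals never gets ranked, because it is filtered out before ranking signals are evaluated:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# (chunk label, toy meaning vector, traditional ranking score such as links)
chunks = [
    ("close-match, weak signals", [0.9, 0.1], 0.2),
    ("close-match, strong signals", [0.85, 0.15], 0.9),
    ("far-off, strongest signals", [0.1, 0.9], 1.0),
]
query = [0.95, 0.05]
THRESHOLD = 0.9  # proximity gate: anything farther away is never considered

# Stage 1: retrieve by proximity. Stage 2: rank survivors by other signals.
candidates = [c for c in chunks if cosine(c[1], query) >= THRESHOLD]
ranked = sorted(candidates, key=lambda c: c[2], reverse=True)

print([label for label, _, _ in ranked])
# The chunk with the strongest traditional signals is absent:
# it was excluded before ranking signals were ever evaluated.
```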

Most brands look too similar to AI

Many brands follow similar strategies, use the same sources, and produce similar content.

As a result, their centroids converge in the same region, Stouffer said.

He described this as cluster collision.

When multiple brands occupy the same space, AI systems don’t select all of them. They choose a few and ignore the rest.

“They’re not failing best practices. They’re colliding with everyone else using them,” Stouffer said.

Distinct meaning is the new advantage

Producing more content or improving existing content isn’t enough. If content remains similar in meaning, it remains in the same space.

“You need a distinct centroid,” Stouffer said.

A clear, separate position in meaning space reduces competition and increases the likelihood of retrieval.

SEO becomes a control loop

This is not a one-time adjustment.

Every piece of content shifts the centroid.

That requires an ongoing process of measurement and adjustment, Stouffer said.

Teams need to monitor alignment continuously and correct drift as it occurs.

Over time, this creates a more stable system where new content reinforces the existing structure.
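
One way to sketch that control loop (toy vectors, hypothetical threshold): score each draft against the site centroid before publishing, and flag drift before it compounds:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

site_centroid = [0.9, 0.1]  # computed from existing published chunks (toy values)
DRIFT_LIMIT = 0.8           # minimum similarity to the centroid before publishing

def review(draft_vector):
    """Pre-publish gate: flag drafts that would pull the centroid off course."""
    similarity = cosine(draft_vector, site_centroid)
    return "publish" if similarity >= DRIFT_LIMIT else "revise: off-centroid"

print(review([0.85, 0.2]))  # aligned draft
print(review([0.2, 0.95]))  # drifting draft
```

Each published piece then shifts the centroid slightly, so the reference point itself gets recomputed as the loop runs.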

The visibility problem is really an observability problem

Most teams can’t see how their content exists in this system.

They can’t see clusters, centroids, or distances — or why content is excluded.

So they rely on trial and error, Stouffer said.

They publish, optimize, and wait for results. When nothing changes, they try something else.

Without visibility into the system, they react to outcomes rather than understanding causes.

Is AI seeing the brand you think you’ve built?

Your brand already exists as a mathematical object inside AI systems, Stouffer said.

You do not get to choose that.

You only choose whether to measure and control it or let it drift.

AI does not see your brand the way you describe it. It sees the aggregate meaning of your content.

“If you control your centroid, you control your visibility,” Stouffer said.

From links to brand signals: The new SEO authority model

30 April 2026 at 16:00
Links to signals

For more than two decades (nearly as long as I’ve been in SEO), backlinks have been core to SEO. Google’s PageRank changed search by using backlinks as a proxy for trust.

A link wasn’t just a pathway; it was a vote. The more votes you had and the more authoritative the voters were, the higher you ranked.

But as Google and AI systems matured, entity-based understanding emerged. AI models became better at understanding content, context, and credibility without always needing a hyperlink as a crutch.

Today, visibility isn’t driven solely by links. It’s strengthened by the broader signals your brand has earned: how often it’s mentioned, cited, and trusted across authoritative sources.

Search engines and AI platforms now prioritize these signals.

AI’s role in reducing reliance on links alone 

Modern AI systems can evaluate trust and expertise in ways that were impossible a decade ago. AI has changed how authority, trust, and expertise are measured. It can now assess authority through signals once approximated mainly by backlinks.

AI can:

  • Identify entities and map their relationships across the web.
  • Interpret sentiment and contextual relevance.
  • Detect manufactured link patterns with near-perfect accuracy.
  • Understand brand prominence without a single hyperlink.
  • Evaluate reputation signals from reviews, mentions, and citations.
  • Cross-reference information across multimodal sources.

A brand mention in a reputable publication—even without a link—reinforces entity authority. Consistent expert citations validate expertise. These signals can’t be faked.

The result is a new era where links still matter, but they’re no longer the only star. Authority is now a network of signals.

The rise of entity‑first SEO

As Google relies less on raw link signals, something else has grown in importance: entities — the people, brands, organizations, and concepts behind the content. Google increasingly showcases brands based on who they are and how they’re discussed across the web, alongside their backlink profile.

At its core, entity-first SEO means Google and LLMs are mapping relationships: identifying brands, understanding what they’re known for, and evaluating how they’re referenced in trusted sources.

For example, an outdoor gear company with a modest backlink profile began appearing in AI Overviews for “best hiking backpacks” after repeated mentions in Reddit threads, YouTube reviews, and a few expert roundups. Only some mentions included links, but the brand appeared consistently in trusted, topic-relevant conversations. Google interpreted those unlinked mentions as proof of real-world relevance.

If your brand consistently appears in a positive light in topic-related conversations, AI sees that as proof you’re relevant and trusted. The brands that win now have the strongest entity presence.

PR‑style links + editorial = off-page powerhouse

PR-style links and editorial coverage are earned mentions in reputable publications — the kind that signal real-world authority, not algorithmic manipulation.

Why editorially earned links outperform volume-based link building

Old-school, volume-based link building is less effective as AI improves at detecting manufactured patterns. But high-quality, relevance-driven link building—especially when paired with PR signals—is more valuable than ever.

Editorial PR links from journalists, analysts, and industry voices who choose to reference a brand because it’s newsworthy or authoritative reflect genuine credibility. They’re the digital equivalent of a trusted expert saying, “This brand matters.”

Authority-Based Link Building | Volume-Based Link Building
Strong editorial context | Thin or generic content
High topical relevance | Limited relevance
Natural language anchors | Over‑optimized anchors
Trusted authors and publications | Sites with weak editorial oversight
Clear entity associations | Obvious link‑selling footprints

AI doesn’t just look at the presence of a link; it evaluates the context around it. Models are trained to reward authenticity. Search aims to reward the most authoritative entities.

Creating multi‑signal authority

The real power comes from a combination of signals. As search has evolved, quality has become more powerful than quantity.

Now AI is driving another shift. You can grow traditional, relevance-focused links alongside new brand signals.

A single earned placement done well can generate:

  • Brand mentions that reinforce entity recognition.
  • Citations that validate expertise.
  • Positive sentiment that strengthens trust.
  • Topical associations that build relevance.
  • Valuable hyperlinks for foundational growth.
  • Entity reinforcement across the Knowledge Graph.
  • Secondary coverage as other sites pick up the story.

This is multi-signal authority — holistic credibility that AI systems are designed to reward. It tells Google and LLMs: you’re known, trusted, and relevant. You need to be part of the conversation.

As powerful as PR signals are, they’re only one part of a larger authority ecosystem. AI evaluates brands through a multi-signal trust profile that determines visibility.

Breaking down the new authority stack

Authority is now defined by the breadth and consistency of signals that validate who your brand is across the web. It’s evaluated the way humans evaluate it: through reputation, recognition, expertise, and prominence.

Authority is no longer a single metric tied to links. It’s a network of signals, including:

  • Brand strength: Rising branded search volume, navigational queries, and direct traffic patterns that signal real-world recognition. 
  • Entity validation: Consistent NAP details, schema markup, and unified profiles help confirm your brand and connect references back to the same entity.
  • Topical authority: Depth of content, subject-matter experts, and external collaboration to show your brand is genuinely knowledgeable about the topics you discuss.
  • Reputation signals: Reviews, citations, third-party mentions, and sentiment patterns that reflect trustworthiness. 
  • PR signals: News coverage, interviews, podcast appearances, and industry mentions that reinforce your brand’s relevance.

Together, these signals create a holistic authority profile that AI can interpret. The brands that win have the strongest multi-signal authority footprint.

Brand strength is the silent factor

Brand strength quietly outweighs other signals. The data shows it: brands in the top 25% for web mentions average 169 AI Overview citations, while the next quartile averages just 14.

That’s not a small gap.

This aligns with Ahrefs’ analysis of ~75,000 brands. The strongest correlations with appearing in AI Overviews were branded web mentions, branded anchors, and branded search volume—all signals of real-world brand presence.

Consider two competing fitness apps. One has thousands of backlinks from generic listicles. The other is frequently mentioned in Reddit threads, YouTube reviews, and TikTok “day in the life” videos. The second app appears consistently in AI Overviews because AI sees it as part of the real-world fitness conversation, not just the link graph.

The brands dominating AI Overviews have the strongest brand presence, supported by consistent links, mentions, citations, and contextual relevance.

Predictions for 2027 and beyond

By 2027, link building will undergo radical change. The shift from a numbers game to a confidence game will become the norm, and Share of Authority or Voice will be the new metric.

Here are my top three predictions for what’s next.

Prediction 1: Visibility will be measured by a “Share of Model” metric. AI rewards signal density, not link density.

Link building will expand to include “seeding” information in AI training hubs. Instead of mass outreach to low-tier blogs, strategies will target user-preferred sources like Reddit, LinkedIn, Substack, and GitHub, which LLMs use for high-quality, human-led data.

Brands that appear most often in training data, trusted sources, and high-authority conversations will earn visibility. This is the next step in a world where signals determine authority.

Traditional Metric | Predicted Metric | Why the Change
Backlink Count | Entity Citation Frequency | AI values brand mentions as much as links
Domain Authority (DA) | Source Reliability Score | Focus on the trustworthiness of the source
Anchor Text | Semantic Context | AI reads the intent around the link, not just the text
PageRank | Share of Model (SoM) | Success is being the AI’s preferred answer

Prediction 2: Brands will act as primary newsrooms as proprietary data generates the strongest authority signals.

As AI systems rely more on multi-signal authority, proprietary data becomes one of the most powerful assets a brand can produce. Data isn’t just content — it’s a signal engine. It naturally earns the signals AI trusts most:

  • PR coverage.
  • Citations.
  • Mentions.
  • Social discussion.
  • Co‑occurrence with authoritative entities.
  • Long‑tail references in future content.

Traditional link building still provides foundational authority, but data-driven assets are the accelerant. They create high-trust, high-context signals that AI models weigh heavily.

On a platform where visibility depends on how often your brand appears in authoritative contexts, proprietary data is the most scalable way to increase your Share of Authority.

Prediction 3: Unlinked brand mentions will become one of the most valuable authority signals

Traditional contextual links will continue to build the foundation. But beyond that, search engines will track every time your brand appears alongside specific topics. Links will need “semantic context.”

Every mention of your brand in news, podcasts, reviews, forums, social posts, and roundups becomes a signal that strengthens your entity.

AI isn’t replacing link building — it’s expanding it

The future of off-page SEO isn’t a battle between traditional link building and AI-driven signals. It’s the realization that links were always just one signal. Now search engines can understand dozens more.

Traditional link building still matters. It provides the foundational authority, crawl paths, and topical relevance every site needs.

AI has widened the field. It can read context, interpret sentiment, understand entities, and evaluate brand presence.

These signals don’t replace links — they amplify them.

Links built the foundation.

Signals build the skyscraper.

The latest jobs in search marketing

1 May 2026 at 22:47
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • At NerdWallet, we’re on a mission to bring clarity to all of life’s financial decisions and every great mission needs a team of exceptional Nerds. We’ve built an inclusive, flexible, and candid culture where you’re empowered to grow, take smart risks, and be unapologetically yourself (cape optional). Whether remote or in-office, we support how you […]
  • Job Description Attention: Kapitus is aware that individuals posing as recruiters may be communicating with job seekers about supposed positions with Kapitus. Kapitus has received reports that the content and method of communication can vary, but messages may contain requests for payment (e.g., fees for equipment or training) and/or for sensitive financial information. Kapitus will […]
  • Job Description Benefits: Competitive salary Health insurance Opportunity for advancement Paid time off Training & development Digital Marketing Specialist (SEO Focus) Company: Direct Clicks Inc. Job Type: Full-Time or Hourly Based on Experience Location: Remote Candidates must be located within driving distance of Roseville, Minnesota for occasional in-person team meetups. About Direct Clicks Inc. Direct […]
  • Remote (Canada-wide) · Full-time · $75,000–$90,000 CAD About Webserv Webserv is a digital marketing agency that helps mission-driven businesses — particularly in behavioral health — grow through SEO, paid media, and conversion-focused web strategy. We’re a tight-knit team that values curiosity, ownership, and the kind of work that actually moves the needle for our clients. […]
  • The Basics: Growth Plays is hiring a Senior SEO/AEO Manager based in the US, Canada or LATAM, to support and manage ongoing customer engagements and relationships. You’ll act as the main point of contact for your clients, and focus on building relationships and trust while driving strategy-aligned growth for the long term. This role is […]
  • Company: Local Leads DigitalLocation: RemoteJob Type: Contract, 1099Compensation: 100% Commission, Uncapped Job SummaryLocal Leads Digital is hiring an Independent Sales Representative to help grow adoption of the L.O.C.A.L. Tool, our local SEO fulfillment solution. This is a fully remote, 1099 independent contractor opportunity for someone who is confident in outbound sales and comfortable building their […]
  • We Are: NoGood is an award-winning, tech-enabled growth consultancy that has fueled the success of some of the most iconic brands. We are a team of growth leads, creatives, engineers and data scientists who help unlock rapid measurable growth for some of the world’s category-defining brands. We bring together the art and science of strategy, […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • Director, Global Digital Marketing, Integrated Marketing Communication (IMC) Team Position Overview The Director of Digital Marketing is at the center of 10x Genomics’ digital marketing engine, delivering measurable business impact and innovating across channels to ensure leadership in scientific markets. This position reports to the Vice President of Integrated Marketing Communications as is responsible for […]
  • This role offers you the opportunity to deepen your SEO expertise and develop your leadership skills within a tight-knit agency team. Sr. SEO Analysts lead our client relationships and bring our outcome-driven strategies to life. They are responsible for delivering value and results to our clients through their high-quality work, commitment to building deep SEO […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Join Our Vibrant Team at Food Trends Catering & Events! About Us: For over two decades, Food Trends Catering & Events has been redefining the art of off-premises catering from our midtown Manhattan headquarters. Whether it’s a cozy dinner for ten or a dazzling gala for over 1,000 guests, we deliver excellence with every bite […]
  • Digital Strategist, Paid Social Media Digital Media | New Media & Internet | Marketing & Advertising Hybrid | New York, NY | United States (U.S.) $65,000-$90,000 Base + Benefits A leading, innovative digital media company seeks an experienced Digital Strategist, Paid Social Media to successfully manage paid social campaigns, analyze campaign performance, and report campaign […]
  • Job Description Job Description About the Role This is not a traditional job. It’s a fast-track growth program designed for digital paid media professionals who want to accelerate into leadership and diversify into an omni-channel marketer. At WITHIN, we’re looking for professionals with 2+ years of hands-on experience in paid media (paid social and/or paid […]
  • Job Description Job Description Description: Director of AI Demand Generation Guideline | Toronto, New York, or Chicago Full-Time | On-Site / Hybrid About Guideline Guideline is a global provider of ad intelligence and media plan management technology, powering the strategy, planning, and management of advertising buying and selling for the world’s leading enterprises. Our solutions […]
  • Job Description Job Description Company Description OUR STORY: Equinox Group is a high growth collective of the world’s most influential, experiential, and differentiated lifestyle brands. We restlessly seek what is next for maximizing life – and boldly grow the lifestyle brands and experiences that define it. In addition to Equinox, our other brands, SoulCycle and Equinox Hotels are all recognized […]

Other roles you may be interested in

Manager, SEO, KINESSO (Hybrid, New York, NY)

  • Salary: $90,000 – $95,000
  • Manage senior analysts and help analysts grow into the next level of their career.
  • Translate clients’ business goals and marketing objectives into successful search engine optimization strategies.

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $115,000 – $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Senior Marketing Manager, Vanguard Renewables (Remote)

  • Salary: $120,000 – $182,000
  • Work closely with CMO and RNG team to develop and execute a strategic marketing roadmap aligned with business priorities.
  • Serve as the primary marketing liaison for RNG team, acting as the connective tissue between the Marketing and Commercial groups.

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Senior SEO Manager, Lunar Solar Group (Remote)

  • Salary: $80,000 – $100,000
  • Lead strategy, execution, and deliverables across 4–6 client accounts independently
  • Own end-to-end SEO strategy and execution across all core deliverables and processes

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Local Search & Listings Manager, TurnPoint Services (Remote)

  • Salary: $80,000 – $90,000
  • Own the strategy and governance for local search visibility across all business locations.
  • Develop optimization frameworks and standards for Google Business Profiles and other listing platforms.

Senior Branding Manager, rednote (Hybrid, New York, US)

  • Salary: $228,000 – $320,000
  • Define and drive rednote’s global brand strategy, shaping its positioning across key international markets
  • Lead integrated marketing initiatives end-to-end, ensuring alignment across creative development and media execution

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly, so make sure to bookmark this page and check back.
