Google AI Max drives revenue but at a higher cost, according to Smarter Ecommerce’s Mike Ryan, who analyzed 250+ campaigns. Outcomes vary, and much more testing is still needed.
Why we care. AI Max isn’t a minor update. It’s Google’s most significant reimagining of Search campaigns in years, shifting away from keyword syntax toward pure intent matching. For you, that’s both an opportunity (possible growth) and a risk (an efficiency tradeoff).
By the numbers. The results of the analysis:
Median revenue: +13%
Median CPA: +16%
ROAS range: +42% to -35%
Advertisers who activate AI Max typically see 14% more conversions or conversion value at a similar CPA or ROAS, rising to 27% for campaigns still relying on exact and phrase match keywords, Google says.
Turning on AI Max is essentially a coin toss: you may see a lift, but efficiency likely won’t follow, Ryan concluded.
What AI Max actually is. Rather than forcing Search campaigns into Performance Max, Google went the other direction — bringing PMax-style automation into classic Search. The result is three core features:
Search Term Matching (broad match expansion plus keywordless targeting),
Text Customization (dynamic ad copy), and
Final URL Expansion (automated landing page selection).
Four pitfalls Smarter Ecommerce identified:
Broad match cannibalization: Up to 63% of the time, AI Max recycled existing coverage rather than finding new queries.
Competitor hijacking: In one account, AI Max scaled so aggressively into competitor brand terms that it consumed 69% of total Search impressions.
Reporting overload: Search term and ad combination reports can run to tens of thousands of rows, making manual auditing nearly impossible without automation.
Search Partner Network blowouts: One campaign saw half a million monthly impressions land on SPN at a 0.07% conversion rate, versus 3.04% on standard Google Search.
Between the lines. Google’s 14% uplift stat conspicuously excludes retail — an omission Ryan flags as significant for ecommerce advertisers. There’s also a deeper irony: you’re most likely to adopt AI Max if you’re already running Broad Match, DSA, and PMax — yet Google says those accounts will see the lowest incremental benefit.
What’s next. In a conversation with Ryan, Google Ads Liaison Ginny Marvin confirmed that Google plans to deprecate Dynamic Search Ads and migrate the technology into AI Max for Search. No firm timeline was given, though past Google deprecations often run about a year from announcement.
Ryan recommends activating AI Max’s keywordless features in your existing Search campaigns now and beginning to wind down DSA — not migrating it to PMax.
Ryan’s verdict is cautious optimism. About 16% of advertisers are testing AI Max, and few have gone all in. Start small, audit aggressively, and don’t let FOMO around AI Overviews drive your decision.
Has OpenAI’s increasing independence from Microsoft and, by extension, Bing, become an overly dependent relationship with Google?
Our study comparing shopping query fan-outs (QFOs) in ChatGPT from both Google and Bing carousels appears to have provided at least a partial answer to that question. Let’s take a look at how this study was conceived and what we found.
Brief shopping fan-out background and technical explainer
In November 2025, a few researchers in the AI space, myself included, detected a mysterious field in ChatGPT’s source code: id_to_token_map. But what that field revealed when decoded was even more intriguing.
The field is base64-encoded, and when we decoded it, it revealed what looked to be Google Shopping parameters, such as productid and offerid, along with language/locale parameters. Even more interesting? It also revealed the query used to look up that particular product.
To categorically prove this was indeed a Google Shopping link, we would have to be able to reconstruct the shopping URL solely from the extracted parameters.
Let’s look at an example of what this looks like using the ChatGPT product carousel for the prompt “best smartphones under $500.”
If we decode the relevant field, we can recreate the Google Shopping link from the extracted parameters.
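To make that decoding step concrete, here is a minimal Python sketch of the general approach. The field name comes from ChatGPT’s source code, but the encoded value and the URL template below are simplified, hypothetical stand-ins rather than the exact production format:

```python
import base64
import json
from urllib.parse import urlencode

# Hypothetical stand-in: in the real page source, id_to_token_map holds
# base64-encoded values containing Google Shopping-style parameters.
encoded_token = base64.b64encode(json.dumps({
    "productid": "1234567890",
    "offerid": "9876543210",
    "hl": "en",
    "gl": "us",
    "query": "best smartphones under $500",
}).encode()).decode()

# Decode the token back into its parameters.
params = json.loads(base64.b64decode(encoded_token))

# Rebuild a Google Shopping URL from the extracted parameters
# (URL structure simplified for illustration).
shopping_url = (
    "https://www.google.com/shopping/product/" + params["productid"]
    + "?" + urlencode({"hl": params["hl"], "gl": params["gl"], "q": params["query"]})
)
print(shopping_url)
```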
The big question was: Would this link correspond to the exact product in the ChatGPT product carousel? So we tried it:
It turns out that, in fact, yes it does!
But this decoding technique alone doesn’t answer any of these important questions:
Is this retrieval process uniform across diverse product categories?
Does ChatGPT select from a certain number of Google product positions?
Does ChatGPT favor higher Google Shopping product positions?
How common is this process at scale?
Was this just a fluke or, given a large enough dataset, could we match these products with any online retailer or even Bing Shopping results?
Using Peec AI data, the following study aimed to robustly prove once and for all that ChatGPT does indeed mainly source from Google Shopping.
To do this, we analyzed more than 40,000 carousel products and 200,000 organic products from each of Google and Bing. By comparing the similarity of the products, we got a very clear picture of what was really happening behind the scenes. Let’s dig into our findings.
Are shopping query fan-outs really that different from normal search query fan-outs?
To answer whether shopping query fan-outs are different from normal search query fan-outs, we analyzed 1.1M shopping query fan-outs from Peec AI data and compared them to the normal search query fan-outs for the same user prompt. We found that they are almost always different:
Shopping QFO unique to user prompt: 99.70%
Shopping QFO unique to normal search query fan-out: 98.31%
To dive deeper, we explored the average word counts of both of these query fan-out types by calendar week.
The chart below clearly shows that normal fan-outs are significantly longer — 12 vs. seven words. That makes sense since search query fan-outs are used to retrieve contextual information. This means they need to be long enough to retrieve web results that are specific to the user prompt. Vector search (or comparing embeddings) works best with more context.
Shopping fan-outs, on the other hand, typically target a specific shopping results page and therefore do not need to be as long. It appears the main goal is to retrieve products based on the shopping fan-out. Rather than compare chunks of text, the data in this study supports the hypothesis that ChatGPT relies heavily on Google organic shopping results to populate its carousel.
Further evidence of the distinct nature of the shopping fan-outs surfaces when we look at how many are used per prompt. On average, 2.4 search fan-outs are used per prompt vs. just 1.16 shopping fan-outs. For reasons similar to the above, retrieving more contextual information often requires more search fan-outs than simply retrieving products. To populate an eight-product carousel in ChatGPT, it seems that, for the most part, one page of Google Shopping results is enough.
How similar are ChatGPT Carousel products to Google Shopping products?
To answer this question in the fairest possible way, we extracted around 5,000 ChatGPT carousels comprising 43,000 products from the Peec AI dataset. Prompts were chosen to be as diverse as possible (see Methodology for the creation process).
We then extracted the organic shopping pages and retrieved the top 40 organic products for both Google and Bing shopping results. Paid ads and sponsored products were excluded from the analysis.
We used a three-step matching algorithm (see Methodology for exact details) to attain a similarity score between the ChatGPT product title and the title found in organic shopping results. This is because not only is ChatGPT probabilistic, but so is, to a certain extent, Google Shopping. Product titles can be rewritten with or without certain product features and results are very sensitive to the exact proxy location where the results are retrieved.
We counted a product as matching if it reached a threshold of 0.8 or above, effectively, if it was the same brand and product name and exhibited a very high degree of similarity.
The results are summarized in the chart below.
Impressively, across 43,000 highly diverse ChatGPT carousel products, 45.8% were found to have an exact title match in the corresponding Google top 40 organic shopping products for that exact shopping fan-out.
For Bing, this exact match rate was just 0.48%.
If we simply look at the percentage of strong product matches across all eight ChatGPT carousel positions, over 83% were found in the Google top 40 products, but that number drops to just under 11% for products found on Bing. This is very strong evidence that ChatGPT sources its carousel products from organic Google Shopping results.
We also see a very high number of weak matches in Bing, at over 62%. This implies that the top 40 returned products for each shopping fan-out differ significantly across Google and Bing. That makes sense, as there are many thousands of possible combinations of brand and product that can be surfaced in shopping results.
Even if Bing found around 11% of ChatGPT carousel products, how many of those products were only found by Bing? Across the 43,000 carousel products, Bing found only 70 that were not found in Google Shopping, constituting just 0.16%. This means that in almost every case where there was a match in Bing, there was also a match in Google.
It seems unlikely, then, that ChatGPT is also sourcing products from Bing Shopping in the vast majority of cases.
How does the ChatGPT carousel position affect the match rate?
Here we explore the mean and median Google Shopping position for each ChatGPT carousel position:
For example, for the first carousel position we can see that the average Google Shopping position is around five. Note that we see a sloping trendline for the carousel positions that correspond to higher Google Shopping positions. This implies that ChatGPT sources top carousel products from higher Google Shopping positions.
Plotted another way, we can visualize the cumulative number of strong matches across organic Google Shopping positions. This chart allows us to see that 60% of the strong product matches are found in the top 10 Google shopping results alone.
Comparing the top 20 vs. positions 21-40, ChatGPT’s favoritism for higher positions becomes clear, with an overwhelming majority of matches (almost 84%) coming from the top 20:
Finally, we explored whether the prompt being branded vs. non-branded made a difference to the product matching results.
The results show a similar high level of product matching for both branded and non-branded prompts, with only slightly higher match rates for non-branded:
Summary of findings
This study analyzed over 43,000 ChatGPT carousel products across 10 industry verticals and compared them against 200,000+ organic shopping results from both Google and Bing. The findings painted a clear picture.
ChatGPT sources its carousel products from Google Shopping, not Bing
Over 83% of ChatGPT carousel products were found as strong matches in Google’s top 40 organic shopping results. For Bing, that figure was just 11%, and of those, only 70 products across the entire dataset (0.16%) were found exclusively in Bing. In almost every case where Bing returned a match, Google had already returned the same product.
Product retrieval and contextual retrieval are separate processes
The data strongly supports this. Shopping query fan-outs are distinct from normal search fan-outs 98.3% of the time. They are significantly shorter (seven vs. 12 words), and ChatGPT uses far fewer of them per prompt (1.16 vs. 2.4). This makes sense; populating a product carousel is a fundamentally different task from gathering contextual information to construct a written answer. One is about retrieving structured product listings from a shopping index, while the other is meant to retrieve web pages rich enough in context for vector search and re-ranking to work effectively.
ChatGPT favors higher Google Shopping positions
The data shows a clear positional bias, with 60% of strong matches coming from the top 10 Google Shopping results and nearly 84% from the top 20. ChatGPT carousel position correlates with Google Shopping rank, meaning products that rank higher in Google Shopping are more likely to appear earlier in the ChatGPT carousel.
This points to systemic architectural behavior
Since these patterns hold across branded and non-branded prompts, and across all 10 verticals tested, this reinforces that this is a systematic architectural behavior rather than a category-specific or query-specific artifact.
What this means
For brands and retailers, the implication is straightforward: Your Google Shopping ranking strongly influences whether your products make it into ChatGPT’s carousel. These findings indicate that the selection set of carousel products in many cases is effectively the top 40 organic Google Shopping positions for the corresponding shopping fan-out query.
But while product ranking in Google Shopping plays a role, it doesn’t tell the full story. It is likely that other factors, such as overall product mentions and sentiment in the context sources retrieved, also factor into the final ChatGPT carousel selection and ranking.
Understanding the full picture in terms of how your products are perceived across relevant sources, as well as how you show up on Google Shopping, could be the key to understanding ChatGPT product carousels.
For the AI research community, this study provides robust, large-scale evidence that ChatGPT’s product carousel operates as an independent retrieval pipeline for the selection set of products, separate from the contextual web search that powers the written portion of its responses. It is possible, and even likely, that for the final selection and ranking of products, ChatGPT uses contextual clues such as product sentiment from the sources retrieved by the normal search fan-outs.
As always, this represents a snapshot of current behavior. OpenAI could change its retrieval sources or methods at any time, but this behavior has been consistent in our findings for at least the last four months.
Methodology
Objective
Measure how much product overlap there is between ChatGPT Shopping (via product carousels) and Google Shopping organic results for the same queries, across 10 industry verticals. This was contrasted to Bing shopping results as a control using an identical pipeline.
Specifically, the study evaluated:
How often ChatGPT recommends products that also appear in Google Shopping results
Where those overlapping products rank in each system
Prompt set creation
Prompts were created with the purpose of triggering ChatGPT carousels. To maximize diversity, a mixture of branded and non-branded prompts was used, as well as prompts that explicitly included a price and ones that did not.
Additionally, a diverse selection of verticals was chosen to make the findings more robust. These were: Apparel & Footwear, Baby & Kids, Beauty & Personal Care, Electronics, Home Improvement, Home & Kitchen, Office Supplies, Pet Supplies, Sports & Outdoors, and Toys & Games.
Product matching
The product matching algorithm compared ChatGPT product titles against the top 40 Google Shopping titles using a three-stage cascade approach.
The goal was to find the best match between a ChatGPT product title and the corresponding Google Shopping titles. A match was determined using a cascade of three stages:
Stage 1: Exact match
Method: Case-insensitive string equality after removing whitespace
Score: 1.0
Label: exact
Stage 2: Near-exact match
Method: Uses the Python SequenceMatcher ratio on lowercased strings
Trigger: Activated if the best ratio across all candidates is 0.95 or higher
Purpose: To catch minor, trivial differences like spacing, punctuation, or different types of dashes
Score: The SequenceMatcher ratio (rounded to three decimal places)
Label: near-exact
Stage 3: Hybrid match
Method: A weighted average combining character-level similarity and token (word) overlap
Components and Weights:
SequenceMatcher Ratio (Character Similarity): 40% weight.
Token Overlap (Word Inclusion): 60% weight (fraction of tokens in the shorter title found in the longer one)
Selection: The candidate with the highest hybrid score is chosen, regardless of a specific threshold
Score: Calculated as (0.4 * SequenceMatcher Ratio) + (0.6 * Token Overlap) (rounded to 3 decimal places)
Label: hybrid
This approach was set to be fairly conservative, and 0.8 was determined as a reasonable threshold for a product match as this often corresponds very closely to the same brand and product.
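For readers who want to reproduce the matching logic, here is a simplified Python sketch of the cascade, assuming pre-cleaned title strings; the production pipeline may normalize titles differently:

```python
from difflib import SequenceMatcher

def match_score(chatgpt_title, candidate_titles):
    """Return (score, label, best_candidate) for the best match among candidates."""
    a = chatgpt_title.lower().strip()

    # Stage 1: exact match (case-insensitive, whitespace stripped).
    for cand in candidate_titles:
        if a == cand.lower().strip():
            return 1.0, "exact", cand

    # Stage 2: near-exact match via the SequenceMatcher ratio (threshold 0.95).
    ratios = [(SequenceMatcher(None, a, c.lower()).ratio(), c) for c in candidate_titles]
    best_ratio, best_cand = max(ratios)
    if best_ratio >= 0.95:
        return round(best_ratio, 3), "near-exact", best_cand

    # Stage 3: hybrid score = 40% character similarity + 60% token overlap,
    # where token overlap is the share of the shorter title's words found in the longer one.
    best_score, best_cand = 0.0, None
    for cand in candidate_titles:
        char_sim = SequenceMatcher(None, a, cand.lower()).ratio()
        tokens_a, tokens_b = set(a.split()), set(cand.lower().split())
        shorter, longer = sorted((tokens_a, tokens_b), key=len)
        overlap = len(shorter & longer) / len(shorter) if shorter else 0.0
        score = 0.4 * char_sim + 0.6 * overlap
        if score > best_score:
            best_score, best_cand = score, cand
    return round(best_score, 3), "hybrid", best_cand

# A product counts as matched at a score of 0.8 or above.
print(match_score(
    "LEGO Japanese Red Maple Bonsai Tree",
    ["Japanese Red Maple Bonsai Tree LEGO Botanicals", "Mattel 80th Anniversary Puzzle"],
))
```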
Real examples of matching thresholds from the data:
| Match threshold | Description | ChatGPT product | Google Shopping | Differences observed |
| --- | --- | --- | --- | --- |
| 1.0 | Exact string match, no differences | Hot Wheels RC 1:64 Mustang GTD | Hot Wheels RC 1:64 Mustang GTD | None |
| 0.95 | Near exact; minor differences such as hyphens or punctuation only | Learning Resources Snap-n-Learn Matching Dinos | Learning Resources Snap‑n‑Learn Matching Dinos | The hyphen character is different in Unicode |
| 0.9 | Same brand and product; additional non-crucial words allowed | Block Tech 250 Piece Set | Block Tech 250 Piece Building Blocks Set | “Building Blocks” added, but product and brand are the same |
| 0.85 | Same product and brand; potentially slightly different word order and additional non-crucial words | LEGO Japanese Red Maple Bonsai Tree | Japanese Red Maple Bonsai Tree LEGO Botanicals | Different word order and one additional word, “Botanicals”; same product and brand |
| 0.8 (good match threshold) | Same brand and product; possibly additional descriptors | Cards Game Against FRIENDS – Limited Edition | Cards Game Against FRIENDS – Limited Edition – Party Card Games For Adults | Same brand and product with additional descriptors that don’t affect the match |
| 0.75 | Same brand and product line; very minor product differences such as size or dimensions | My Sweet Love 14-inch My Cuddly Baby Doll | My Sweet Love 8-Inch MinWeBaby Doll | Same brand and product line but a different size |
| 0.7 | Same brand; often a slightly different product within the same category | Adventure Force Ram Truck RC Car | Adventure Force McLaren 765LT RC Car | Same brand and product category but a different individual product |
| 0.65 | Same brand; often a slightly different product within the same category | Mattel 300‑Piece Puzzle | Mattel 80th Anniversary Puzzle | Same brand and product category but a different individual product |
| 0.6 | Typically the same product category, but often a different brand and product line | Tell Me Without Telling Me Party Card Game | Elimino! Card Game | Different brand and product line; same overall category of “card game” |
| 0.55 | Similar product category, but usually a different brand and/or product | Furby Interactive Plush Toy Interactive Digital Pet Toy | Interactive Digital Pet Toy | Different brand; similar product category but a different specific product |
For 20 years, the web has run on a simple trade: publish content that meets a person’s needs, rank in search, earn traffic, then monetize that traffic through products, services, affiliate referrals, or ads.
Zero-click answers and AI search are rewriting that relationship. The new question is whether AI will cite you as a source — and whether that visibility can turn into revenue.
To understand who gets included and who gets routed around, I ran over 200 AI visibility audits across 10 industries.
The pattern was consistent: Most sites are easy to parse, but hard to justify citing. And the industries that rely on discovery traffic the most are often the ones making themselves the hardest to access.
How the audit was conducted
I ran 201 audits using the same rubric and captured an overall AI visibility score, plus four subscores:
Freshness.
Structure.
Authority and evidence.
Extractability.
The dataset included 201 audits across 10 industries:
Coupons.
Affiliate reviews.
Travel booking.
Local directories.
Personal finance comparison.
Health information.
Legal directories.
Online courses.
Job boards.
Recipes.
Note that there was a page type skew — the sample is homepage-heavy (131 homepages, 13 articles, with the remainder a mix of pages). That matters because homepages tend to be marketing-heavy and evidence-light.
I also tracked access failures because “error” results are part of the story. 38 of the 201 audits (18.9%) returned an error, meaning the agent was likely blocked or couldn’t reliably access the content.
An additional eight audits were technically processed but scored 0 due to missing subscores, consistent with partial extraction or app-style rendering that yields little accessible content.
When I summarized score distributions, I focused on the successfully processed audits (163 sites), so “cannot access” didn’t get mixed with “low quality.” I treated error rate by industry as its own signal because it indicated whether AI systems could reliably use a site as a source.
The table below shows how the industries in the dataset performed in the audits.
| Rank | Industry | Error rate | Median overall | Median authority | Median extractability | At risk |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Travel booking and trip planning | 33.3% | 45.5 | 31.0 | 52.0 | High |
| 2 | Job boards and career marketplaces | 40.0% | 64.0 | 44.0 | 74.0 | High |
| 3 | Legal directories and lead gen | 35.0% | 63.0 | 44.0 | 74.0 | High |
| 4 | Coupons and deals | 20.0% | 62.0 | 36.0 | 74.0 | High |
| 5 | Local directories and lead gen | 5.3% | 64.0 | 38.0 | 74.0 | Medium |
| 6 | Online courses and learning marketplaces | 30.0% | 67.5 | 46.5 | 80.0 | Medium |
| 7 | Health info and symptom lookups | 15.0% | 69.0 | 52.0 | 80.0 | Low |
| 8 | Personal finance comparison | 5.0% | 67.0 | 52.0 | 78.0 | Low |
| 9 | Affiliate product reviews | 0.0% | 69.5 | 54.0 | 74.0 | Low |
| 10 | Recipes and cooking content | 5.0% | 75.0 | 55.5 | 81.5 | Low |
What the audits actually revealed
The findings show that most websites aren’t built to be cited consistently. Here are the three numbers that matter.
Access is a bigger problem than most teams think
38 of 201 sites (18.9%) returned an error. In some categories, it was far worse: job boards (40%), legal directories (35%), travel booking (33%), and course marketplaces (30%). In those spaces, a third to nearly half of the market is effectively AI-dark by default.
Job boards and legal directories showed the highest rates of AI blocking of any industry.
Most sites are stuck in the middle
Across the 163 processed audits:
Average overall score: 61.6
Median overall score: 66
70.6% landed in “Inconsistent visibility” (60 to 79)
Only 4.9% reached “Strong foundation” (80 to 94)
0% hit “Exceptional” (95 plus)
Translation: Most brands aren’t built to be reliably used and cited.
The gap is proof, not formatting
Median subscores across processed audits:
Structure: 92
Extractability: 74
Authority and evidence: 48
Freshness: 45
Most pages are easy to parse. Far fewer are easy to justify citing. Two repeated findings explain why:
“No last modified header detected” showed up 114 times (machine-readable freshness is missing; a quick check is sketched after this list).
Citations or outbound references appeared only 13 times (machine-readable proof is rare).
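That first gap is easy to verify on your own pages. Here is a minimal check, assuming Python’s requests library and a placeholder URL:

```python
import requests

# Placeholder URL: substitute a page you want to audit.
url = "https://www.example.com/blog/sample-article"

resp = requests.head(url, allow_redirects=True, timeout=10)

# A machine-readable freshness signal shows up as a Last-Modified response header.
last_modified = resp.headers.get("Last-Modified")
if last_modified:
    print(f"Last-Modified header found: {last_modified}")
else:
    print("No Last-Modified header detected; freshness is not machine-readable.")
```

Some servers only send the header for GET requests or static assets, so treat a missing header on a HEAD request as a prompt to investigate rather than definitive proof.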
That should change how you think about risk. More than losing traffic, the bigger threat is being removed from the consideration set.
Industries disappear for three reasons. You can think of them as three failure modes.
1. Access failure: AI can’t reliably reach your content
If agents can’t consistently access your content, the model has less to work with and will either route around you or fill in the gaps from other sources.
What access failure looks like:
Bot protections, rate limiting, or web application firewall (WAF) rules that treat agents as hostile.
App-style rendering where meaningful content never arrives in initial HTML.
Content gated behind prompts, popups, or scripts that don’t resolve cleanly.
Why this causes vanishing:
If AI systems can’t reliably extract, they can’t reliably cite.
The user’s intent still gets satisfied — it just gets satisfied by someone else’s crawlable content or a native AI answer.
2. Trust failure: AI can read you, but can’t justify citing you
Trust failure is quieter. The agent can access your page, parse it, and summarize it, but the page doesn’t provide enough proof for the model to confidently cite it as a source.
This was the dominant pattern in the completed audits. In plain language: Your content is readable, but it isn’t defensible.
The clearest proof of this showed up when I compared page types:
Median authority score on article pages: 76
Median authority score on homepages: 45
A polished homepage isn’t proof. If you want to be cited for anything beyond your brand name, a typical homepage alone isn’t enough. Evidence usually lives in articles, explainers, data pages, policy pages, and methodology pages.
3. Utility failure: Even if you’re visible, the click may not happen
Utility failure is the most painful. You might get included. You might get cited. But if your value is only information, AI can compress it into an answer, and the user never needs to visit your site.
Visibility determines whether you appear in the conversation. Utility determines whether appearing turns into revenue.
A practical way to think about it:
If your page answers the question, AI can replace the page.
If your product or service completes the job, AI still needs you.
Access failure gets you excluded. Trust failure gets you skipped. Utility failure gets you summarized.
Why certain industries show up as vulnerable
Once access, trust, and utility get viewed together, the vulnerable industries stop looking random.
The categories that repeatedly showed high risk in my dataset share three traits:
Access is inconsistent (blocking and extraction problems).
The content is easy to compress into a single answer.
The business has no next step value once the answer is delivered.
That’s why travel booking, job boards, legal directories, and coupon sites clustered as the most exposed categories in this dataset.
The bigger takeaway? Your website can be built in a way that invites exclusion, even if your business is healthy.
Some industries will feel this harder than others. A site funded primarily by high-volume informational traffic is more exposed to zero-click behavior. But even in those categories, the path forward is to stop selling information alone.
The big mistake right now is treating AI search like a ranking update, when it’s an economic update. The audits made two things obvious:
Many industries are making themselves hard to access, which guarantees the model will route around them.
Even when the model can read a page, it often can’t justify citing it because proof is missing.
The threat is invisibility. You don’t win by hiding. You win by becoming cite-worthy and by building something the user still needs after the answer is delivered.
Trust plus utility is the new moat. Anything else is just playing from yesterday’s playbook.
How content is structured in an article or blog post might not seem controversial. But, apparently, Google doesn’t want you to create bite-sized chunks of content simply to please LLMs. Called “chunking,” this technique helps get your content noticed by AI models and reflects how readers actually engage with online content.
Chunking may make content more retrievable or citable in AI search, but ultimately, it improves the flow of content and makes concepts easier for people to understand. Let’s talk about how chunking works and when to use it.
What is chunking?
Chunking is the practice of organizing text into distinct, self-contained units of meaning. When content is chunked, information is segmented so each paragraph focuses on a single idea and contains everything the reader needs to understand the basics of that idea simply and quickly.
Someone should be able to read a single paragraph and grasp the concept without having to hunt for context in the surrounding words.
Does chunking help AI or people?
The recent criticism from Google suggests that the practice of chunking over-optimizes content, specifically so that it will show up in AI answers. The idea that people are writing specifically for AI assumes that what’s good for AI is somehow bad for human readers.
But really, chunking helps communicate ideas for both readers and search retrieval systems. When content is chunked, it doesn’t dumb down or artificially fragment ideas. It organizes information to match how people actually read online content, making articles easier to scan.
Chunking also helps AI systems because they operate at the passage level rather than the page level. For example, when a system needs to identify an answer for “how to measure keyword cannibalization,” a heading that says exactly that, followed by a focused paragraph, would create a clear match.
In contrast, when an answer to that same question is buried in a dense paragraph covering three other topics, that information gets diluted. The AI might see relevant keywords, but if the text meanders between ideas, it will have a lower confidence that the passage definitively answers the query.
Clear structure creates clear meaning.
Chunking helps both readers to scan content and AI systems to accurately identify what your content says.
When writing from scratch, integrate chunking into your process from the start.
However, it may not be worth your time to edit existing content solely to chunk it. You may find that some articles already follow chunking principles, even if they weren’t explicitly planned to do so. Others may be out of date or poorly structured, requiring more substantial rewrites.
If you want to chunk existing content, prioritize pieces that:
Receive significant traffic but have high bounce rates or low engagement.
Rank well, but aren’t being cited.
Cover complex topics where readers need to find specific information quickly.
Serve bottom-of-funnel audiences making decisions based on specific details.
Skip chunking edits for content that:
Already performs well and receives AI citations.
Is scheduled for comprehensive rewrites in the near future.
Covers topics where narrative flow matters more than information retrieval.
If you have content that is impactful because it creates an emotional arc, chunking or breaking it down into discrete chunks could hurt the piece. If your content succeeds by carrying readers through a journey rather than letting them jump to an answer, preserve that flow.
For example:
Thought leadership that builds to a provocative conclusion.
Opinion essays that require context before the thesis lands.
A chunk in a piece of content should be long enough to explain one thought. This often results in shorter paragraphs — the defining feature is a singular focus, not the word count.
These focused paragraphs sit under clear headings. The heading tells the reader what to expect, and the chunks beneath it deliver on that expectation.
Build chunking into your content outline
To include chunking in your writing, the most effective approach is to integrate it from the start.
Define for yourself or other writers which ideas or concepts in a given topic constitute a chunk, focusing on paragraphs and heading descriptions.
If using content briefs, make it clear in your outlines that each H2 or H3 should cover one complete concept and the content under that heading should fully explain the concept.
How to edit existing content into chunks
Focus your efforts on high-value pages first when editing existing content. Prioritize pages that receive traffic but struggle with engagement or pages that rank well but aren’t being cited.
Evaluate your heading structure: Do your H2s and H3s clearly state the information each section contains? If not, rework the overall structure of the article first to include the main points of the topic. Add paragraph chunks for any new subheadings.
Look for paragraphs that contain multiple ideas and break them apart: Each paragraph should stand on its own as a complete thought without depending on other ideas.
Edit the article to delete any extra information: Make the paragraphs concise. Focus only on relevant information for each chunk.
To chunk or not to chunk?
Don’t let Google convince you that chunking is a hack. Chunking makes content work better for everyone and everything — from readers scanning for specific information to AI systems matching queries to answers.
You’ve probably heard developers talk about the DOM. Maybe you’ve even inspected it in DevTools or seen it referenced in Google Search Console.
But what, exactly, is it? And why should SEOs care? Let’s take a look at what it is, why it’s important, and how to best optimize it.
What is the DOM?
The Document Object Model (DOM) is a browser’s live, in-memory representation of your webpage. It acts as the interface that allows programs like JavaScript to interact with your content.
The DOM is organized as a hierarchical tree, similar to a family tree:
The document: This is the root of the tree.
Elements: HTML tags like <body>, <p>, and <a> become branches (or “nodes”).
Relationships: Elements have parents, children, and siblings.
This hierarchy is critical because it allows the browser (and search engines) to understand the relationship between different parts of your content. For example, proper hierarchical order lets your browser understand that a specific paragraph belongs to a specific heading.
How to inspect the DOM
The DOM itself is actually a JavaScript object structure stored in memory, but browsers show it to you as markup that looks very much like HTML.
You can see this HTML representation of the DOM by right-clicking on a page and selecting Inspect > Elements. This is called the Elements panel. I’ve outlined it in the red box below:
In the Elements panel inside DevTools, you can:
Expand and collapse nodes to explore the structure.
Search for specific elements using Ctrl+F on a PC or Cmd+F on Mac within the Elements panel.
See which elements have been added or modified by JavaScript (they often flash briefly when changed).
Note that DevTools doesn’t necessarily show you what Googlebot sees. I’ll circle back to what that means later in this article.
To understand why the DOM often looks different from your HTML file, you first need to understand how the browser creates it. That begins with your browser building the DOM tree.
Building the DOM tree
When your browser requests a page, the server sends back an HTML file. The browser reads this response line by line and translates it into “tokens” (tags like <html>, <body>, <div>).
These tokens are then converted into distinct “nodes,” which serve as the building blocks of the page. The browser links these nodes together in a parent-child hierarchy to form the tree structure.
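As a rough illustration of that tokenize-and-nest step, here is a small sketch using Python’s built-in HTML parser rather than a real browser engine; it simply prints each element indented by its depth, approximating the tree shape:

```python
from html.parser import HTMLParser

class TreePrinter(HTMLParser):
    """Print each start tag indented by its depth, approximating the DOM tree."""
    def __init__(self):
        super().__init__()
        self.depth = 0

    def handle_starttag(self, tag, attrs):
        print("  " * self.depth + tag)  # each tag becomes a node under its parent
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

html = "<html><body><h1>Title</h1><p>Text with <a href='/page'>a link</a>.</p></body></html>"
TreePrinter().feed(html)
# Prints:
# html
#   body
#     h1
#     p
#       a
```

A browser does far more than this (error recovery, implied tags, scripting hooks), but the parent-child nesting is the same basic idea.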
You can visualize the process like this:
It’s important to know that the browser simultaneously creates a tree-like structure for CSS, known as the CSS Object Model (CSSOM), which allows JavaScript to read and modify CSS dynamically. However, for SEO, the CSSOM matters far less than the DOM.
JavaScript execution
JavaScript often executes while the tree is still being built. If the browser encounters a <script> tag (without defer or async attributes, which allow parsing to continue while the script loads), it pauses construction, runs the script, and then finishes building the tree.
During this execution, scripts can modify the DOM by injecting new content, removing nodes, or changing links. This is why the HTML you see in View Source often looks different from what you see in the Elements panel.
Here’s an example of what I mean. Each time I click the button below, it adds a new paragraph element to the DOM, updating what the user sees.
Your HTML is the starting point, a blueprint, if you will, but the DOM is what the browser builds from that blueprint.
Once the DOM is created, it can change dynamically without ever touching the underlying HTML file.
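One practical way to see that gap is to compare the raw HTML response with the rendered DOM. Here is a hedged sketch, assuming the requests and Playwright packages are installed and using a placeholder URL:

```python
import requests
from playwright.sync_api import sync_playwright

url = "https://www.example.com/"  # placeholder: use a JavaScript-heavy page you control

# 1. The raw HTML file the server returns (what View Source shows).
raw_html = requests.get(url, timeout=10).text

# 2. The rendered DOM after a headless browser has executed JavaScript.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(url, wait_until="networkidle")
    rendered_dom = page.content()
    browser.close()

print(f"Raw HTML length:     {len(raw_html):,} characters")
print(f"Rendered DOM length: {len(rendered_dom):,} characters")
# A large gap usually means JavaScript is injecting content after the initial response.
```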
Modern search engines, such as Google, render pages using a headless browser (Chromium). This means that they evaluate the DOM rather than just the HTML response.
When Googlebot crawls a page, it first parses the HTML, then uses the Web Rendering Service to execute JavaScript and take a DOM snapshot for indexing.
The process looks like this:
However, there are important limitations to understand and keep in mind for your website:
Googlebot doesn’t interact like a human. While it builds the DOM, it doesn’t click, type, or trigger hover events, so content that appears only after user interaction may not be seen.
Other crawlers may not render JavaScript at all. Unlike Google, some search engines and AI crawlers only process the initial HTML response, making JavaScript-dependent content invisible.
Looking ahead to a world that’s becoming more AI-dependent, AI agents will increasingly need to interact with websites to complete tasks for users, not just crawl for indexing.
These agents will need to navigate your DOM, click elements, fill forms, and extract information to complete their tasks, making a well-structured, accessible DOM even more critical than ever.
Verifying what Google actually sees
The URL inspection tool in Google Search Console shows how Google renders your page’s DOM, also known in SEO terms as the “rendered HTML,” and highlights any issues Googlebot might have encountered.
This tool is crucial because it reveals the version of the page Google indexes, not just what your browser renders. If Google can’t see it, it can’t index it, which could impact your SEO efforts.
In GSC, you can access this by clicking URL inspection, entering a URL, and selecting View Crawled Page.
The panel below, marked in red, displays Googlebot’s version of the rendered HTML.
If you don’t have access to the property, you can also use Google’s Rich Results Test, which lets you do the same thing for any webpage.
The shadow DOM is a web standard that allows developers to encapsulate parts of the DOM. Think of it as a separate, isolated DOM tree attached to an element, hidden from the main DOM.
The shadow tree starts with a shadow root, and elements attach to it the same way they do in the light (normal) DOM. It looks like this:
Why does this exist? It’s primarily used to keep styles, scripts, and markup self-contained. Styles defined here cannot bleed out to the rest of the page, and vice versa. For example, a chat widget or feedback form might use shadow DOM to ensure its appearance isn’t affected by the host site’s styles.
I’ve added a shadow DOM to our sample page below to show what it looks like in practice. There’s a new div in the HTML file, and JavaScript then adds a div with text inside it.
When rendering pages, Googlebot flattens both shadow DOM and light DOM and treats shadow DOM the same as other DOM content once rendered.
As you can see below, I put this page’s URL into Google’s Rich Results Test to view the rendered HTML, and you can see the paragraph text is visible.
Technical best practices for DOM optimization
Follow these practices to ensure search engines can crawl, render, and index your content effectively.
Load important content in the DOM by default
Your most important content must be in the DOM and appear without user interaction. This is imperative for proper indexing. Remember, Googlebot renders the initial state of your page but doesn’t click, type, or hover on elements.
Content that is added to the DOM only after these interactions may not be visible to crawlers. One caveat is that accordions and tabs are fine as long as the content already exists in the DOM.
As you can see in the screenshot below, the paragraph text is visible in the Elements panel even when the accordion tab has not been opened or clicked.
Use proper <a> tags for links
As we all know, links are fundamental to SEO. Search engines look for standard <a> tags with href attributes to discover new URLs. To make sure they discover your links, the DOM needs to contain real links; otherwise, you risk crawl dead ends.
You should also avoid using JavaScript click handlers (e.g., <button onclick="...">) for navigation, as crawlers generally won’t execute them.
Like this: a real <a href="/example-page/"> link with descriptive anchor text, rather than a button or div with an onclick handler.
Use semantic HTML structure
Use heading tags (<h1>, <h2>, etc.) in logical hierarchy and wrap content in semantic elements like <article>, <section>, and <nav> that correctly describe the site’s content. Search engines use this structure to understand pages.
A common issue with page builders is making DOMs full of nested <div> elements without semantic meaning. This does little to help search engines understand your page and sets up problems for you or future devs trying to maintain the code on your site.
Be sure to maintain the same semantic standards you’d follow in static HTML.
Keep the DOM lean, ideally under ~1,500 nodes, and avoid excessive nesting. Remove unnecessary wrapper elements to reduce style recalculation, layout, and paint costs.
Here’s an example from web.dev of excessive nesting and an unnecessarily deep DOM:
A workable understanding of the DOM can help you not only diagnose SEO issues, but also effectively communicate with developers and others on your team.
We know that the DOM impacts Core Web Vitals, crawlability, and indexing. As AI agents increasingly interact with websites, DOM optimization becomes more critical. It’s important to master these fundamentals now to stay ahead of evolving search and AI technologies.
There’s a growing problem in SEO and content marketing that doesn’t get talked about enough: everything is starting to sound the same. The same phrasing and structure, the same bland tone, the same safe language, the same robotic rhythm.
The web is filling up with perfectly optimized content that no one actually enjoys reading. And that’s the real risk. Not that AI will replace SEOs, Google will penalize AI content, or automation will destroy search.
The real danger is that brands lose their voice, their personality, and their identity in the name of efficiency.
AI should make your SEO better, not blander. Faster, not flatter. Scalable, not soulless.
Here’s how to use AI without turning your brand into beige wallpaper — and without losing what makes it worth ranking in the first place.
AI works best when it supports strategy
AI doesn’t replace a marketing plan, positioning model, or clear brand direction. It supports them. In the same way that tools like Google Analytics, Semrush, and Screaming Frog help you understand what’s happening, AI helps you work more efficiently and supports thinking.
If your SEO strategy is simply, “We use AI,” you don’t have a strategy. You have a software subscription. Without a clear understanding of your audience, what they care about, the problems they’re trying to solve, how they speak, what tone they respond to, and what your brand stands for, AI will just produce generic content at scale.
AI is genuinely good at certain parts of SEO, particularly areas that rely on scale, structure, and data processing. These include:
Analyzing large data sets.
Grouping keywords by intent.
Spotting patterns in SERPs.
Identifying content gaps.
Mapping topics.
Supporting internal linking.
Handling repetitive technical tasks.
This is where AI earns its place. It handles repetitive manual work, speeds up research, reduces basic human error, and helps teams operate more consistently at scale. None of that is threatening. It’s simply practical.
Used properly, AI removes friction from SEO work and gives teams more space to focus on strategy and decision-making. The problems begin when people expect AI to execute SEO work it isn’t built for, treating it as a shortcut rather than a support system. When used this way, the output inevitably falls short of expectations.
AI struggles with the parts of marketing that build trust. Emotional intelligence, cultural awareness, tone, humor, empathy, and genuine understanding are difficult for it to replicate. It doesn’t truly grasp brand positioning, long-term thinking, or commercial judgment, and it can’t make ethical decisions in any meaningful way.
It can copy patterns, but it doesn’t understand meaning. It can recreate tone, but it doesn’t feel it. It can build structure, but it doesn’t create identity.
That’s why so much AI content feels fine but ultimately forgettable. It does the job, ticks the boxes, answers the question, follows SEO rules, and hits the word count. But it doesn’t create a connection that turns traffic into trust, and trust into customers.
The biggest risk with AI in SEO isn’t penalties or algorithm changes. It’s gradual brand dilution. Over time, content becomes more neutral, more generic, and less distinctive.
Visibility may stay the same, but identity weakens. Traffic grows, but loyalty doesn’t. Performance looks healthy, but trust doesn’t compound.
AI should handle structure, humans should handle soul
Effectively using AI in SEO requires role clarity. Let AI handle the structure and scale, but keep meaning firmly in human hands.
AI is well-suited to researching, analyzing, clustering, outlining, drafting frameworks, data processing, repetitive optimizing, and detecting patterns. These are process-driven tasks where automation adds real value.
However, everything that defines the brand and the relationship with the audience — voice, tone, storytelling, personality, trust building, emotional connection, commercial messaging, ethical judgment, and real audience understanding — should remain a human endeavor.
AI can help you build faster, but it shouldn’t decide what you’re building. It supports the process, but the design still belongs to you.
If you don’t define your brand voice, AI will default to something neutral and generic. That doesn’t happen because the technology is broken. It happens because you haven’t given it anything clear to work with.
Before using AI for content, clarify:
Who you’re speaking to.
How you speak.
The language you use and avoid.
The tone you adopt.
The personality you want to project.
The values you stand for.
The boundaries you won’t cross.
Many people assume better prompts can fix weak content. But prompts, no matter how detailed, don’t replace thinking, brand clarity, audience understanding, or positioning.
You can write the most detailed prompt in the world, but if your brand identity is fuzzy, the output will still be fuzzy. AI amplifies whatever you input, whether that’s clarity or chaos. There’s no middle ground.
Practical ways to use AI without losing your voice
Here’s what works in the real world and not just in tool demos.
Use AI for research: Let it gather data, insights, SERP patterns, questions, clusters, topics, and gaps. Then write the content yourself or heavily edit it.
Use AI to create frameworks: Outlines, structures, and content maps are perfect AI jobs.
Train AI on your tone: Feed it examples of your writing, content, emails, site copy, and brand language. But still treat outputs as drafts and not finals.
Human edit everything: Your job is to brand edit. Does this sound like us? Would we say this? Would our customers recognize this voice? Does this feel human?
Protect your commercial pages: Blogs are one thing, but core service pages, product pages, and brand pages should always be human-led. These pages define your business identity.
Use AI to scale consistency, not sameness: Consistency is brand clarity. Sameness is brand death.
Google doesn’t care whether content is AI-generated. It evaluates whether the content is useful, helpful, original, trustworthy, and valuable.
Low-quality human content gets punished. Low-quality AI content gets punished. High-quality content wins, regardless of who or what created it.
The myth that “AI content gets penalized” misses the point. What actually gets penalized is bad content, and AI simply makes it easier to produce bad content faster.
The brands that will lead SEO over the next few years won’t be the ones with the biggest AI tech stacks. They’ll be the ones that combine human strategy with AI efficiency, clear positioning with scalable systems, and strong brand voice with intelligent automation. They’ll use AI to move faster, but not to think for them.
Brands with clarity and identity will strengthen their position. Brands without them will simply become louder without standing out.
Every once in a while, a product launch doubles as a marketing masterclass. Recently, Selena Gomez’s Rare Beauty released a new fragrance, and it wasn’t just the scent that captured attention. It was the bottle. Designed with accessibility in mind, the easy-to-use packaging quickly became the story, sparking conversations and praise from accessibility advocates and consumers alike.
The takeaway for marketers is hard to miss. An inclusive design decision became the campaign itself, delivering more cultural impact than any ad spend could buy. And the lesson for marketers is equally clear: accessibility drives loyalty, enhances brand reputation, ensures compliance, and acts as a measurable growth driver.
Accessibility as a campaign strategy
Rare Beauty’s commitment to accessibility wasn’t a one-off. From packaging to pricing to its ongoing mental health advocacy, the brand has consistently embedded inclusivity into its DNA. That authenticity matters. Consumers can tell the difference between a stunt and a strategy, and they reward brands that lead with values.
And Rare Beauty isn’t alone. Across industries, leading brands are increasingly surfacing accessibility as a differentiator, not a footnote. Apple has consistently highlighted accessibility features as part of its core product storytelling, positioning them as innovation rather than accommodation. Microsoft has done the same by showcasing inclusive design in mainstream campaigns, including adaptive gaming products that reframed accessibility as a driver of creativity and connection. In fashion and retail, brands like Tommy Hilfiger and Unilever have brought adaptive design into the spotlight, integrating accessibility into product launches and brand identity rather than siloing it as a niche offering.
According to studies from Edelman and McKinsey, 73% of Gen Z choose to buy from brands they believe in, and 70% say they try to purchase products from companies they consider ethical. These aren’t fringe preferences; they’re mainstream expectations that can redefine how marketers approach building trust and growth with their audiences.
The $18 trillion market marketers overlook
More than 1.3 billion people globally live with a disability, and together with their friends and family, they control over $18 trillion in spending power, according to the Return on Disability Group. For marketers, this isn’t just about compliance. It’s about growth, reputation, and building genuine trust in one of the world’s largest and most passionate consumer groups. That passion translates to powerful advocacy.
In discussions with AudioEye’s A11iance Team, a group of individuals with disabilities who regularly share feedback on real-world accessibility experiences, one member stated, “If I find a website that works and works very well for me, I will always recommend it to friends and family because I want people to have the same experience that I have.”
As another A11iance Team member, Maxwell Ivey, put it, “The cheapest form of advertising is word of mouth, and people with disabilities can have some of the loudest voices when we find people willing to make the effort. Because it’s that sincere effort over time that really counts with us.”
When accessibility becomes part of the customer experience, it creates something money can’t buy: trust and loyalty that scale through advocacy. But the opposite is also true. In a survey of assistive technology users, 54% said they don’t feel ecommerce companies care about earning their business.
Most brands are still competing for the same oversaturated demographics while overlooking this opportunity hiding in plain sight. In doing so, they’re leaving loyalty, advocacy, and revenue on the table.
Here’s where many brands stumble: accessibility usually stops at the shelf. Marketers invest heavily in packaging, store displays and product design, while digital experiences, the first and often primary touchpoint for customers, lag behind.
As accessibility-led design continues to earn attention, loyalty and earned media, the gap between physical product innovation and digital experience has become harder to ignore.
AudioEye’s 2025 Digital Accessibility Index found an average of 297 accessibility issues per web page detectable by automation alone. Each one represents friction in the customer journey, a conversion lost, or a compliance risk under frameworks like the Americans with Disabilities Act (ADA) and the European Accessibility Act (EAA).
Just as no campaign would launch without a brand review or legal check, no digital touchpoint should go live without an accessibility review.
Four moves marketing leaders can make
Too often, accessibility is treated as a risk to manage instead of an advantage to leverage. The marketers who win will be the ones who flip that script. Here are four actions to start with.
1. Make accessibility your campaign hook
Don’t hide it, lead with it. Brands like Rare Beauty have proved that inclusive design is the story. Build campaigns where accessibility isn’t a footnote but the differentiator that captures attention and loyalty.
2. Bake it into your brand system
Accessibility shouldn’t sit off to the side. Make Web Content Accessibility Guidelines (WCAG) alignment part of your brand guidelines, right alongside typography, logos and tone of voice. When accessibility is codified, it becomes second nature across every campaign.
3. Use data as your proof point
Marketers are storytellers, and numbers seal the story. Track accessibility improvements such as fewer user-reported barriers, higher accessibility scores and fixes like improved alt text, color contrast or form usability. Connect those metrics to existing business outcomes like conversion, reach, and sentiment to show how accessibility drives ROI, not just compliance.
4. Protect accessibility like brand safety
Just as you’d never risk brand safety in ad placements, don’t risk it in your digital touchpoints. Every update, seasonal campaign, or product drop should be monitored for accessibility. Trust and reputation are too valuable to leave exposed.
The competitive advantage
Rare Beauty’s fragrance launch proved something powerful: when you lead with accessibility, the story writes itself. The loyalty builds authentically, and the momentum flows naturally.
But here’s the opportunity: most brands still don’t get it. They’re treating accessibility as a compliance checkbox instead of the growth strategy it really is.
For marketers, that’s the wake-up call. Accessibility builds loyalty. It enhances brand reputation. It keeps your brand compliant. And it drives measurable growth across marketing efforts.
Rare Beauty showed how accessibility can capture attention at the shelf. The next opportunity is making sure it carries through online. Because when every touchpoint welcomes everyone, every campaign maximizes its impact.
Google is rolling out an update to AI Mode for recipe results that it hopes will make recipe bloggers happy. Google’s Robby Stein said on X, “We’ve heard feedback on recipe results in AI Mode, and we’re making updates to better connect people with recipe creators on the web.”
The changes aim to make it easier to click over to recipe sites, though I am not 100% certain yet whether the recipe summaries turn recipes into AI slop.
“Starting today, when you search for meal ideas like “easy dinners for two,” you can tap on the dish to see links to relevant recipe sites, plus a short overview of the dish to help with inspiration,” Stein added.
What it looks like. Here is a video of it in action:
More recipe details too. Google is also adding more information to the recipe results, including cook time, which Google said its “testers have found useful for deciding on a recipe.”
“We know there’s more work to be done on this, so stay tuned for future updates,” Robby Stein added.
Why we care. Recipe bloggers, and content creators in general, have not been happy that Google’s AI experiences send less traffic than traditional search results do. Here we see Google trying to make changes that encourage more searchers to click from those AI experiences through to bloggers’ websites.
Google is investigating a disruption affecting Google Ad Manager, according to an update posted on the Google Ads Status Dashboard.
The incident began at 13:49 UTC on March 4. By 13:54 UTC, Google said it was reviewing reports that some users could access Ad Manager but weren’t seeing the most up-to-date data.
What’s happening. The issue appears to impact reporting consistency. Specifically, Ad Exchange match rate and Ad Exchange request values are not aligning between Ad Manager’s interactive reports and the legacy reporting query tool (now deprecated).
Why we care. Reporting discrepancies in Google Ad Manager can directly impact how you evaluate performance and optimize campaigns. If Ad Exchange match rates and request data don’t align across reporting tools, it becomes harder to trust the numbers driving pacing, forecasting and revenue decisions.
What it means. Users can still log into Ad Manager, but reporting discrepancies may affect data accuracy — at least temporarily. There’s no indication yet of a full outage, but for publishers and advertisers relying on real-time reporting, mismatched metrics could complicate performance monitoring and optimization decisions.
What’s next. Google says it’s actively investigating and will provide further updates. In the meantime, affected users are advised to monitor the status dashboard and contact support if they’re experiencing issues not listed there.
Google introduced a new availability value in Google Merchant Center — built specifically for vehicle sellers who don’t carry every model on the lot. The new value, “build to order,” lets dealers flag vehicles that aren’t physically in inventory but can be customized and ordered by customers.
What needs to change. Sellers must update two areas: their structured data (set availability to BuildToOrder) and their Merchant Center feed (set availability to build to order). Consistency between structured data and feed submissions is critical to avoid disapprovals.
Instruction on when to use the availability [availability] attribute in GMC
Why we care. Until now, sellers had limited ways to signal that a vehicle wasn’t available for immediate pickup. The new value better reflects how many modern automakers operate — especially direct-to-consumer brands like Tesla and Rivian, where buyers configure features before production. For dealers offering factory orders or custom builds, this means clearer expectations for shoppers — and cleaner data for Google.
The fine print. Vehicles marked “build to order” must have the condition attribute set to “new.” If a listing is marked “used,” it will be disapproved — Google considers build-to-order vehicles to be newly configured, not pre-owned.
Bottom line. If you sell customizable or factory-order vehicles, this update gives you a more accurate way to reflect availability — but only if your feed, structured data and condition fields are properly aligned.
First spotted. Google Shopping specialist Emmanuel Flossie spotted this update and explained how to implement it on his blog.
PPC platforms are asset-hungry. What began as simple text ads and keyword bidding has evolved into an AI-driven ecosystem.
Tools inside Google Ads can now remove backgrounds, generate lifestyle scenes, and even create synthetic humans in minutes. But just because the technology allows it doesn’t mean every brand should use it.
That shift forces PPC advertisers to confront difficult questions:
Are you willing to trade efficiency for authenticity?
How far up the stack should your brand let AI operate?
If clients knew exactly where and how you were using AI, would they trust you, or would they question you?
A brand integrity hierarchy offers a way to navigate those decisions — a four-level framework that helps determine how much AI manipulation your brand, industry, and audience can tolerate.
Why PPC needs its own AI ethics framework
Generic AI ethics guidelines don’t account for the operational realities of paid search. PPC isn’t a brand storytelling channel. It’s a high-volume, high-velocity system that demands constant image production across dozens of audiences, formats, and placements.
You must generate fresh lifestyle imagery at a pace traditional creative workflows can’t sustain.
At the same time, Google and Bing enforce strict policies around accurate product representation, especially in Merchant Center, where even minor visual inaccuracies can trigger disapprovals or account risk.
Layer on top of that the platform pressure. Google Ads added Nano Banana Pro, turning Asset Studio into an AI co-creation environment. Performance Max actively pushes you toward AI-generated backgrounds, variations, and lifestyle images to improve performance. Demand Gen and Merchant Center also now have capabilities to change product images at scale.
Most brands can’t afford the photoshoots required to keep up with this demand, yet the volume and placement of images across channels make them unavoidable if you want to compete.
This combination of policy risk, creative pressure, and platform-promoted tools is unique to PPC — which is exactly why the industry needs its own AI ethics framework.
Level 1 – The core (minimal risk): Technical refinement
PPC context: This level is fully compliant with Google and Microsoft’s “accurate representation” policies. Merchant Center explicitly permits technical edits that don’t alter the product itself. This is the safest zone for regulated industries such as finance, healthcare, legal services, and brands with strict authenticity standards.
Client talk-track: “We’re using AI to make your reality look its best on every screen size. We aren’t changing what the product is, only how it’s displayed.”
Risk assessment: Zero brand risk. Zero policy risk. Maximum consumer trust.
I think about Level 1 the same way I think about working with a graphic designer in Photoshop. You’re not changing the product, the setting, or the truth — you’re simply cleaning up what already exists.
This level is about technical refinement, not creative invention. It’s the equivalent of adjusting lighting, removing dust, fixing a crooked crop, or correcting color balance. Nothing about the image becomes “untrue.” You’re enhancing reality, not altering it.
Level 2 – The inner ring (low risk): Contextual narrative
Definition: AI-generated environment, not AI-generated product.
Permitted activities:
Generative backgrounds (e.g., placing a watch on a mountain backdrop).
Removing visual distractions (e.g., power lines, litter, unrelated objects).
Seasonal or thematic settings (e.g., holiday scenes, office environments).
Generic commodity generation (e.g., coffee beans, grain, raw materials, not branded products).
Google Ads context: Performance Max’s AI background generation is designed for this level. Google allows contextual enhancement as long as the product remains unchanged. This approach is useful for scaling creative variations without expensive location shoots or studio rentals.
Risks:
Cultural mismatch. AI-generated settings may not reflect the target audience’s reality.
Unrealistic or off-brand environments.
Requires human review for brand consistency.
Client talk-track: “We’re using AI to build a world for your product to live in. The product the customer receives is identical to the one in the ad.”
Level 2 sits in an odd psychological space. The manipulations themselves are still low-risk. You’re creating scenes, composites, or enhanced environments the same way a graphic designer would in Photoshop.
Brands have been doing this manually for decades. But the moment AI performs the same task, something shifts. To customers, and even to some advertisers, the exact same edit can feel more artificial simply because an algorithm did it instead of a human.
That perception gap matters.
Even when the output is identical, AI-assisted scene creation can trigger a sense of “this looks fake” that traditional Photoshop work never did. It’s irrational, but it’s real and worth acknowledging at this second tier. The actual risk is still low, but the emotional risk is higher than Level 1.
Level 3 – The outer ring (high risk): Subject augmentation
Definition: Altering the “hero” — the product or the person.
Activities:
Beautification filters on models.
Slimming or reshaping human subjects.
Altering food textures to appear more appealing.
Removing “imperfections” from products.
Making products appear more premium than they are.
PPC industry context: The platforms prohibit misleading or manipulated product imagery. Merchant Center disapprovals often occur at this level. High sensitivity exists in beauty, apparel, food, and health categories, where consumer expectations are tied directly to visual accuracy.
Recent consumer trust studies show that users feel deceived when they discover product images have been significantly altered. This isn’t just a policy concern; it’s also a brand reputation issue.
Half of U.S. adults (51%) believe AI-generated and edited content needs better labeling, CNET reports. One in five (21%) believe AI content should be prohibited on social media with no exceptions.
Risks:
High PR risk (e.g., press call-out moments).
High policy risk (e.g., disapprovals, account suspension).
High consumer trust risk (e.g., returns, negative reviews).
Client talk-track: “This is where we risk the ‘press call-out.’ If we remove a model’s birthmark or make a burger look like a 3D render, we aren’t optimizing — we’re fabricating.”
Risk assessment: High brand risk. High policy risk. Potential for long-term damage to consumer trust.
Level 3 moves into territory where the image no longer reflects the real person or product. And yes, brands have been doing this in Photoshop for years, and they’ve been called out for it just as long. There’s precedent, and there’s backlash.
What changes at Level 3 is scale. AI lets you make edits instantly, repeatedly, and across entire product catalogs or campaigns. The ethical risk isn’t new, but the volume and speed at which AI enables these distortions make the consequences far bigger. A single questionable Photoshop edit is one thing. Hundreds of AI-altered images pushed across every channel is something else entirely.
This is where the risk stops being theoretical and starts becoming reputational — and where paid search teams need a clearly defined stance.
Level 4 – The edge (critical risk): Full fabrication
Definition: Synthetic humans, synthetic products, or fully AI-generated scenes.
Activities:
AI-generated models.
Virtual influencers.
Products that don’t exist.
Entirely fabricated lifestyle scenes with no real-world basis.
PPC context: Synthetic humans are allowed in some formats with proper disclosure, but Merchant Center prohibits listing products that don’t exist. There is a high risk of disapproval for “inaccurate representation.” This level may be acceptable for creative testing or conceptual campaigns, but it’s dangerous as a primary brand identity.
Legal precedents regarding copyright protection for non-human-authored creative works remain murky. Using fully synthetic assets may cause challenges if ownership disputes arise or if synthetic models are mistaken for real individuals without proper disclosure.
Risks:
Maximum brand risk.
Maximum policy risk.
Maximum consumer trust risk.
Potential long-term damage to “trust equity.”
Client talk-track: “This is for high-speed testing or fringe creative. If we use this for our main brand identity, we must be prepared for the ‘inauthentic’ label.”
Risk assessment: Critical brand risk. Critical policy risk. Use with extreme caution and full disclosure.
Level 4 is where AI stops enhancing reality and starts inventing it. The image becomes a construction. While I haven’t personally worked with brands operating at this tier, it’s absolutely where the industry could be headed, and it deserves serious consideration.
Fully fabricated imagery can mislead customers, violate platform policies, and erode trust at scale. When AI creates people, products, or environments from scratch, the line between creative expression and consumer deception becomes razor-thin. The reputational fallout from getting this wrong is far greater than anything in Levels 1 through 3.
This is the highest-risk tier because it asks a fundamental question: Are you still advertising your product or an AI-generated fiction of it?
Brand alignment: Defining your North Star
Not every brand should operate at the same level of the brand integrity scale. Your acceptable AI usage depends on four factors.
1. Define your non-negotiables
Every brand must choose its acceptable level(s) on the scale and document it in a brand AI manifesto for PPC.
Examples:
Dove (authenticity-driven beauty brand): Level 1 only.
Tech-forward DTC brand: Levels 2-3 acceptable with clear disclosure.
Ecommerce aggregator: Levels 1-2 for product listings, Level 3 for lifestyle content.
Action: Create a PPC brand AI manifesto in collaboration with creative, legal, and executive leadership.
2. The press test vs. the policy test
Two critical questions should guide every AI decision:
Policy test: “Will the platform approve this?”
Press test: “Would we be proud if The Verge covered this?”
The press test is the real guardrail. Google’s policies change. Public perception is permanent.
3. Human-in-the-loop protocol
Every AI-assisted asset must be checked for:
Material deception: Does this misrepresent the product or service?
Identity erasure: Does this erase diversity or cultural authenticity?
Cultural hallucinations: Does this AI-generated scene reflect reality or stereotype?
Product accuracy: Does the ad show what the customer will actually receive?
Automated AI generation should never bypass human review, especially in regulated verticals.
4. Align with your customer base
Different audiences have different tolerances for AI manipulation:
Gen Z: Values “perfectly imperfect” authenticity. Responds negatively to over-polished imagery.
B2B: Prioritizes clarity and utility. AI-generated backgrounds are acceptable. Synthetic humans less so.
Retail: Authenticity directly impacts conversion rates. Product accuracy is non-negotiable.
Operationalizing the brand integrity circle inside PPC ads
Creative workflow
Implement a pre-flight checklist for AI-generated assets:
Identify the level: Core, inner ring, outer ring, or edge
Apply the press test: Would we defend this publicly?
Check for bias: Does this asset represent your audience accurately?
Verify product accuracy: Is this what the customer will receive?
Document disclosure: If synthetic humans are used, is this disclosed?
Media workflow
Safe placements for AI-generated assets
Performance Max (with contextual backgrounds).
Demand Gen (lifestyle scenes).
YouTube thumbnails (conceptual creative).
Unsafe placements
Merchant Center product images (Level 1 only).
Regulated verticals (finance, healthcare, legal).
Sensitive categories (beauty, weight loss, medical devices).
Legal workflow
Legal teams should:
Review synthetic human usage for disclosure compliance.
Validate product accuracy claims.
Approve the brand AI manifesto.
Maintain documentation for regulatory audits.
Industry standards and emerging frameworks, such as the Coalition for Content Provenance and Authenticity (C2PA), are establishing transparency protocols for AI-generated media. Monitor these developments and align your practices accordingly.
What the PPC community thinks
Some PPC professionals are already experimenting with the tools discussed in this framework.
Ameet Khabra, owner of Hop Skip Media, tested Nano Banana when it first appeared inside the Google Ads interface. She found the tool useful for ideation and quick edits, but noted that strong results often required highly specific prompts.
That level of prompt detail may be realistic for experienced advertisers, but it’s less likely for many SMBs experimenting with AI-generated assets.
“I think it’s a great tool to use for ideation and potentially quick edits,” Khabra said. “But I would still have a graphic designer creating the final product.”
Even when AI imagery is available, some advertisers remain skeptical of how it appears to audiences.
Julie Friedman Bacchini, owner of Neptune Moon, says AI-generated images often look noticeably artificial.
“I don’t like AI images because they look like AI and that’s off-putting to me,” Bacchini said. “It can be hard to avoid. Even when you’re trying to use stock photos, there are so many AI images on those sites too.”
To understand how people outside the industry view these changes, I also polled the community on Threads.
The sentiment was strikingly consistent: while the industry focuses on efficiency, the public is increasingly wary of fantasy versus reality.
One commenter wrote:
“False advertising. That seems like a pretty big concern. As a consumer, I actually would like to see the real thing I’d be buying.”
Another described the issue more bluntly:
“Bait and switch. Fantasy versus reality. Falsehood versus the truth.”
AI isn’t inherently deceptive. Nor is it inherently transparent. It’s a tool. Like all tools, its ethical impact depends on how it’s used. As PPC experts with access to these technologies and advisory roles with brands, we need a clear point of view to guide these decisions.
The brand integrity scale outlined above provides a structured approach to AI use in PPC, helping you navigate the tension between automation and authenticity. By defining your brand’s position on this spectrum today, you ensure tomorrow’s campaigns are remembered for their resonance.
Adopt ethical AI standards — define your brand AI manifesto, implement the press test, and ensure every AI-generated asset passes human review before it reaches your audience. Your brand’s integrity depends on it.
Google has removed the “design for accessibility” section from within the Understand the JavaScript SEO basics documentation. Google said this was removed because the information was “out of date and not as helpful as it used to be.”
The old text said that using JavaScript for page content “may be hard for Google to see.” But Google now says that has not been true for many years, which is why Google removed the section.
The old section. The old section read:
“Design for accessibility: Create pages for users, not just search engines. When you’re designing your site, think about the needs of your users, including those who may not be using a JavaScript-capable browser (for example, people who use screen readers or less advanced mobile devices). One of the easiest ways to test your site’s accessibility is to preview it in your browser with JavaScript turned off, or to view it in a text-only browser such as Lynx. Viewing a site as text-only can also help you identify other content which may be hard for Google to see, such as text embedded in images.”
Google’s reasoning. Google explained: “The information was out of date and not as helpful as it used to be. Google Search has been rendering JavaScript for multiple years now, so using JavaScript to load content is not ‘making it harder for Google Search.’”
“Most assistive technologies are able to work with JavaScript now as well.”
Why we care. While Google Search can handle JavaScript super well, it is still important for you to double check what Google Search sees by using the URL inspection tool within Google Search Console.
Keep in mind, while Google can handle JavaScript very well and Microsoft Bing likely can as well, many of the new AI engines might not be able to render JavaScript as well as Google or Bing.
In a message sent to API developers, Google said that starting April 1, Customer Match uploads through the Google Ads API will stop working for certain users.
Specifically, developers who haven’t uploaded Customer Match data in the past 180 days using their developer token will no longer be able to do so via the Ads API.
What’s changing. If you fall into that inactive bucket, any attempt to upload Customer Match lists through the Google Ads API after April 1 will fail. Instead, Google wants you to move those workflows to the Data Manager API. The change applies only to Customer Match uploads — all other campaign management and reporting tasks should continue as normal in the Google Ads API.
Why Google says it’s doing this. Google positions the Data Manager API as a more modern, unified data ingestion solution across its platforms, with stronger security protocols. It also includes features not available in the Ads API, such as confidential matching and enhanced encryption — signaling a push to centralize and better secure audience data handling.
Why we care. If you or your developers haven’t touched Customer Match uploads in the last six months, this could catch you off guard. After April 1, 2026, the old workflow simply won’t work — and errors will replace uploads.
The takeaway. Check whether your developer token has been used for Customer Match recently and plan a migration to the Data Manager API now, before Google flips the switch.
First spotted. Paid search specialist Arpan Banerjee first spotted this announcement and posted the message he received from Google on LinkedIn.
Google has long been considered the gold standard for ad spend compared to social platforms. But scale doesn’t equal immunity. Click fraud remains a persistent risk, and the safety of your budget depends entirely on where your ads are running.
While Google Ads offers immense reach, its campaigns aren’t created equal. Some are significantly more exposed to malicious activity than others. To protect your margins, you must understand what constitutes click fraud, where it originates, and how to shield your campaigns.
What are invalid clicks?
Invalid clicks are interactions that lack legitimate consumer intent. Because they aren’t driven by real human interest, they skew performance data and drain budgets without any possibility of conversion. These clicks generally originate from four primary sources:
Botnets: Networks of hijacked devices controlled by a “botmaster.” They generate massive volumes of automated traffic that mimic human behavior to inflate metrics or carry out DDoS attacks.
Click farms: Large groups of low-paid workers or automated scripts tasked with manually clicking ads. They create an illusion of high engagement, misleading brands into believing a campaign is more effective than it truly is.
Ad injection and malware: Malicious software that “injects” unauthorized ads into websites or forcibly redirects users. This hijacks legitimate revenue and erodes consumer trust.
Pixel stuffing and ad stacking: “Invisible” fraud in which ads are served but never seen. Pixel stuffing compresses an ad into a single 1×1 invisible pixel, while ad stacking layers multiple ads on top of each other in a single slot. You pay for thousands of impressions that have zero chance of being viewed.
The average invalid click rate across Google Ads is 11.4%, a recent study by Fraud Blocker found. The figure is climbing.
That upward trend becomes clearer over time. In 2010, the average invalid click rate sat at 5.9%. By 2024, that number jumped to 12.3%. This doubling of fraud is likely driven by the increased sophistication of AI-powered bots and malware that can more effectively bypass basic security filters.
Invalid click rates fluctuate based on your campaign setup. Three key factors typically drive these numbers:
Industry competition: High CPC industries like legal services, insurance, and real estate are primary targets. Competitors or bots may intentionally target these campaigns to exhaust your daily budget.
Targeting parameters: Overly broad keywords or targeting regions known for high botnet activity can inadvertently invite “junk” traffic.
Refinement tools: Negative keywords and audience exclusions act as a shield, reducing the frequency of unintentional clicks.
Campaign hierarchy: Which are the biggest violators?
Not all Google Ads inventory carries the same level of risk. Here’s how campaign types stack up from highest to lowest exposure.
The biggest risk: Google Video Partners
Video Partners show the highest levels of invalid traffic since they reach beyond YouTube to Google’s network of third-party sites.
Many of these sites offer zero control, racking up views from bots or “muted” placements in tiny windows where real people never see them.
Display campaigns: Highly vulnerable
Display ads are often plagued by low-quality “made-for-ads” or AI-built sites.
In some cases, more than half of the clicks on a specific site can be invalid.
While major publishers are safer, the risk across the wider network is uneven and requires constant monitoring.
Shopping and Demand Gen: The automation tax
These campaigns face less malicious fraud, but more automated traffic from price-checking tools, scrapers, and bots.
While not always intended to deplete your budget, these clicks still drain spend and skew optimization data, which is particularly damaging for low-margin businesses.
Performance Max: Hidden exposure
While PMax scales inventory across all Google properties, it spreads risk across the entire ecosystem.
Because it’s difficult to see exactly where traffic originates, even a low percentage of invalid clicks can result in significant wasted spend.
Search: The safest bet
Traditional Search campaigns remain the most secure.
It’s much harder for bots to accurately mimic a human searching for a specific solution.
However, in high CPC industries, even a 2% fraud rate can be financially painful.
How to mitigate the risks
Across the diverse range of industries my clients serve, I’ve identified specific patterns in how fraud manifests. As a result, the best prescription is proactive. Address these vulnerabilities by shifting from broad, automated settings to a more refined, high-intent strategy.
The following table highlights the specific patterns we monitor to lower invalid click rates:
Factor | Higher risk (Aggressive) | Lower risk (Strict)
Location | Global or “Presence or Interest” | “Presence Only” (user is physically there)
Keywords | Broad match / generic terms | Exact match / long-tail phrases
Networks | Including “Search Partners” and “Display” | Google Search Network only
Exclusions | No negative keywords or placement lists | Robust negative lists and app exclusions
Scheduling | 24/7 (bots often spike at night) | Custom schedules aligned with business hours
Here are proactive steps you can take to reduce your exposure to fraud.
Audit placement data: Regularly review where your ads appear. Immediately exclude low-quality apps or sites that show high click-through rates but zero conversions.
Limit AI Max overreliance: Automation is powerful, but set and forget is a recipe for wasted spend. Maintain manual guardrails on your automated campaigns.
Review refunds: Google does issue refunds for detected fraud, but subtle cases often slip through. Compare your internal lead data against Google’s click data to find discrepancies.
Google is far from a uniform entity. It’s a diverse ecosystem of distinct environments where risk levels can vary by as much as 400%.
Prioritizing high-quality traffic results in superior data integrity, more precise optimization, and reduced acquisition costs. In today’s market, the strategic structure of your campaigns is just as vital to your success as the size of your budget.
One of the most profitable Google Ads targeting tactics is retargeting: showing ads to people who are already familiar with your business. But if you still think that “retargeting” means a Display campaign chasing users around the web with banner ads, you’re missing out on how “Your data segments” actually function today.
Let’s explore how you can leverage your proprietary audience data in new ways, and what mistakes to avoid in 2026 and beyond.
What are “Your data segments” in Google Ads?
Retargeting means showing ads to people who are already familiar with your business. Google uses the euphemistic name “Your data segments” to refer to all the retargeting lists in your account.
What types of retargeting can you do in Google Ads?
A variety of different retargeting methods are available in Google Ads. They mirror what you’ll find on other ad platforms like Meta, LinkedIn, or TikTok. I find it helpful to group them into four categories:
Website Visitors: This is the standard one — people who have visited your website. You collect this data using Google Tag Manager or Google Analytics.
App Users: If you have a mobile app, you can pull data from Firebase or other third-party analytics tools into Google Ads for retargeting.
Customer Match: This is the “holy grail” of retargeting. You take your business’s first-party data (email addresses, phone numbers, etc.) and upload it directly to Google Ads, so that Google can find those same users across its platforms.
Content Engagers: People who have interacted with your content on Google-owned properties. Examples include a segment of users who have watched your YouTube videos, or a segment of users who have clicked through to your site from search results (this is called the Google Engaged Audience, which we explored in another article).
Should you upload “your data segments” if you’re not planning to do retargeting?
Many practitioners overlook this detail: your data segments aren’t just about ad targeting.
Even if you don’t have a single retargeting campaign running, the mere existence of these lists in your account provides a vital signal for Smart Bidding and Optimized Targeting.
For example, when you upload a customer list, you’re telling Google, “These are the people who actually buy from me.” Even if you never add that list to your audience signal in Performance Max, Google will still use it to understand likely converters and adjust bidding/targeting accordingly.
Similarly, let’s say you only run Search and Shopping campaigns, and you use Target ROAS bidding. When Google is trying to set the right bid for the right user at the right time, their presence (or lack thereof) on a “your data segment” list is one of many signals incorporated into that bidding calculation.
How can you use retargeting lists in Google Ads?
Different campaign types handle audience data differently. It’s important to know the distinction so you can plan your targeting strategy accordingly.
Search, Shopping, Display: In these campaigns, you have three options with Your data segments: Targeting, Observation, Exclusion.
Targeting means your ads will only show if the user is a member of your data segment
Observation allows you to see your campaign data segmented by list, without narrowing your reach
Exclusion means your ads will only show if the user is NOT a member of your data segments.
Performance Max and App Campaigns: In these AI-powered campaigns, you can include Your data segments as part of your audience signal. Performance Max recently added the ability to exclude Your data segments as well.
Demand Gen: In Demand Gen, you can Target and Exclude Your data segments, but there is no “Observation” option.
If you’re new to retargeting, I find Demand Gen the best place to start. It’s built for visual storytelling and works well with the Google Engaged Audience or basic website visitor lists.
If you have some experience with retargeting campaigns, you might want to try New Customer Acquisition or Customer Retention mode in PMax or Shopping, as these are powered by Your data segments.
What’s the biggest retargeting mistake to avoid?
Over-segmenting. I know it can be tempting to create 50 different lists: “People who visited the cart on a Tuesday,” or “People who looked at three pages but didn’t click the ‘About’ section.”
Unless you’re spending six figures or more every month, this level of granularity doesn’t help, and may actually hurt your campaigns. Google’s AI needs data density to learn. When you slice your audience into tiny slivers, you don’t have enough “matched records” for the system to optimize.
Upload your unique data to Google Ads, keep your strategy simple, and let the bidding algorithms do the heavy lifting in driving returning customers for you.
This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.
Lately, I’ve been spending most of my day inside Cursor running Claude Code. I’m not a developer. I run a digital marketing agency. But Claude Code within Cursor has become the fastest way for me to handle many tasks I want to do, including pulling and analyzing data from Google Search Console, GA4, and Google Ads.
The setup takes about an hour. After that, you can ask things like “which keywords am I paying for that I already rank for organically?” and get an answer in seconds instead of spending an afternoon with spreadsheets. (I wouldn’t have been the one spending an afternoon with spreadsheets anyway, but now nobody has to.)
Here’s the step-by-step process I developed while analyzing data for our agency clients. If this looks too technical, paste the URL of this article into Claude and ask it to walk you through it step by step.
What you’re building
What you end up with is a project directory where Claude Code has access to Python scripts that pull live data from your Google APIs. You fetch the data, it lands in JSON files, and then you just talk to it.
No dashboards to build. No Looker Studio templates to maintain. You’re basically giving Claude Code the same data your team would look at, and letting it do the cross-referencing.
seo-project/
├── config.json # Client details + API property IDs
├── fetchers/
│ ├── fetch_gsc.py # Google Search Console
│ ├── fetch_ga4.py # Google Analytics 4
│ ├── fetch_ads.py # Google Ads search terms
│ └── fetch_ai_visibility.py # AI Search data
├── data/
│ ├── gsc/ # Query + page performance
│ ├── ga4/ # Traffic by channel, top pages
│ ├── ads/ # Search terms, spend, conversions
│ └── ai-visibility/ # AI citation data
└── reports/ # Generated analysis
Everything runs through a Google Cloud service account. One service account covers both GSC and GA4, which is nice. Google Ads needs its own OAuth setup, which is less nice but manageable.
Service account (for GSC + GA4)
Create a project in Google Cloud Console.
Enable the Search Console API and Google Analytics Data API.
Create a service account under IAM & Admin > Service Accounts.
Download the JSON key file.
Add the service account email as a user in your GSC property (read access is enough).
Add it as a Viewer in your GA4 property.
The service account email looks like your-project@your-project-id.iam.gserviceaccount.com. You’ll add this email address to each client’s GSC and GA4 properties, same way you’d add any team member.
For agencies: one service account works across all clients. Add it to each property, update a config file with the property IDs, and you’re set.
Google Ads authentication
Google Ads is different. You need:
A developer token from the Google Ads API Center (under Tools & Settings > Setup > API Center).
OAuth 2.0 credentials from Google Cloud (not the service account, a separate OAuth client).
A one-time browser authentication to generate a refresh token.
The developer token requires an application. For agency use, describe it as “automated reporting for marketing clients.” Approval usually takes 24-48 hours.
If you’re using a Manager Account (MCC), one developer token and one refresh token cover all sub-accounts. You just change the customer ID per client.
If you don’t have API access or MCC, maybe it’s a new client and you’re still getting set up, you can skip the API entirely. Download 90 days of keyword and search terms data as CSVs from the Google Ads UI, drop them in your data directory, and Claude Code will work with those just as well. That’s how we handle clients who aren’t in our MCC yet.
Install the Python dependencies
All the examples below assume you’re working in the terminal on a Mac or Linux machine. If you’re on Windows, the easiest path is Windows Subsystem for Linux (WSL).
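The exact packages depend on which scripts Claude Code ends up writing for you, but a typical set for the fetchers described here would be something like:
pip install google-api-python-client google-auth google-analytics-data google-ads requests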
Each fetcher is a short Python script that authenticates, pulls data, and saves JSON. I didn’t write these from scratch. I described what I wanted to Claude Code and it wrote them.
One thing that genuinely surprised me: I never had to read the API documentation. Not for GSC, GA4, or Google Ads.
I’d say something like “I want to pull the top 1,000 queries from Search Console for the last 90 days,” and Claude Code would figure out the authentication, endpoints, and query parameters. It already knows these APIs. You just tell it what data you want.
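For reference, here is a minimal sketch of what a fetch_gsc.py along those lines might look like. The key file name, property URL, output path, and date range are placeholders you would swap for your own, and the script Claude Code writes for you will differ:
import json
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Authenticate with the service account key downloaded from Google Cloud
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account-key.json", scopes=SCOPES
)
gsc = build("searchconsole", "v1", credentials=creds)

# Top 1,000 queries for the last 90 days for one property
body = {
    "startDate": "2025-09-01",  # swap in a rolling 90-day window
    "endDate": "2025-11-30",
    "dimensions": ["query"],
    "rowLimit": 1000,
}
response = gsc.searchanalytics().query(
    siteUrl="sc-domain:example.com", body=body
).execute()

# Save the rows as JSON so Claude Code can read them later
with open("data/gsc/queries.json", "w") as f:
    json.dump(response.get("rows", []), f, indent=2)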
Google Ads uses something called Google Ads Query Language (GAQL). If you’ve ever written a SQL query, this will look familiar. If you haven’t, don’t worry, Claude Code will write it for you:
from google.ads.googleads.client import GoogleAdsClient

# Authenticate from the google-ads.yaml created during the OAuth setup
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      search_term_view.search_term,
      metrics.impressions,
      metrics.clicks,
      metrics.cost_micros,
      metrics.conversions
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.impressions DESC
"""

# Stream the rows for one account (replace with the client's customer ID)
response = ga_service.search(customer_id="1234567890", query=query)
for row in response:
    # cost_micros is in millionths of the account currency
    print(row.search_term_view.search_term, row.metrics.clicks, row.metrics.cost_micros / 1_000_000)
This pulls the core of the same data as the Search Terms report you’d download from the Google Ads UI: impressions, clicks, cost, and conversions for each search term. (Match type, campaign, and ad group can be pulled as well by adding those fields to the query.)
So now you’ve got JSON files from GSC, GA4, and Ads sitting in your project directory. Claude Code can read all of them at once and answer questions that would normally mean a lot of tab-switching and VLOOKUP work.
The paid-organic gap analysis
The single most valuable question I’ve found:
“Compare the GSC query data against the Google Ads search terms. Find keywords where we’re paying for clicks but already have strong organic positions. Also, find keywords where we’re spending on ads with zero organic visibility. Those are content gaps.”
When I ran this for a higher education client, it identified:
2,742 search terms with wasted ad spend (impressions, zero clicks).
351 opportunities to reduce paid spend on terms where organic was already strong.
33 high-performing organic queries that paid could amplify.
41 content gaps where paid was the only presence (no organic).
That analysis took about 90 seconds. The equivalent manual process (downloading CSVs from GSC and Ads, VLOOKUPing across them, categorizing the overlaps) takes most of an afternoon.
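Under the hood, the cross-referencing is conceptually simple. Here is a rough sketch of the same comparison done by hand in Python (file names and JSON shapes are assumptions based on how the fetchers above save their output, so adjust to whatever your scripts actually write):
import json

with open("data/gsc/queries.json") as f:
    gsc_rows = json.load(f)      # e.g. [{"keys": ["query"], "position": 2.1, ...}, ...]
with open("data/ads/search_terms.json") as f:
    ads_rows = json.load(f)      # e.g. [{"search_term": "...", "clicks": 12, ...}, ...]

organic_queries = {r["keys"][0] for r in gsc_rows}
strong_organic = {r["keys"][0] for r in gsc_rows if r.get("position", 99) <= 3}
paid_terms = {r["search_term"] for r in ads_rows}

overlap = paid_terms & strong_organic        # candidates for trimming paid spend
content_gaps = paid_terms - organic_queries  # paid-only terms with no organic presence

print(f"{len(overlap)} paid terms where organic is already strong")
print(f"{len(content_gaps)} paid terms with zero organic visibility")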
Other questions worth asking
Once you have GSC + GA4 + Ads data loaded:
“Which pages get the most impressions in GSC but have low CTR? What’s the traffic from GA4 for those same pages?” (identifies meta description/title opportunities)
“What are the top 20 organic queries by impression that we’re not running ads against?” (paid amplification candidates)
“Group the GSC queries by topic cluster and show me which clusters have the most impressions but lowest average position.” (content investment priorities)
“Which pages in GA4 have high bounce rates but strong GSC positions? Those might need content improvement.”
Claude Code isn’t doing anything a human couldn’t do with spreadsheets. It’s doing it in seconds, and you can follow up with another question without rebuilding the whole analysis from scratch.
Step 5: Add AI visibility tracking
Traditional SERP positions aren’t the whole picture anymore. Between Google’s AI Overviews, AI Mode, Copilot, ChatGPT, and Perplexity, you need to know whether AI systems are citing your content.
This is especially true in verticals like higher education, where prospective students increasingly start their research in AI search tools.
If you have a tracking platform
Tools like Scrunch, Semrush’s AI Visibility toolkit, or Otterly.ai will track your brand’s presence across ChatGPT, Perplexity, Gemini, Google AI Overviews, and Copilot.
Export the data as CSV or JSON and drop it in your data directory. Claude Code can then cross-reference AI citations against your GSC and Ads data.
When I did this for our own site, we discovered two blog posts competing for the same AI citations on GEO-related queries.
One had 12 times as many Copilot citations as the other, despite both targeting similar intent. That led to a consolidation decision we wouldn’t have made based solely on traditional rank data. This kind of AI search cannibalization is something most SEO teams aren’t yet checking for.
If you don’t have a tracking platform
You don’t need an enterprise tool to start. There are several APIs that let you pull AI search data directly, and the costs are lower than you’d think.
DataForSEO AI Overview API: The most accessible option. Pay-as-you-go at about $0.01 per query, with a $50 minimum deposit. You send a keyword, and it returns the full AI Overview content from Google SERPs, including which URLs are cited. It also has a separate LLM Mentions API that tracks how LLMs reference brands across platforms.
import requests

# DataForSEO AI Overview — simplified example
# auth_headers: your HTTP Basic auth header, built from your DataForSEO API login and password
auth_headers = {"Authorization": "Basic <base64 of login:password>"}

payload = [{
    "keyword": "best higher education marketing agencies",
    "location_code": 2840,  # US
    "language_code": "en"
}]
response = requests.post(
    "https://api.dataforseo.com/v3/serp/google/ai_overview/live/advanced",
    headers=auth_headers,
    json=payload
)
# Returns: AI Overview text, cited URLs, references
SerpApi: Starts at $75/month for 5,000 searches. Returns structured JSON for the full Google SERP, including AI Overviews. Good documentation, Python client library, and a free tier for testing.
SearchAPI.io: Similar to SerpApi, starts at $40/month. Also offers a separate Google AI Mode API that captures AI-generated answers with citations.
Bright Data SERP API: Pay-as-you-go starting around $1.80 per 1,000 requests. Set brd_ai_overview=2 to increase the likelihood of capturing AI Overviews. Also has an MCP server if you want tighter agent integration.
Bing Webmaster Tools: Free, and the only first-party AI citation data available from any major platform right now. Shows how often your content appears as a source in Copilot and Bing AI responses, with page-level data and the “grounding queries” that triggered citations. No API yet (Microsoft says it’s on the backlog), but you can export CSVs.
DIY: Direct LLM API Calls: The cheapest approach for small-scale monitoring. Write a Python script that sends a consistent set of prompts to the OpenAI, Anthropic, and Perplexity APIs, then parses responses for brand mentions (see the sketch after this list). Perplexity’s Sonar API is especially useful here because it includes web citations in responses, and citation tokens are free. Total cost: under $20/month for a modest prompt library.
The general pattern: Pick one SERP API for Google AI Overview data, use Bing Webmaster Tools (it’s free), and supplement with direct LLM API calls or a dedicated tracker if budget allows.
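Here is a minimal sketch of the DIY approach mentioned above, using the OpenAI Python client. The model name, prompts, and brand list are placeholders; Perplexity’s Sonar API is broadly OpenAI-compatible, so you can usually reuse the same loop by changing the base URL and model:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
BRANDS = ["Example Agency", "example.com"]   # the names you want to track
PROMPTS = [
    "What are the best higher education marketing agencies?",
    "Who should I hire for enrollment marketing?",
]

for prompt in PROMPTS:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.choices[0].message.content or ""
    mentioned = [b for b in BRANDS if b.lower() in text.lower()]
    # Log the prompt, date, and mentions to build a simple visibility trend over time
    print(prompt, "->", mentioned or "no mention")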
The workflow in practice
So what does this actually look like on a Tuesday morning?
Setup: Once per client, ~15 minutes
Add service account email to client’s GSC and GA4
Get their Google Ads customer ID (or export search terms if they’re not in the MCC)
Create a config.json with property IDs
Monthly data pull: ~5 minutes
python3 run_fetch.py --sources gsc,ga4,ads
Analysis (as needed): Open Claude Code in the project directory and ask questions. The data is right there.
Output: Claude Code generates a markdown report. When I need something client-facing, I push it to Google Docs using a separate tool I built called google-docs-forge. It converts markdown into a properly formatted Google Doc, so the output doesn’t look like it came from a terminal.
The whole process takes about 35 minutes for a new client: setup, fetch, analysis. Monthly refreshes take about 20 minutes, including analysis time. Compare that to the manual alternative of downloading CSVs from three different platforms, cross-referencing in spreadsheets, and writing up findings.
What this doesn’t replace
I don’t want to oversell this. Claude Code is reading your data and finding patterns across sources faster than you can manually. It’s not telling you what to do about those patterns. You still need someone who understands the client’s business, their competitive situation, and what they’re actually trying to accomplish. The tool finds the interesting data. The strategist decides what to do with it.
You also need to verify what it gives you. LLMs can hallucinate, and that includes data analysis. I’ve seen Claude Code confidently report a number that didn’t match the JSON file. It’s rare, but it happens.
Treat the output like you’d treat work from a new analyst: trust but verify, especially before anything goes to a client. Spot-check the numbers against the source data. If something looks too clean or too dramatic, go look at the raw file.
It also doesn’t replace your existing platforms. If you need historical trend data, automated alerts, or a client-facing dashboard, you still want a Semrush or an Ahrefs. What this gives you is the ability to ask ad hoc questions across multiple data sources, which none of those platforms does well on their own.
And the GEO/AI visibility tracking space is still immature. The data from AI citation tools is directionally useful. Wind sock, not GPS. Google doesn’t publish AI Overview or AI Mode citation data through any official API, so every third-party tool is approximating. Bing’s Copilot data is the most reliable because it’s first-party, but it only covers the Microsoft ecosystem.
Start with GSC only. It’s the easiest API to connect (service account, read-only access, free). Fetch your queries and pages for the last 90 days. Ask Claude Code to group queries by topic, identify page-2 ranking opportunities, and find pages with high impressions but low CTR.
Add GA4 second. Same service account. Now you can ask cross-source questions: “Which pages rank well in GSC but have high bounce rates in GA4?”
Add Google Ads when you’re ready. The OAuth setup is more involved, but the paid-organic gap analysis alone justifies the effort.
Layer in AI visibility last. Start with Bing Webmaster Tools (free) and one SERP API for AI Overview data.
Each layer builds on the last. You don’t need all four to get value. The GSC + GA4 combination alone surfaces insights that take hours to find manually.
Chrome 146 has introduced an early preview of WebMCP behind a flag. WebMCP (Web Model Context Protocol) is a proposed web standard that exposes structured tools on websites, showing AI agents exactly what actions they can take and how to execute them.
Here’s some context around what that actually means.
The internet was originally built for humans. We designed buttons, dropdowns, and forms for people to read, understand, and use. But now there’s a new type of user emerging: AI agents. Soon, they’ll be able to complete registrations, buy tickets, and take any action needed to complete a goal on a website.
Right now, AI agents face a major challenge. They must crawl websites and reverse-engineer how everything works. For example, to book a flight, an agent needs to identify the right input fields, guess the correct data format, and hope nothing breaks in the process. It’s inefficient.
The WebMCP standard will solve this issue by exposing the structure of these tools so AI agents can understand and perform better.
A deeper understanding of WebMCP
Let’s say you need to book a flight.
Without WebMCP: An AI agent would crawl the page looking for a button that would say something like “Book a Flight” or “Search Flights.” The agent reads the screen, guesses which fields need what information, and hopes the form accepts its input.
With WebMCP: Instead of thinking “I need to find a ‘Book a Flight’ button,” the agent thinks “I need to call the bookFlight() function with clear parameters (date, origin/destination, passengers) and receive a structured result.” The agent doesn’t search for visual elements. It calls a function, just like developers do when working with APIs.
How WebMCP works
WebMCP provides JavaScript APIs and HTML form annotations so AI agents know exactly how to interact with the page’s tools. It works using basically three steps:
Discovery: What tools does this page support? For example: checkout, bookFlight, searchProducts.
JSON Schemas: The exact definitions of what inputs are expected and what outputs come back.
State: Tools can be registered and unregistered based on the current page state. For example, a checkout tool only appears when items are in the cart, or a bookFlight tool becomes available after a user selects dates. This ensures agents only see relevant actions for the current situation.
Your website exposes a list of actions, each one describing what it does, what inputs it accepts, what outputs it returns, and what permissions it requires.
AI agents are quickly becoming part of our daily workflows. Soon we won’t book our own flights, fill out forms, or publish content — we’ll ask an AI to do it for us.
But right now, AI agents struggle to interact with websites reliably. They currently use two imperfect approaches:
Automation (fragile and unreliable): In this approach, the AI agent reads the screen, clicks buttons, and types into fields, just like a human would. However, websites are constantly updated. Button colors change. Field names change. Classes change. A/B tests create different versions of the same page. What worked yesterday may not work today.
APIs (limited availability): APIs provide a direct, structured way for agents to interact with websites. The problem is that most websites don’t have public APIs, and those that do are often missing key features or data that’s available through the user interface.
WebMCP: The missing middle ground
WebMCP fills the gap between those two approaches. It lets websites expose actions in a way that matches how the web actually works. Think of it as making your existing web interface readable by AI agents — without the fragility of UI automation or the overhead of maintaining a separate API.
The growth opportunity
Just as websites optimized for search engines in the 2000s, WebMCP represents the next evolution: optimization for AI agents. Early adopters who implement WebMCP could gain a competitive advantage as AI-powered search and commerce become mainstream.
But this isn’t just about SEO anymore. It’s about seizing a broader growth opportunity. SEO, AEO (AI engine optimization), and agentic optimization are all knowledge areas with one common goal: improving revenue. WebMCP opens the door to being not just discoverable, but directly actionable by the agents your future customers will use.
Real-world examples
To make this more concrete, here are some scenarios where WebMCP changes the game:
B2B scenarios
Quote and proposal requests: Industrial suppliers expose a request_quote tool. A buyer’s agent can submit identical RFQs across multiple vendors without adapting to each site’s unique form.
Vendor qualification filtering: Service providers expose a search_capabilities tool. Before making contact, a procurement agent can query multiple vendors to filter for specific certifications or geographic coverage.
Freight and logistics rate shopping: Carriers expose get_shipping_rate and schedule_pickup tools. A logistics agent can query multiple carriers and book the best option without navigating unique quoting interfaces.
Commercial insurance quoting: Insurance carriers expose request_policy_quote tools. A broker’s agent can submit the same business information across multiple carriers to compare coverage options without re-entering details on each insurer’s portal.
Wholesale and distribution ordering: Distributors expose check_inventory and get_volume_pricing tools. A purchasing agent can query stock levels and pricing across multiple distributors and place an order with whichever offers the best combination of availability and price.
B2C scenarios
Multi-retailer price comparison: Retailers expose search_products and check_price tools. A consumer’s agent can query multiple stores, compare options, and add items to the cart at whichever retailer offers the best deal — all in seconds.
Restaurant discovery and booking: Restaurants expose browse_menu and reserve_table tools. An agent can find availability and search menus across multiple restaurants before booking a table at the best match for your preferences.
Local service provider quoting: Service businesses expose check_availability and request_quote tools. If you need a plumber or electrician, for example, an agent can collect quotes from multiple providers without you having to fill out intake forms on each company’s website.
Travel planning across providers: Airlines, hotels, and car rentals expose search_availability and book_reservation tools. An agent can query multiple providers directly and assemble a complete itinerary without an aggregator like Expedia or Kayak.
Real estate search and tour scheduling: Listing sites expose search_properties and schedule_showing tools. A buyer’s agent can search across different platforms and book property tours without navigating unique forms.
WebMCP gives developers two ways to make their websites agent-ready:
Imperative API
Declarative API
The Imperative API
The Imperative API lets developers define tools programmatically through a new browser interface called navigator.modelContext. You register a tool by giving it a name, a description, an input schema, and an execute function.
Here’s a simplified sketch of what registering an ecommerce product search tool might look like (the tool and field names are illustrative, based on the current proposal, and may change):
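// Illustrative only: registering a product search tool via the proposed
// navigator.modelContext API. Field names follow the current explainer and may change.
navigator.modelContext.registerTool({
  name: "search_products",
  description: "Search the store catalog and return matching products.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Free-text search terms" },
      maxPrice: { type: "number", description: "Optional price ceiling in USD" }
    },
    required: ["query"]
  },
  async execute({ query, maxPrice }) {
    // Reuse the site's existing search endpoint and hand structured results back to the agent
    const params = new URLSearchParams({ q: query });
    if (maxPrice) params.set("max_price", String(maxPrice));
    const res = await fetch(`/api/search?${params}`);
    return await res.json(); // the exact result envelope is still being defined by the spec
  }
});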
The agent sees the tool, understands what it does, knows what input it needs, and can call it directly.
Developers can register tools one at a time with registerTool(), replace the full set with provideContext() (useful when your app’s state changes significantly), or remove them with unregisterTool() and clearContext().
The Declarative API
The Declarative API transforms standard HTML forms into agent-compatible tools by adding a few HTML attributes.
Here’s a simplified sketch of what a restaurant reservation form might look like (the attribute values are illustrative and the attribute names may change as the proposal evolves):
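<!-- Illustrative only: attribute names below follow the current proposal and may change. -->
<form action="/reserve" method="post"
      toolname="reserve_table"
      tooldescription="Book a table by specifying date, time, and party size"
      toolautosubmit>
  <label>Date <input type="date" name="date" required></label>
  <label>Time <input type="time" name="time" required></label>
  <label>Party size <input type="number" name="party_size" min="1" max="12" required></label>
  <button type="submit">Reserve</button>
</form>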
By adding toolname and tooldescription to a form, the browser automatically translates its fields into a structured schema that AI agents can interpret. When an agent calls the tool, the browser populates the fields and, if toolautosubmit is set, it submits the form automatically.
The big takeaway: Existing websites with standard HTML forms can become agent-compatible with minimal code changes.
Implementation best practices from Google’s documentation
Use specific action verbs: Name tools based on what they actually do. Use create-event if the tool immediately creates an event. Use start-event-creation-process if it redirects the user to a UI form. Clear naming helps agents choose the right tool for the task.
Accept raw user input: Don’t ask the agent to perform calculations or transformations. If a user says “11:00 to 15:00,” the tool should accept those strings, not require the agent to convert them to minutes from midnight.
Validate in code, not just in schema: Schema constraints provide guidance, but they’re not foolproof. When validation fails, return descriptive error messages so the agent can self-correct and retry.
Keep tools atomic and composable: Each tool should do one specific thing. Avoid creating overlapping tools with subtle differences. Let the agent handle the workflow logic.
Return after the UI updates: When a tool completes an action, make sure the UI reflects that change before returning. Agents often verify success by checking the updated interface, then use that information to plan their next step.
How to try WebMCP today
WebMCP is currently available as an early preview behind a feature flag in Chrome 146. It’s not production-ready yet, but developers and curious teams can already experiment with it.
Open Chrome and navigate to chrome://flags/#enable-webmcp-testing
Find the “WebMCP for testing” flag and set it to “Enabled”
Relaunch Chrome to apply the changes
Once the flag is enabled, you can install the Model Context Tool Inspector Extension to see WebMCP in action. The extension lets you inspect registered tools on any page, execute them manually with custom parameters, or test them with an AI agent using Gemini API support. Google also has a live travel demo where you can see the full flow, from discovering tools to invoking them with natural language.
What all this means going forward
In the same way that mobile-first design changed how we build websites, agent-ready design could define the next generation of web applications.
That said, WebMCP is still in early preview. The final version will likely change. The Chrome team is actively discussing rolling back parts of what they’ve been building with the embedded LLM API (like summarization and other features). So what we’re seeing now is a starting point, not the finished product.
WebMCP is simply the next chapter in AI optimization. While aiming for discoverability and citation is still essential, WebMCP opens up a new opportunity for brands — making entire web experiences and functionality accessible to AI agents. It’s not just about being found or cited. It’s about being usable by the next generation of web users.
Start experimenting with WebMCP, but don’t bet your roadmap on it yet. The standard is evolving, and early adopters will have an advantage, but only if they stay flexible as the standard matures.
The websites that win in an agent-driven web will be those that make it easy for AI to complete tasks, not just find information.
Google posted a new help document on “Things to know about Google’s web crawling.” While many of those “things to know” are already known, Google created the document to provide “basic educational information about crawling to better highlight various resources about crawling that are available to site owners.”
The document currently contains nine items, including:
Frequent crawling is a good sign! Google wrote,
“If we’re crawling your site a lot, it’s an indication your pages have fresh or highly relevant content that people want to find, and that our systems are recognizing that demand. Online shopping is a great example: we crawl ecommerce sites often so that our results will display retailers’ most up-to-date prices, promotions, and inventory status.”
Other items. Here is the full list, but make sure to check out the help document to read it all. None of it is new, but it’s a helpful refresher:
What is crawling? In short, crawling is how Google “sees” the web
We have many crawlers; they each have important jobs
We perform repeat crawls to find the latest updates and to provide the freshest search results
Frequent crawling is a good sign!
Google’s crawling has grown over time as pages have become more complex
We optimize crawling automatically
Google crawlers never go into paywall or subscription content without permission
Site owners have control over what gets crawled, and how
Our standard crawlers always respect websites’ choices about how their content is accessed and used
Why we care. Crawling is a fundamental requirement for SEO and being found in Google Search and other Google surfaces. This help document might help you quickly understand how Google crawling works and what you can aim to do to improve your site’s crawlability.
Google will begin enforcing a minimum daily budget for Demand Gen campaigns starting April 1, 2026.
What’s happening: The Google Ads API will require a minimum daily budget of $5 USD (or local equivalent) for all Demand Gen campaigns. The change is designed to help campaigns move through the “cold start” phase with enough spend for Google’s models to learn and optimize effectively. The update will roll out as an unversioned API change, applying across all buying paths.
Technical details:
In API v21 and above, campaigns set below the threshold will trigger a BUDGET_BELOW_DAILY_MINIMUM error, with additional details available in the error metadata.
In API v20, advertisers will receive a generic UNKNOWN error, with the specific validation failure referenced in the unpublished error code field.
The rule applies when modifying budgets, start dates, or end dates in ways that push daily spend below the $5 floor — covering both daily and flighted budgets.
Impact on existing campaigns. Current Demand Gen campaigns running below the minimum will continue serving. However, any future edits to budgets or scheduling will require compliance with the new floor.
Why we care. For advertisers and developers, this adds a new compliance layer to campaign management workflows. Systems will need updating to catch and handle the new validation errors before deployment.
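As a rough sketch of what that updating could look like, a pre-flight check can reject sub-minimum budgets before a change is ever submitted. The function and constant names below are hypothetical; the only assumption carried over from the Google Ads API is that budgets are expressed in micros (1,000,000 micros per currency unit).

```python
# Hypothetical pre-flight check for the new Demand Gen budget floor.
# Assumes budgets are expressed in micros (1,000,000 micros = 1 currency unit),
# as the Google Ads API does; the names here are illustrative, not from the API.

MIN_DAILY_BUDGET_MICROS = 5_000_000  # $5 USD (or local equivalent)

def validate_demand_gen_budget(daily_budget_micros: int) -> None:
    """Raise before submitting a change that would fall below the $5 floor."""
    if daily_budget_micros < MIN_DAILY_BUDGET_MICROS:
        raise ValueError(
            f"Daily budget of {daily_budget_micros} micros is below the "
            f"{MIN_DAILY_BUDGET_MICROS}-micro minimum; the API would reject it with "
            "BUDGET_BELOW_DAILY_MINIMUM (v21+) or a generic UNKNOWN error (v20)."
        )

validate_demand_gen_budget(7_500_000)  # passes: $7.50 per day
```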
The bottom line. Google is standardizing a minimum investment threshold for Demand Gen — prioritizing performance stability, while requiring advertisers to adjust budgets and automation accordingly.
AI recommendations are inconsistent for some brands and reliable for others because of cascading confidence: entity trust that accumulates or decays at every stage of an algorithmic pipeline.
The mechanics behind that shift sit inside the AI engine pipeline. Here’s how it works.
The AI engine pipeline: 10 gates and a feedback loop
Every piece of digital content passes through 10 gates before it becomes an AI recommendation. I call this the AI engine pipeline, DSCRI-ARGDW, which stands for:
Discovered: The bot finds you exist.
Selected: The bot decides you’re worth fetching.
Crawled: The bot retrieves your content.
Rendered: The bot translates what it fetched into what it can read.
Indexed: The algorithm commits your content to memory.
Annotated: The algorithm classifies what your content means across dozens of dimensions.
Recruited: The algorithm pulls your content to use.
Grounded: The engine verifies your content against other sources.
Displayed: The engine presents you to the user.
Won: The engine gives you the perfect click at the zero-sum moment in AI.
After “won” comes an 11th gate that belongs to the brand, not the engine: served. What happens after the decision feeds back into the AI engine pipeline as entity confidence, making the next cycle stronger or weaker.
DSCRI is absolute. Are you creating a friction-free path for the bots?
ARGDW is relative. How do you compare to your competition? Are you creating a situation in which you’re relatively more “tasty” to the algorithms?
Both sides of the AI engine pipeline are sequential. Each gate feeds the next.
Content entering DSCRI through the traditional pull path passes through every gate. Content entering through structured feeds or direct data push can skip some or all of the infrastructure gates entirely, arriving at the competitive phase with minimal attenuation.
Skipped gates are a huge win, so take that option wherever and whenever you can. You “jump the queue” and start at a later stage without the degraded confidence of the previous ones. That changes the economics of the entire pipeline, and I’ll come back to why.
Why the four-step model falls short
The four-step model the SEO industry inherited from 1998 — crawl, index, rank, display — collapses five distinct infrastructure processes into “crawl and index” and five distinct competitive processes into “rank and display.”
It might feel like I’m overcomplicating this, but I’m not. Each gate has nuance that merits its standalone position. If you have empathy for the bots, algorithms, and engines, remove friction, and make the content digestible, they’ll move you through each gate cleanly and without losing speed.
Each gate is an opportunity to fail, and each point of potential failure needs a different diagnosis. The industry has been optimizing a four-room house when it lives in a 10-room building, and the rooms it never enters are the ones where the pipes leak the worst.
Most SEO advice operates at the selection, crawling, and rendering gates. Most GEO advice operates at “displayed” and “won,” which is why I’m not a fan of the term.
Most teams aren’t yet working on annotation and recruitment, which are actually where the biggest structural advantages are created.
Three audiences you need to cater to and three acts you need to master
The AI engine pipeline has an entry condition — discovery — and nine processing gates organized in three acts of three, each with a different primary audience.
Act I: Retrieval (selection, crawling, rendering)
The primary audience is the bot, and the optimization objective is frictionless accessibility.
Act II: Memory (indexing, annotation, recruitment)
The primary audience is the algorithm, and the optimization objective is being worth remembering: verifiably relevant, confidently annotated, and worth recruiting over the competition.
Act III: Execution (grounding, display, won)
The primary audience is the engine and, by extension, the person using the engine, where the optimization objective is being convincing enough that the engine chooses and the person acts.
Frictionless for bots, worth remembering for algorithms, and convincing for people. Content must pass every machine gate and still persuade a human at the end.
The audiences are nested, not parallel. Content can only reach the algorithm through the bot and can only reach the person through the algorithm. You can have the most impeccable expertise and authority credentials in the world. If the bot can’t process your page cleanly, the algorithm will never see it.
This is the nested audience model: bot, then algorithm, then person. Every optimization strategy should start by identifying which audience it serves and whether the upstream audiences are already satisfied.
Discovery: The system learns you exist
Discovery is binary. Either the system has encountered your URL or it hasn’t. Fabrice Canel, principal program manager at Microsoft responsible for Bing’s crawling infrastructure, confirmed:
“You want to be in control of your SEO. You want to be in control of a crawler. And IndexNow, with sitemaps, enable this control.”
The entity home website, the canonical web property you control, is the primary discovery anchor. The system doesn’t just ask, “Does this URL exist?” It asks, “Does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.
The push layer — IndexNow, MCP, structured feeds — changes the economics of this gate entirely. A later piece in this series is dedicated to what changes when you stop waiting to be found.
Act I: The bot decides whether to fetch your content
Selection: The system decides whether your content is worth crawling
Not everything that’s discovered gets crawled. The system makes a triage decision based on countless signals, including entity authority, freshness, crawl budget, perceived value, and predicted cost.
Selection is where entity confidence first translates into a concrete pipeline advantage. The system already has an opinion about you before it crawls a single page. That opinion determines how many of your pages it bothers to look at.
Crawling: The bot arrives and fetches your content
Every technical SEO understands this gate. Server response time, robots.txt, redirect chains. Foundational, but not differentiating.
What most practitioners miss is that the bot doesn’t arrive in a vacuum. Canel confirmed that context from the referring page can be carried forward during crawling. With highly relevant links, the bot carries more context than it would from a link on an unrelated directory.
Rendering: The bot builds the page the algorithm will see
This is where everything changes and where most teams aren’t yet paying attention. The bot executes JavaScript if it chooses to, builds the Document Object Model (DOM), and produces the full rendered page.
But here’s a question you probably haven’t considered: how much of your published content does the bot actually see after this step? If bots don’t execute your code, your content is invisible. More subtly, if they can’t parse your DOM cleanly, that content loses significant value.
Google and Bing have extended a favor for years: they render JavaScript. Most AI agent bots don’t. If your content sits behind client-side rendering, a growing proportion of the systems that matter simply never see it.
Representatives from both Google and Bing have also discussed the efforts they make to interpret messy HTML. Here’s one way to look at it: search was built on favors, and those favors aren’t being offered by the new players in AI.
Importantly, content lost at rendering can’t be recovered at any downstream gate. Every annotation, grounding decision, and display outcome depends on what survives rendering. If rendering is your weakest gate, it’s your F on the report card. Everything downstream inherits that grade.
Act II: The algorithm decides whether your content is worth remembering
This is where most brands are losing out because most optimization advice doesn’t address the next two gates. And remember, if your content fails to pass any single gate, it’s no longer in the race.
Indexing: Where HTML stops being HTML
Rendering produces the full page as the bot sees it. Indexing then transforms that DOM into something the system can store. Two things happen here that the industry often misses:
The system strips the navigation, header, footer, and sidebar — elements that repeat across multiple pages on your site. These aren’t stored per page. The system’s primary goal is to identify the core content. This is why I’ve talked about the importance of semantic HTML5 for years. It matters at a mechanical level: <nav>, <header>, <footer>, <aside>, <main>, and <article> tell the system where to cut. Without semantic markup, it has to guess. Gary Illyes confirmed at BrightonSEO in 2017, possibly 2018, that this was one of the hardest problems they had at the time.
The system chunks and converts. The core content is broken into blocks or passages of text, images with associated text, video, and audio. Each chunk is transformed into a proprietary internal format. Illyes described the result as something like a folder with subfolders, each containing a typed chunk. The page becomes a hierarchical structure of typed content blocks.
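As a toy sketch of those two steps, not Google’s actual pipeline, here is how semantic tags could guide a simple parser to strip boilerplate and chunk what remains. The use of BeautifulSoup and the 120-word chunk size are my own assumptions for illustration.

```python
# Toy illustration of strip-then-chunk, not Google's pipeline: semantic tags tell
# the parser where to cut, and the surviving core content is split into passages.
from bs4 import BeautifulSoup  # assumes the beautifulsoup4 package is installed

def strip_and_chunk(html: str, max_words: int = 120) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    # Step 1: strip the repeated boilerplate that semantic tags identify.
    for tag in soup.select("nav, header, footer, aside"):
        tag.decompose()
    # Prefer the explicitly marked core content when it exists.
    core = soup.find("main") or soup.find("article") or soup
    # Step 2: chunk the remaining text into passages of roughly equal size.
    words = core.get_text(" ", strip=True).split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]
```

Without <main> or <article>, the sketch falls back to the whole document, which is exactly the guessing the author describes.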
I call this conversion fidelity: how much semantic information survives the strip, chunk, convert, and store sequence. Rendering fidelity (Gate 3) measures whether the bot could consume your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away.
Both fidelity losses are irreversible, but they fail differently. Rendering fidelity fails when JavaScript doesn’t execute or content is too difficult for the bot to parse. Conversion fidelity fails when the system can’t identify which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships between elements don’t survive the format conversion.
Something we often overlook is that even after a successful crawl, indexing isn’t guaranteed. Content that passes through crawl and render may still not be indexed.
That might sound bad enough, but here’s a distinction that should concern you: indexing and annotation are separate processes. Content may be indexed but poorly annotated — stored in the system but semantically misclassified. Non-indexed content is invisible. Misannotated content actively confuses the system about who you are, which can be worse.
Annotation: Where entity confidence is built or broken
This is the gate most of the industry has yet to address.
Think of annotations as sticky notes on the indexed “folders” created at the indexing gate. Indexing algorithms add multiple annotations to every piece of content in the index.
I identified 24 annotation dimensions I felt confident sharing with Canel. When I asked him, his response was, “Oh, there is definitely more.”
Those 24 dimensions were organized across five annotation layers:
Gatekeepers (scope classification).
Core identity (semantic extraction).
Selection filters (content categorization).
Confidence multipliers (reliability assessment).
Extraction quality (usability evaluation).
There are certainly more layers, and each layer likely includes more dimensions than I’ve mapped. Hundreds, probably thousands. This is an open model. The community is invited to map the dimensions I’ve missed.
Annotation is where the system decides the facts:
What your content is about.
Where it fits into the wider world.
How useful it is.
Which entity it belongs to.
What claims it makes.
How those claims relate to claims from other sources.
Credibility signals — notability, experience, expertise, authority, trust, transparency — are evaluated here. Topical authority is assessed here, too, along with much more.
Annotation operates on what survives rendering and conversion. If critical information was lost at either gate, the annotation system is working with degraded raw material. It annotates what the annotation engine received, not what you originally published.
Canel confirmed a principle I suggested that should reshape how we think about this gate: “The bot tags without judging. Filtering happens at query time.” Annotation quality determines your eligibility for every downstream triage.
I have a full piece coming on annotation alone. For now, annotation is the gate where most brands silently lose and the one most worth working on.
Recruitment: Where the algorithmic trinity decides whether to absorb you
This is the first explicitly competitive gate. After annotation, the pipeline feeds into three systems simultaneously.
Search engines recruit content for results pages (the document graph).
Knowledge graphs recruit structured facts for entity representation (the entity graph).
Large language models recruit patterns for training data and grounding retrieval (the concept graph).
Before recruitment, the system found, crawled, stored, and classified your content. At recruitment, it decides whether your content is worth keeping over alternatives that serve the same purpose.
Being recruited by all three elements of the algorithmic trinity gives you a disproportionate advantage at grounding because the grounding system can find you through multiple retrieval paths, and at display because there are multiple opportunities for visibility.
Recruitment is the structural advantage that separates brands with consistent AI visibility from brands that appear inconsistently.
Act III: The engine presents and the decision-maker commits
Grounding: Where AI checks its confidence in the content against real-time evidence
This is the gate that separates traditional search from AI recommendations.
Ihab Rizk, who works on Microsoft’s Clarity platform, described the grounding lifecycle this way:
The user asks a question.
The LLM checks its internal confidence. If it’s insufficient, it sends cascading queries, multiple angles of intent designed to triangulate the answer, which many people call fan-out queries.
Bots are dispatched to scrape selected pages in real time.
The answer is generated from a combination of training data and fresh retrieval.
But grounding isn’t just search results, as many people believe. The other two technologies in the algorithmic trinity play a role.
The knowledge graph is used to ground facts. AI Overviews explicitly showed information grounded in the knowledge graph. It’s reasonable to assume specialized small language models are used to ground user-facing large language models.
The takeaway is that your content’s performance from discovery through recruitment determines whether your pages are in the candidate pool when grounding begins. If your content isn’t indexed, isn’t well annotated, or isn’t associated with a high-confidence entity, it won’t be in the retrieval set for any part of the trinity. The engine will ground its answer on someone else’s content instead.
You can’t optimize for grounding if your content never reaches the grounding stage.
Display: The output of the pipeline
Display is where most AI tracking tools operate. They measure what AI says about you. But by the time you’re measuring display, the decisions were already made upstream, from discovery through grounding.
Display is where AI meets the user. It also covers the acquisition funnel, which is easy to understand and meaningful for marketers. This is where most businesses focus because it’s visible and sits just before the click. I’ll write a full article on that later in this series.
Won: The moment the decision-maker commits
Won is the terminal processing gate in the AI engine pipeline. Ten gates of processing, three acts of audience satisfaction, and it comes down to this: Did the system trust you enough to commit?
The accumulated confidence at this gate is called “won probability,” the system’s calculated likelihood that committing to you is the right decision. Three resolutions are possible, and they form a spectrum. To understand why that spectrum matters, you need to understand the 95/5 rule.
Professor John Dawes at the Ehrenberg-Bass Institute demonstrated that at any given moment, only about 5% of potential buyers are actively in-market. The other 95% aren’t ready to purchase. You sell to the 5%, but the real job of marketing is staying top of mind for the other 95% so that when they decide to move to purchase, on their schedule, not yours, you’re the brand they think of.
The three scenarios that follow show how AI takes over the job of being top of mind at the critical moment for the 95%. I call this top of algorithmic mind.
The imperfect click: The person browses a list of options, pogo-sticks between results, and decides. Traditional search and what Google called the zero moment of truth. The system doesn’t know who is ready. It shows everyone the same list and hopes. The 95/5 efficiency is low. You’re hitting and hoping, and so is the engine.
The perfect click: The AI recommends one solution and the person takes it. I call this the zero-sum moment in AI. This is where we are right now with assistive engines like ChatGPT, Perplexity, and AI Mode. The system has filtered for intent, context, and readiness. It presents one answer to a person moving from the 95% into the 5% with much higher precision.
The agential click: The agent commits, either after pausing for human approval (“Shall I book this?”) or autonomously. The agent caught the moment of readiness, did the work, and closed it. Maximum precision. This is the ultimate solution to the 95/5 problem: AI catches the exact moment and acts.
Search won’t disappear. Most people will always want to browse some of the time. Window shopping is fun, and emotionally charged decisions aren’t something people will always delegate.
The trajectory, however, moves from imperfect to perfect to agential. Brands need to optimize for all three outcomes on that spectrum, starting now. Optimizing for agents should already be part of your strategy, as should optimizing for assistive engines and search engines. AAO covers them all.
Search engines, AI assistive engines, and assistive agents are your untrained salesforce. Your job is to train them well enough that you’re top of algorithmic mind at the moment the 95% become the 5%, and the AI either recommends you (the perfect click) or acts on the user’s behalf (the agential click).
After conversion, the brand takes over. You should optimize the post-won feedback gate. The processing pipeline, the DSCRI-ARGDW spine, gets you to the decision. Served sits outside that spine as the gate that closes the loop, turning the line into a circle.
Every “won” that produces a positive outcome strengthens the next cycle’s cascading confidence. Every “won” that produces a negative outcome weakens it. Ten gates get you to the decision. The 11th, served, determines whether the decision repeats and your advantage compounds.
This is where the business lives. Acquisition without retention is a leak, both directly and indirectly through the AI engine pipeline feedback loop.
Brands that engineer their post-won experience to generate positive evidence, reviews, repeat engagement, low return rates, and completion signals, build a flywheel. Brands that neglect post-won burn confidence with every cycle.
Diagnosing failure in the pipeline
The three acts describe who you’re speaking to: the bot, the algorithm, and the engine (or, by extension, the person). The two phases describe what kind of test you’re taking.
Phase 1: Infrastructure, discovery through indexing
Absolute tests. You either pass or fail. A page that can’t be rendered doesn’t get partially indexed. Infrastructure gates are binary: pass or stall.
Phase 2: Competitive, annotation through won
Relative tests. Winning depends not just on how good your content is but on how good the competition is at the same gate.
The practical implication is infrastructure first, competitive second. If your content isn’t being found, rendered, or indexed correctly, fixing annotation quality is wasted effort. You’re decorating a room the building inspector hasn’t cleared.
In practice, brands tend to fail in three predictable ways.
Opportunity cost (Act I: Bot failures)
Your content isn’t in the system, so you have zero opportunity. Cheapest to fix, most expensive to ignore.
Competitive loss (Act II: Algorithm failures)
Your content is in the system, but competitors’ content is preferred. The brand believes it’s doing everything right while AI systems consistently choose a competitor at recruitment, grounding, and display.
Conversion leak (Act III: Engine failures)
Your content is presented, but the system hedges or fumbles the recommendation. In short, you lose the sale.
Every gate you pass still costs you signal
In 2019, I published How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Illyes about how Google calculates ranking bids by multiplying individual factor scores. A zero on any factor kills the entire bid.
Darwin’s natural selection works the same way: fitness is the product across all dimensions, and a single zero kills the organism. Brent D. Payne made this analogy: “Better to be a straight C student than three As and an F.”
As with Google’s bidding system, cascading confidence is multiplicative, not additive. Here’s what that means:
| Per-gate confidence | Surviving signal at the won gate |
| --- | --- |
| 90% | 34.9% |
| 80% | 10.7% |
| 70% | 2.8% |
| 60% | 0.6% |
| 50% | 0.1% |
Illustrative math, not a measurement. The principle is what matters: strengths don’t compensate for weaknesses in a multiplicative chain.
A single weak gate destroys everything. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. If that gate drops to 10%, it kills the surviving signal entirely. A near-zero anywhere in a multiplicative chain makes the whole chain near-zero.
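A minimal sketch of that illustrative arithmetic, using the same toy numbers as above rather than any measurement:

```python
# Illustrative only: the surviving signal is the product of per-gate confidences.
from math import prod

def surviving_signal(gate_confidences: list[float]) -> float:
    """Multiply confidence across the 10 gates of the pipeline."""
    return prod(gate_confidences)

uniform_90 = surviving_signal([0.90] * 10)               # ~0.349 (34.9%)
one_weak_gate = surviving_signal([0.90] * 9 + [0.50])    # ~0.194 (19.4%)
near_zero_gate = surviving_signal([0.90] * 9 + [0.10])   # ~0.039 -- the chain collapses
```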
This is competitive math. If your competitors are all at 50% per gate and you’re at 60%, you win: 0.6% surviving signal against their 0.1%. Not because you’re excellent, but because you’re less bad.
Most brands aren’t at 90%. The worse your gates are, the bigger the gap a small improvement opens. Here’s an example.
| Gate | D (Discovered) | S (Selected) | C (Crawled) | R (Rendered) | I (Indexed) | A (Annotated) | Re (Recruited) | G (Grounded) | Di (Displayed) | W (Won) | Surviving signal |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Your Brand | 75% | 80% | 70% | 85% | 75% | 5% | 80% | 70% | 75% | 80% | 0.4% |
| Competitor | 65% | 60% | 65% | 70% | 60% | 60% | 65% | 60% | 65% | 60% | 1.8% |
I chose annotated as the “F” grade in this example for demonstrative purposes.
Annotation is the phase-boundary gate. It’s the hinge of the whole pipeline. If the system doesn’t understand what your content is, nothing downstream matters.
Applying this Darwinian principle across a 10-gate pipeline, where confidence is measurable at every transition, is my diagnostic model. I recently filed a patent for the mechanical implementation.
Improving gates versus skipping them
There are two ways to increase your surviving signal through the pipeline, and they aren’t equal.
Improving your gates
Better rendering, cleaner markup, faster servers, and schema help the system classify your content more accurately. These are real gains, single-digit to low double-digit percentage improvements in surviving signal.
For many brands and SEOs, this is maintenance rather than transformation. It matters, and most brands aren’t doing it well, but it’s incremental.
Skipping gates entirely
Structured feeds, such as Google Merchant Center and the OpenAI Product Feed Specification, bypass discovery, selection, crawling, and rendering altogether, delivering your content to the competitive phase with minimal attenuation.
MCP connections skip even further, making data available from recruitment onward with triple-digit percentage advantages over the pull path.
If you’re only improving gates, you’re leaving an order of magnitude on the table.
The highest-value target is always the weakest gate
Improving your best gate from 95% to 98% is nearly invisible in the pipeline math. Improving your worst gate from 50% to 80% transforms your entire surviving signal. That’s the Darwinian principle at work: fitness is multiplicative, the weakest dimension determines the outcome, and strengths elsewhere can’t compensate.
Most teams are optimizing the wrong gate. Technical SEO, content marketing, and GEO each address different gates. Each is necessary, but none is sufficient because the pipeline requires all 10 to perform. Teams pouring budget into the two or three gates they understand are ignoring the ones that are actually killing their signal.
Then there’s the single-system mistake. At recruitment, the pipeline feeds into three graphs, the algorithmic trinity. Missing one graph means one entire retrieval path doesn’t include you.
You can be perfectly optimized for search engine recruitment and completely absent from the knowledge graph and the LLM training corpus. In a multiplicative system, that gap compounds with every cycle.
Most of the AI tracking industry is measuring outputs without diagnosing inputs, tracking what AI says about you at display when the decisions were already made upstream. That’s like checking your blood pressure without diagnosing the underlying condition.
The tools to do this properly are emerging. Authoritas, for example, can inspect the network requests behind ChatGPT to understand which content is actually formulating answers. But the real work is at the gates upstream of display, where your content either passed or stalled before the engine ever opened its mouth.
The correct audit order is pipeline order. Start at discovery and work forward.
If content isn’t being discovered, nothing downstream matters. If it’s discovered but not selected for crawling, rendering fixes are wasted effort. If it’s crawled but renders poorly, every annotation and grounding decision downstream inherits that degradation.
This is your new plan: Find the weakest gate. Fix it. Repeat.
The inconsistency Fishkin documented is a training deficit. The AI engine pipeline is trainable. The training compounds. The walled gardens increase their lock-in with every cycle.
The brand that trains its AI salesforce better than the competition doesn’t just win the next recommendation. It makes the next one easier to win, and the one after that, until the gap widens to the point where competitors can’t close it without starting from scratch.
Without entity understanding, nothing else in this pipeline works. The system needs to know who you are before it can evaluate what you publish. Get that right, build from the brand up through the funnel, and the compounding does the rest.
Next: The five infrastructure gates the industry compressed into ‘crawl and index’
The next piece opens the infrastructure gates in full: rendering fidelity, conversion fidelity, JavaScript as a favor, not a standard, structured data as the native language of the infrastructure phase, and the investment comparison that puts numbers on improving gates versus skipping them entirely.
The sequential audit shows where your content is dying before the algorithm ever sees it, and once you see the leaks, you can start plugging them in the order that moves your surviving signal the most.
To help you prevent a single policy issue from snowballing into a full account suspension, here’s how Google’s three-strike system works and what you should do at every stage to keep your ads running.
Case study: Appealing a Google Ads strike
Over the past 10+ years, I’ve helped thousands of advertisers identify and resolve Google’s policy concerns so that their businesses can resume running ads. One such situation involved helping a business that sells ceremonial swords for military dress uniforms.
Google’s Other Weapons policy prohibits advertising swords intended for combat. However, that same policy permits the advertising of non-sharpened, ceremonial swords, which is what this business sells. Even though this business was properly advertising its products within Google’s ad policy parameters, Google issued them a warning for violating the Other Weapons policy.
After the warning, we documented for Google that the business wasn’t violating Google’s policy. We also added specific disclaimers to the business’s sword product pages, noting that the swords were only ceremonial. Frustratingly, Google decided to issue a first strike to the business anyway.
We appealed the strike because the business wasn’t violating Google’s policy. But Google quickly denied that appeal. We tried appealing again, and Google denied the second appeal. The ad account remained on hold with no ads serving, and the business was losing revenue.
Ultimately, we had to “acknowledge” the strike to Google (I’ll explain what that means later) so that the ads would resume serving. We then worked with Google to craft more precise disclaimer language, stating that the swords for sale were ceremonial blades and not sharpened for use as weapons. This disclaimer was added to the business’s website footer so that both Google’s robots and human reviewers could see it on every single page (regardless of whether swords were for sale on a particular page).
Because of all these changes, Google’s concerns were satisfied and the business has never received any subsequent warnings or strikes. The end result was a success, even though technically there should never have been a warning or strike issued because an actual policy violation never occurred.
Key takeaway: Google will sometimes incorrectly issue warnings and strikes, and even reject appeals, and will often require excessive website disclaimers to convince them that all is well.
Understanding Google’s strike system can save your ad account from suspension. The search giant follows a process that begins with an initial warning and escalates through a “three strikes and you’re out” protocol.
The warning: Your ‘mulligan’ opportunity
Before issuing your ad account an initial strike, Google will first send you a warning notification.
This warning informs you that there’s a problem and allows you to address and resolve Google’s concern before your account is penalized with an official strike.
The penalty: None (yet). Your ads can continue to run.
What to do: Appeal any ad/asset disapprovals if you’re confident Google made a mistake, or identify the issue and replace the disapproved ads/assets with fully compliant versions.
Treat warnings seriously — ignoring them likely ensures your account will begin receiving strikes.
Strike 1: At least three days without ads
If Google decides that the same policy violation still exists after a warning was issued, your ad account will receive its first official strike.
The penalty: All ads will stop serving for three full days.
What to do: Acknowledge or appeal the strike.
Acknowledge the strike
This is your fastest path back to serving ads. But Google counts strikes as cumulative over a 90-day period.
If you acknowledge the strike rather than successfully appeal it, you’ve started the clock on the possibility of three strikes and a permanent suspension. Deciding which approach is best is a case-by-case determination.
To acknowledge the strike, you must remove all ads/assets that violate Google’s cited policy and confirm to Google that:
You understand the policy Google says you violated.
You have removed all violations.
You will comply with Google’s policies from now on.
After you acknowledge the strike and the three-day hold ends, your ads will resume serving.
Appeal the strike
Submit this appeal form and explain why your ads aren’t violating Google’s policy. Keep in mind:
Your account remains on hold during Google’s review.
Reviews typically take 5+ business days, so be patient.
If Google accepts your appeal, they will remove the hold and your ads will resume serving.
If Google rejects your appeal, your account will stay on hold and no ads will serve.
After a rejected appeal, you can attempt appealing again or acknowledge the strike.
Appealing is often justified, but it costs time and success isn’t guaranteed (even if you’re in the right, as the earlier case study shows).
Strike 2: At least seven days without ads
If Google decides there’s been another policy violation within 90 days of resolving your first strike, or if your original violation was unresolved during those 90 days, your account will receive a second strike.
The penalty: All ads will stop serving for seven full days.
What to do: Your options are the same as for Strike 1: acknowledge or appeal the strike.
Strike 3: Your account is suspended
If Google decides there’s been another policy violation within 90 days of resolving your second strike, or if your previous violation was unresolved during those 90 days, your account will receive a third strike.
The penalty: Your account is suspended, and you may not run any ads or create a new ad account.
What to do: Your only recourse now is to appeal the suspension.
Successfully appealing a suspension is definitely possible. But the process is often a nightmare, and the results are never guaranteed.
Important: Once suspended, you’re unable to make any changes to your ad account.
Google is sometimes inconsistent in following its own rules. Here are two examples I’ve seen first-hand.
Successfully appealing a strike doesn’t always reset the 90-day clock
I have a client who acknowledged a first strike on June 25. They received a second strike on July 26, which they successfully appealed. You would think that should reset the 90-day counter back to June 25.
However, Google gave them another second strike on October 16, far beyond 90 days from the date of the first strike, but within 90 days from the date of the “first” second strike, which they successfully appealed.
Google sometimes automatically returns your account to ‘warning’ status after a first strike expires
I have a client who received a warning on August 7, followed by a first strike on September 7. They acknowledged the first strike, and that strike expired on December 6, 90 days after it was issued.
However, the account immediately reentered “warning” status, with a new 90-day clock starting from when the first strike expired. There was no new email notification about this warning, and the warning didn’t appear on the Strike history tab.
How do I know if my account has received a warning or strike?
Look for a notification at the top of your Google Ads account.
Check the Policy manager page in your Google Ads account.
How do I see my history of strikes?
Go to the Strike history tab on the Policy manager page in your Google Ads account.
Can you get a strike without having ad disapprovals?
Yes. Google can issue strikes even if no ads are formally disapproved.
How are Google’s three- and seven-day ad holds calculated?
Google counts full days. For example, if you receive and acknowledge a first strike (a three-day hold) on January 1, your ads won’t be eligible to resume serving until January 4th.
Are account strikes worse than ad disapprovals?
Yes, account strikes are significantly worse than individual ad disapprovals. A strike prevents all your account’s ads from serving and can easily escalate to a full account suspension.
Which Google policies have the three-strikes rule?
Enabling dishonest behavior.
Unapproved substances.
Guns, gun parts, and related products.
Explosives.
Other weapons.
Tobacco.
Compensated sexual acts.
Mail-order brides.
Clickbait.
Misleading ad design.
Bail bond services.
Call directories, forwarding services, and recording services.
Credit repair services.
Binary options.
Personal loans.
Important: If you violate one of Google’s many other policies not listed above, you could find your ad account suspended immediately, with no warning or three-strikes system.
What you can do to prevent and navigate Google Ads strikes
Follow these best practices and tips to minimize the chances of receiving a Google Ads strike:
Read the Google Ads policies that apply to your industry so that you know what to do and what not to do.
Delete old ads and assets you no longer need, so they can’t trigger strikes unexpectedly.
Add clear and comprehensive disclaimers to your website to help Google understand you’re complying with any ad policies it might otherwise decide you’re violating.
Save copies of any appeals you submit because Google won’t show them to you after they’re submitted.
If you receive an account strike, closely monitor the 90-day clock so you know when you’re safely out of the previous “strike” window.
Google understandably cares deeply about its reputation and the safety of its users. That’s why Google’s policy team often strictly enforces its advertising policies, and why it’s sometimes over-aggressive in interpreting and applying its own policy language.
To keep our Google Ads accounts in good health and our ads running, the best thing we can do as advertisers is to deeply understand Google’s advertising policies and requirements.
Always be ready to jump through hoops to explain your unique situations, and over-comply with Google’s edicts whenever feasible.
Meta is updating its ad measurement framework, aiming to simplify attribution in what it calls a “social-first” advertising world.
What’s happening. Meta is narrowing its definition of click-through attribution for website and in-store conversions. Going forward, only link clicks — not likes, shares, saves or other interactions — will count toward click-through attribution. The change is designed to reduce discrepancies between Meta Ads Manager and third-party tools like Google Analytics.
Between the lines. Social media has overtaken search as the world’s largest ad channel, according to WARC, but many attribution systems were built for search-era behaviors. On social platforms, engagement extends beyond link clicks. Historically, Meta counted all click types toward click-through conversions, while many third-party tools only counted link clicks — creating reporting misalignment.
What’s changing. Conversions previously attributed to non-link interactions will now fall under a renamed “engage-through attribution” (formerly engaged-view attribution). Meta is also shortening the video engaged-view window from 10 seconds to 5 seconds, reflecting faster conversion behavior — particularly on Reels. The company says 46% of Reels purchase conversions happen within the first two seconds of attention.
Why we care. This update makes it easier to see which actions actually drive conversions, reducing confusion between Meta reporting and third-party analytics like Google Analytics. By separating link clicks from other social interactions, marketers get a clearer view of campaign performance, while the new engage-through attribution captures the value of likes, shares, and saves.
This gives advertisers more confidence in their data and helps them make smarter, more impactful decisions.
Third-party tie-ins. Meta is partnering with analytics providers like Northbeam and Triple Whale to incorporate both clicks and views into attribution models, aiming to give advertisers a more complete performance picture.
The rollout. Changes will begin later this month for campaigns optimizing toward website or in-store conversions. Billing will not change, but reporting inside Ads Manager may shift as attribution definitions update.
The bottom line. Meta is attempting to balance clearer, search-aligned click reporting with better visibility into uniquely social interactions — giving advertisers cleaner comparisons across platforms while still capturing the incremental impact of engagement-driven conversions.
For more than a decade, the dominant model was simple — identify a keyword, write an article, publish, promote, rank, capture traffic, convert a fraction of visitors, and repeat. But that model is breaking.
Content marketing is collapsing and rebuilding simultaneously. AI systems now answer informational queries directly inside search results. Large language models (LLMs) synthesize known information instantly. Information production is accelerating faster than distribution capacity. Public feeds are already saturated.
The cost of producing content has fallen to nearly zero, while the cost of being seen has never been higher. That changes everything.
Here’s a system for content marketing in a world where being found is increasingly unlikely.
The decline of informational SEO
Informational SEO used to be treated as a growth opportunity. Publish enough articles targeting informational queries, and traffic would compound.
But traffic was always a proxy metric. It felt productive because dashboards moved. In reality, most content was never read deeply, rarely linked to, and often indistinguishable from competitors. Page 1 often contained 10 variations of the same article, each rewritten with minor differences.
Now, AI answers absorb demand directly. Users receive summaries without clicking. The known information layer of the web is becoming commoditized.
If your strategy relies on answering known informational questions, you’re competing with a machine trained on the entire web. Informational SEO is over as a strategy.
Search content will still matter, but its role shifts. It becomes closer to customer service and sales enablement. It exists to support conversion once intent is clear. It doesn’t build fame.
Content marketing, properly understood, must do something else entirely.
Growth hackers came in and took over SEO. Driven by the desire to show impressive charts to the board, they turned SEO from a practical channel into a landfill of skyscrapered, informational content that did little for real growth.
So, we need a reset. There are only two reasons to create content:
You’re in the publishing business.
You’re marketing a business.
If you’re in the second category, your content is advertising. That doesn’t mean banner ads. It means its job is to build mental availability. As advertising science has repeatedly shown, brands grow by increasing the likelihood of being thought of in buying situations and making themselves easy to purchase from.
The advertising analytics company System1 describes the three drivers of profit growth from advertising as fame, feeling, and fluency.
Fame means broad awareness.
Feeling means positive emotional association.
Fluency means easy recognition and processing.
If your content doesn’t contribute to those outcomes, it’s activity, not growth.
SEO teams optimized for clicks, but clicks aren’t the objective. Being remembered is. In an AI era, this distinction becomes decisive.
Historically, content marketing relied heavily on pull: Someone searched, you ranked, and you pulled them from Google to your website. That channel is narrowing.
As AI summaries answer queries directly, the ability to pull strangers through informational search decreases. Pull remains critical for transactional queries and high-intent keywords, but the gravitational pull of informational content is weakening.
Push becomes more important. You have to push your content to people, distributing it intentionally through media, partnerships, events, advertising, communities, and networks rather than waiting to be discovered. It must be placed directly in front of people.
The paradox is this: We once believed gatekeeping had disappeared. Social media and Google created the illusion of fair and direct access. Now, gatekeepers are back — algorithms, publishers, influencers, media outlets, and even AI systems themselves.
When channels are flooded, selection mechanisms tighten.
Kevin Kelly wrote in his book “The Inevitable” that work has no value unless it’s seen. An unfound masterpiece, after all, is worthless.
As tools improve and creation becomes frictionless, the number of works competing for attention expands exponentially, with each new work adding value while increasing noise.
Kelly’s point was that in a world of infinite choice, filtering becomes the dominant force. Recommendation systems, algorithms, media editors, and social networks become the arbiters of visibility. When there are millions of books, songs, apps, videos, and articles, abundance concentrates attention, creating a structural shift.
When production is scarce, quality alone can surface work. When production is abundant, discoverability depends on networks, signals, and amplification. The value is migrating from creation to curation and distribution. In practical terms, every additional AI-generated article makes it harder for any single article to be noticed.
The supply curve has shifted outward dramatically. Demand hasn’t. Human attention remains finite. As supply approaches infinity and attention remains fixed, the probability of being found declines.
Being found is now an economic problem of scarcity rather than a technical exercise in optimization. When production is abundant, attention is scarce. When attention is scarce, distinctiveness and distribution become currency.
This is where Rory Sutherland’s concept of powerful messaging becomes essential for us. In his book, “Alchemy,” he argues that rational behavior conveys limited meaning.
When everything is optimized, efficient, and frictionless, nothing signals importance. Powerful messages must contain elements of absurdity, illogicality, costliness, inefficiency, scarcity, difficulty, or extravagance — qualities that serve as signals. They tell the market that something matters.
Consider a wedding invitation. The rational option is an email — instant, free, and efficient. Yet most couples choose heavy paper, embossed type, textured envelopes, even wax seals. The cost and inefficiency are the point. They signal commitment and create emotional weight. The medium amplifies the meaning.
The same logic applies to marketing. When everyone can publish a competent article in seconds, competence carries no signal. A 1,000-word blog post answering a known question communicates efficiency, not importance. Scarcity and effort change perception.
MrBeast built early fame by counting to extreme numbers on camera. The act was irrational. It was inefficient and difficult. That difficulty was the hook. It signaled commitment and created memorability. The content spread not because it was informational, but because it was remarkable.
In an AI-saturated environment, rational content becomes invisible. If 10,000 companies publish summaries of the same topic, none stand out.
But if one brand commissions original research, prints a limited run of a physical report, hosts a live event around the findings, and strategically distributes it, the signal is different. The effort itself becomes part of the message.
Scarcity also changes economics. Sherwin Rosen’s work on the economics of superstars demonstrated that small differences in recognition can lead to disproportionate returns because markets reward the most recognized participants far more than everyone else.
Moving from being chosen 1% of the time to 2% can double outcomes because fame compounds. In crowded markets, the most recognized option captures an outsized share and reinforces its own dominance.
This is why being found is fundamentally different now. In the past, discoverability was a function of production and optimization. Today, it hinges on distinctiveness and signal strength. When production approaches zero cost, attention becomes the only scarce resource, which means you should be aiming for fame rather than optimization.
Paul Feldwick, in “Why Does The Pedlar Sing?” argues that fame is built through four components:
The offer must be interesting and appealing.
It must reach large audiences.
It must be distinctive and memorable.
The public and media must engage voluntarily.
These four elements provide a practical framework for content marketing in an AI era. Here’s how that works in practice.
Create something interesting
You must create new information, not restate existing information. That could mean:
Proprietary data studies.
Original research.
Indexes updated annually.
Experiments conducted publicly.
Tools that solve real problems.
Physical artifacts with limited distribution.
Events that convene a specific community.
Consider the origins of the Michelin Guide. A tire company created a restaurant guide that became a cultural authority.
Awards ceremonies, industry rankings, annual reports, and indexes all function as content marketing. These are fame engines.
The key is the perception of effort and distinctiveness. A limited-edition printed book sent to 100 target prospects can carry more weight than 1,000 blog posts. Costliness signals meaning.
Reach mass or concentrated influence
Interest without distribution is invisible. Distribution options include:
Media coverage.
Partnerships.
Paid advertising.
Events.
Webinars.
Physical mail.
Community amplification.
If you lack a budget, focus on the smallest viable market. Concentrate on a defined audience and saturate it.
Many iconic technology companies began by dominating narrow communities before expanding outward. Public relations and content marketing converge here.
Earned media multiplies reach.
Paid media accelerates it.
Community activation sustains it.
If your content is never placed intentionally in front of people, it can’t build fame.
Be distinctive and memorable
SEO content historically failed on distinctiveness. Ten articles answering the same question looked interchangeable. But in an AI era, repetition disappears into the model.
Distinctiveness can come from:
A recurring annual report with a recognizable format.
A proprietary scoring system.
A unique visual identity.
A specific tone.
A tool that becomes habitual.
An award or certification owned by your brand.
Memorability drives mental availability. Fluency increases recall. When someone recognizes your brand instantly, you reduce cognitive effort. Repetition of distinctive assets compounds over time.
You have to continually go to market with distinctive, memorable content. If you don’t do this, you will fade in memory and distinctiveness.
Enable voluntary engagement
You can’t force people to share, but you can design for shareability. Content spreads when it carries social currency, enhances the sharer’s identity, rewards participation, and makes access feel exclusive.
Referral loops, limited access programs, community recognition, and public acknowledgment can all increase spread. The key is that the message must move freely between humans. It must be portable, discussable, and referencable.
Memetics matters. If it can’t be passed along, it can’t compound.
If content must be designed for distinctiveness, distribution, and voluntary engagement, search leaders need a different playbook. Here’s a five-step framework.
Step 1: Separate infrastructure from fame
Maintain search infrastructure for high-intent queries, optimize product pages, support conversion, and provide clear answers where necessary. But stop confusing informational volume with brand growth.
Audit your content portfolio. Identify what builds mental availability and what merely fills space, and cut the filler to reduce waste.
Step 2: Invest in originality
Allocate budget to proprietary research, data collection, and creative initiatives. If everyone can generate competent summaries, originality becomes leverage.
This may require shifting the budget from content volume to creative depth.
Step 3: Design for distribution first
Before creating content, define distribution.
Who needs to see this?
How will it reach them?
Which gatekeepers matter?
What media outlets might care?
Reverse engineer reach.
Step 4: Build distinctive assets
Create repeatable formats that become associated with your brand.
An annual index.
A recurring event.
A recognizable report structure.
A named methodology.
Consistency builds fluency.
Step 5: Measure fame
Track:
Brand search volume.
Direct traffic growth.
Share of voice in media.
Unaided awareness, where possible.
Traffic alone is insufficient.
If content doesn’t increase the probability that someone thinks of you in a buying moment, it’s not performing its primary job.
We’re entering a period where automation handles the average, freeing humans to focus on the exceptional. The future of content marketing isn’t high-volume AI-generated articles. It’s the creation of new information, new experiences, new events, and new signals that machines can’t fabricate credibly.
It requires a partnership with PR, a strategic use of physical and digital channels, disciplined distribution, and a commitment to fame. Budgets will need to shift from volume production to creative impact.
In a world where information is infinite and attention is finite, the brands that win will be those that understand that being found is more valuable than being published. Content marketing in the AI era isn’t about producing more. It’s about becoming known.
What do conversion rate optimization (CRO) and findability look like for an AI agent versus a human, and how different do your strategies really need to be?
More and more marketers are embracing the agentic web, and discovery increasingly happens through AI-powered experiences. That raises a fair question: what do CRO and findability look like for an AI agent compared with a human?
Several considerations matter, but the core takeaway is clear: serving people supports AI findability. AI systems are designed to surface useful, grounded information for people. Technical mechanics still matter, but you don’t need entirely different strategies to be findable or to improve CRO for AI versus humans.
What CRO looks like beyond the website
If a consumer does business directly through an agent or an AI assistant, your business needs to make the right information available in a way that can be understood and used. Your products or services need to be represented through clean, well-structured data, with information formatted in ways that downstream systems can process reliably.
As more people explore doing business with AI assistants, part of the work involves making sure your products and services can connect cleanly. Standards, such as Model Context Protocol (MCP), can help by enabling agents to interact with shared sources of information.
In many cases, a human may still decide to engage directly on a brand’s site. In that context, content and formatting choices matter. Whether you focus on paid media or organic, ensuring your humans can take desired actions — and will want to — is important.
Old‑school SEO encouraged the idea that more keywords and larger walls of text would perform better. That approach no longer holds.
Wayfair does a great job using accessible fonts, a call to action when the user shifts to a transactional mindset, and easy-to-understand language.
Both humans and AI systems tend to work better with clearly structured, modular content. Large blocks of uninterrupted text can be harder for people to scan and understand. Clear sections, spacing, layout, and visual hierarchy help users quickly understand what they can do and how to accomplish the goal that brought them to the page.
There’s no fixed minimum or maximum amount of text that works best. You should use the amount of content needed to clearly explain what you offer, why it’s useful, and what sets it apart.
A technical topic will need more text, broken into smaller paragraphs. There are great calls to action as well.
Visual components can be helpful when paired with useful alt text. Lead gen forms should be easy for humans to complete and regularly audited for spam or friction. Content that’s hard for people to use is also harder for automated systems to interpret as helpful or relevant.
Optimization 2: How are you communicating with your humans?
One of the best ways to communicate clearly to systems is to communicate clearly to people. Lean into what makes you an expert, but avoid unnecessary jargon or overly complex language. Descriptions should stay specific, accurate, and on-brand.
A simple gut check: if a 10-year-old couldn’t broadly understand what you do, why it matters, and how to engage with you, you’re probably making things harder than necessary. Even though AI systems are sophisticated, clarity still matters because the goal is ultimately to support a human outcome.
If you’re unsure, try putting your positioning copy into an AI assistant and asking it to critique its clarity. Ask for simplification and clearer explanations, not for new claims or embellishment.
Visual components matter here as well. Comparison tables can help when they genuinely support understanding, but they can hurt when they’re used as a gimmick rather than a guide. Accessibility principles matter, too. Color contrast, readable font sizes, and restrained font choices reduce the risk that someone can’t process your site.
IAMS has a thoughtful quiz to find the right dog breed and offers additional close matches. High-contrast color, easy-to-understand buttons, and high-quality photos help.
Images should be easy to understand and clearly connected to the surrounding text. Alt text helps people using assistive technologies and reinforces the relationship between visuals and written content.
A user comes to your site to do something. They might want to buy, request a quote, or speak with your team. That action should be clear.
When the intended action is unclear, it becomes harder for both people and automated systems to understand what your site enables.
Tarte Cosmetics does a great job of leaning into CRO principles, including inclusivity, accessibility, and social proof.
Shopping experiences tend to surface in conversations with shopping intent because assistants are trying to complete the task they were given. If it’s unclear how to add an item to a cart or complete a purchase, you make it harder for a human to do business with you. You also make it harder for systems to understand that you’re a transactional site rather than a catalog of items without a clear path forward.
Lead generation requires similar clarity. If the goal is to talk to your team, include a phone number that can be clicked to call. You might also include a form that submits directly into your lead system or a flow that opens an email client. Forcing users through multiple form pages often frustrates people and adds unnecessary complexity to the experience.
I cover technical considerations last for a reason. The most important work you can do is support the humans you serve. Technical improvements help, but they rarely succeed on their own.
Tips from the Microsoft AI guidebook. (Disclosure: I’m the Ads Liaison at Microsoft Advertising.)
Excessive imagery, low contrast between text and background, or unstable layouts can create challenges.
Make sure your site renders consistently and meaningfully. Large layout shifts after load, measured in cumulative layout shift (CLS), can frustrate users. Pages overloaded with ads or pop-ups can distract from the reason someone arrived in the first place and may introduce trust concerns.
Security matters as well. Malware warnings, broken rendering, or incomplete page loads can raise red flags for both users and automated systems.
Tools like IndexNow can help notify search systems of content changes more quickly. Microsoft Clarity is a free tool that shows how users behave on your site, surfacing friction you might otherwise miss. Clarity also includes Brand Agents, which help your humans have more meaningful chatbot experiences.
One useful check is to review how your site appears when used as input for ad platforms or auto-generated creative tools, such as Performance Max campaigns or audience ads.
These can provide a helpful lens into how platforms interpret your content. When the resulting positioning and creative align with what you intend, you’re usually doing a good job serving both crawlers and people. When they don’t, it’s often a signal to revisit clarity, structure, or user flow.
Google is rolling out Video Reach Campaign (VRC) Non-Skip ads, expanding how brands reach connected TV audiences on YouTube.
What’s happening. VRC Non-Skips are now live globally in Google Ads and Display & Video 360. Built for the living room experience, they run as non-skippable placements optimized for connected TV (CTV) screens.
Why we care. YouTube has been the No. 1 streaming platform in the U.S. for three straight years, making the TV screen a critical battleground for your brand budget. With guaranteed, non-skippable delivery, you can ensure your full message reaches viewers in premium, lean-back environments.
AI in the mix. Google AI dynamically optimizes across 6-second bumper ads, 15-second standard spots, and 30-second CTV-only non-skippable formats. Instead of manually splitting your budget by format, you can rely on AI to allocate impressions for maximum reach and efficiency.
Bottom line. Advertisers now have a simpler way to secure guaranteed, full-message delivery on the biggest screen in the house — using AI to maximize reach and efficiency across non-skippable formats without manually managing the mix.
Google is expanding its recurring billing policy to allow certified U.S. online pharmacies to promote prescription drugs with subscriptions and bundled services.
What’s happening. Certified merchants can now offer:
Prescription drug subscriptions — recurring billing for prescription medications.
Prescription drug bundles — combining drugs with services like coaching or treatment programs, as long as the drug is the primary product.
Prescription drug consultation services — recurring consults to determine prescription eligibility, either standalone or bundled with medications.
Requirements for eligibility. Merchants must maintain certified status, submit subscription costs in Merchant Center using the [subscription_cost] attribute, include clear terms and transparent fees on landing pages, and comply with all existing Healthcare & Medicine and recurring billing policies. Accounts previously disapproved can request a review once requirements are met.
Why we care. The update opens new revenue opportunities for online pharmacies, letting them leverage recurring models and bundled services while staying compliant with Google policies.
The bottom line. Certified U.S. online pharmacies can now run recurring prescription and bundled offers, giving them more flexibility to reach patients and scale subscription-based services.
Google updated both its image SEO best practices and Google Discover help documents to clarify that Google uses both schema.org markup and the og:image meta tag as sources when determining image thumbnails in Google Search and Discover.
Image SEO best practices. Google added a new section to the image SEO best practices help document named "Specify a preferred image with metadata." In that section, Google wrote:
“Google’s selection of an image preview is completely automated and takes into account a number of different sources to select which image on a given page is shown on Google (for example, a text result image or the preview image in Discover).”
Here is how you influence the thumbnails Google chooses:
Specify the schema.org primaryImageOfPage property with a URL or ImageObject.
Or specify an image URL or ImageObject property and attach it to the main entity (using the schema.org mainEntity or mainEntityOfPage properties). See the markup sketch below.
Here are the overall best practices when choosing these methods:
Choose an image that’s relevant and representative of the page.
Avoid using a generic image (for example, your site logo) or an image with text in the schema.org markup or og:image meta tag.
Avoid using an image with an extreme aspect ratio (such as images that are too narrow or overly wide).
Use a high resolution, if possible.
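To make that concrete, here is a minimal markup sketch of both methods. The URL is a placeholder, and JSON-LD is just one way to express the schema.org property; adapt the values to your own pages.

```html
<!-- Open Graph: preferred preview image (placeholder URL) -->
<meta property="og:image" content="https://www.example.com/images/guide-hero-1600x900.jpg">

<!-- schema.org: preferred image declared via primaryImageOfPage on the WebPage -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "primaryImageOfPage": {
    "@type": "ImageObject",
    "url": "https://www.example.com/images/guide-hero-1600x900.jpg"
  }
}
</script>
```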
Google Discover image selection. In the Discover documentation Google added a section that reads:
“Include compelling, high-quality images in your content that are relevant, especially large images that are more likely to generate visits from Discover. We recommend using images that meet the following specifications: At least 1200 px wide, High resolution (at least 300K) and 16×9 aspect ratio”
“Google tries to automatically crop the image for use in Discover. If you choose to crop your images yourself, be sure your images are well-cropped and positioned for landscape usage, and avoid automatically applying an aspect ratio. For example, if you crop a vertical image into 16×9 aspect ratio, be sure the important details are included in the cropped version that you specify in the og:image meta tag).”
“Use either schema.org markup or the og:image meta tag to specify a large image that’s relevant and representative of the web page, as this can influence which image is chosen as the thumbnail in Discover. Learn more about how to specify your preferred image. Avoid using generic images (for example, your site logo) in the schema.org markup or og:image meta tag. Avoid using images with text in the schema.org markup or og:image meta tag.”
Why we care. Images can have a big impact on click-through rates from both Google Search and Google Discover. Here, Google is telling us ways we can encourage Google to select a specific image for that thumbnail. So review these help documents and see if any of this can help you with the images Google selects in Search and Discover.
If you’re not actively managing your branded search campaigns, you’re leaving money on the table and your reputation in the hands of competitors, review aggregators, and affiliate marketers.
Brand protection through PPC isn’t just about bidding on your own name. It’s a strategy that spans defensive bidding, query monitoring, ad copy testing, and reputation management across the entire customer research journey.
Why brand search deserves more than basic defense
Most PPC managers treat brand campaigns as an afterthought. Set up a campaign, bid on the exact brand name, maybe add some close variants, and call it done.
But the reality is far more complex, especially when we’re talking about bigger, well-known brands. Your brand exists across dozens of query contexts, each representing a different stage of the customer journey and requiring a different strategic approach.
Consider what happens when someone searches for your brand. They’re not just typing your company name, they’re asking questions, seeking validation, comparing alternatives, and researching specific features.
If you’re only covering exact-match brand terms, you’re missing the majority of brand-related searches and leaving those high-intent users exposed to competitor messaging.
Third-party sites like review aggregators and affiliate comparison websites actively bid on your brand terms to capture traffic and redirect it to their comparison pages, where your competitors pay for prominence.
The cost? Your brand equity, customer trust, and ultimately, conversion rates.
4 categories of branded searches you need to cover
Based on user intent and competitive vulnerability, branded searches fall into four strategic categories. Each requires different bid strategies, ad copy approaches, and landing page experiences.
Let’s break down each category and the specific PPC tactics that can work.
Brand trust and reputation queries
“Is [Brand] good?”
“[Brand] reviews.”
“Is [Brand] legit?”
“Is [Brand] worth it?”
These searchers are in the validation phase. They’ve heard of your brand but want social proof before committing.
The competitive threat here comes from review aggregators and affiliate sites that will happily show your reviews alongside competitor CTAs.
PPC strategy
Bid aggressively — these are high-intent users who are close to converting.
Use review extensions and star ratings in your ads.
Highlight trust signals in ad copy (years in business, customer count, awards).
Send users to dedicated testimonial or case study landing pages, not your homepage.
Test callout extensions with specific proof points.
Product features queries
“What is [Brand] known for?”
“Pros and cons of [Brand].”
“Does [Brand] offer [feature]?”
Users searching for feature-specific information are evaluating whether your solution meets their requirements. Competitors often bid on these queries with ads suggesting they offer superior features.
PPC strategy
Create feature-specific ad groups with tailored ad copy.
Use sitelink extensions to direct users to specific feature pages.
Address the specific feature in headline 1; don't waste space on your brand name.
Include feature demos or video on the landing page.
Test whether these queries warrant higher bids than core brand terms.
Comparison queries
“Alternatives to [Brand].”
“How does [Brand] compare?”
“Is [Brand] better than [Competitor]?”
“Is [Brand] right for [use case]?”
This is the most competitive category. Users are actively comparing you to alternatives, and both direct competitors and third-party comparison sites are bidding heavily. This is where you’re most vulnerable to losing customers who were already considering you.
PPC strategy
Bid at or above top-of-page estimates to maintain Position 1.
Create dedicated comparison landing pages for each major competitor.
Include pricing transparency if it’s a competitive advantage.
Monitor auction insights obsessively to identify new competitive threats.
Consider category-level comparison ads for “best [category] tools/products” searches.
Niche questions
“Is [Brand] expensive?”
“Does [Brand] offer discounts?”
“Is [Brand] secure?”
These queries reveal specific concerns or evaluation criteria. They’re often low-volume but extremely high-intent because they represent genuine decision-making criteria.
PPC strategy
Develop FAQ landing pages that address multiple related concerns.
Test lower bids — these queries often have less competition.
Use search query reports to identify emerging concerns and address them proactively.
The traditional single-brand campaign approach doesn’t give you enough control or insight at scale. Instead, structure your brand defense across four specialized campaigns, each targeting different intent signals and requiring distinct bid strategies.
Core brand defense
This covers exact-match brand terms and common misspellings with aggressive bidding to maintain 95%+ impression share and top positions. Never let this campaign be budget-limited.
Use multiple RSAs to test different value propositions. Monitor lost impression share due to rank as your primary competitive threat indicator.
Brand + category
Capture phrase-match queries like “[Brand] CRM” or “[Brand] for [use case],” where users are researching you within a specific product context.
Bid slightly lower than core brand terms, but ensure ad copy acknowledges the category and emphasizes your category leadership. Test whether category-specific landing pages outperform your homepage for these queries.
Brand reputation and reviews
These campaigns intercept validation-phase users searching "[Brand] reviews," "[Brand] ratings," or "is [Brand] good" before they click through to third-party aggregators. Bid aggressively here: these comparison-shopping clicks are worth more than core brand searches.
Use review extensions prominently, include specific social proof metrics in ad copy (4.8 stars, 10,000+ reviews), and send traffic to dedicated testimonial pages rather than your homepage. Test video testimonials on landing pages.
Competitive comparison defense
Control the narrative for queries like “[Brand] vs [Competitor],” “[Brand] alternative,” or “better than [Brand].” These are users you’re at risk of losing, so pay up to your maximum acceptable CPA.
Create unique landing pages for each major competitor with honest comparisons that emphasize your advantages, include side-by-side feature tables, and offer special conversion incentives like extended trials or migration assistance.
Defensive tactics against third-party aggregators
Sites like G2, Capterra, and other affiliate comparison sites actively bid on your brand terms without violating trademark policy because they legitimately have content about your brand.
But they’re siphoning off your traffic and often presenting biased or incomplete information. Your defense requires three coordinated approaches.
Bid aggressively on review keywords
Review aggregators bid heavily on “[Brand] reviews” and “[Brand] ratings” because these are their money keywords, so you need to bid even higher.
Run the math: a review aggregator may pay only $3 for that click, then send the user to a page where your competitor pays $50 for prominence. Next to that, $10 per click on your own review keywords is a deal.
Calculate the lifetime value of a customer versus the cost of letting them click to a third-party site where competitors can advertise. Also, keep in mind it’s cheaper for you to bid on your own brand than for competitors to outbid you.
Claim and optimize your profiles on major review platforms you want to work with
Even if you can’t prevent them from bidding on your brand, ensure that when users click through, they see optimized content, strong ratings, and an active presence with responses to reviews.
Many review platforms offer advertising options — test running ads on your own profile pages to capture users who arrive via organic search or competitor ads.
Build dedicated testimonial and customer story pages
Make yours more compelling than third-party review aggregators. Include video testimonials, detailed case studies with metrics, filterable reviews by industry or use case, and verified customer badges.
Then use your PPC ads to drive users to these owned properties instead of letting them discover review aggregators organically.
Your brand campaign ad copy needs to do more than confirm your brand name. It needs to preempt objections, differentiate from competitors, and provide compelling reasons to click your ad instead of a competitor’s or third-party site. Three frameworks deliver results.
The preemptive strike
Identify the top 3-5 objections that come up in your sales process and address them directly in your ad copy before users encounter them on competitor or review sites.
If implementation time is a concern, use “Live in 5 days, not 5 months.”
If pricing is opaque, try “Transparent pricing, no hidden fees.”
If enterprise readiness is questioned, lead with “Trusted by 500+ enterprise customers.”
If ease of use is a concern, emphasize “No training required, start today.”
The competitive differentiator
Don’t just state features; state features your competitors don’t have or can’t match. This is especially critical for comparison queries where you know competitors are showing ads. Examples include:
“Only platform with native [unique integration].”
“Industry’s fastest performance, verified by [third party].”
If you can’t identify any unique features or USPs, that’s a signal to improve your product positioning or capabilities. Without clear differentiation, PPC alone won’t drive sustainable conversions.
Social proof stacking
Combine multiple types of social proof to build credibility quickly. Don’t just pick one element; stack them. Try:
“4.8 stars from 10,000+ reviews. G2 leader 5 years running.”
“Join 50,000+ companies. Featured in Forbes and TechCrunch.”
“Winner: Best [category] 2025. 98% customer satisfaction.”
Sending all brand traffic to your homepage is a missed opportunity. Different branded queries represent different user intents and concerns, and your landing pages should address those specific intents.
Feature-specific pages
When users search “[Brand] + [feature],” send them to dedicated pages that explain the feature in detail, show it in action, and provide clear next steps.
Include a hero section explaining the feature in one sentence, a video demo or animated screenshot, technical specifications for enterprise buyers, integration details if relevant, and customer examples using this specific feature.
Comparison pages
Create dedicated comparison landing pages for each major competitor. Be honest about differences while emphasizing your advantages. Include side-by-side feature tables, pricing comparisons if advantageous, and customer testimonials from switchers.
Acknowledge competitor strengths without being dismissive, highlight 3-5 key differentiators where you excel, and offer migration assistance or switch incentives. Make your CTA clear and prominent, offering a trial or demo.
Trust and validation pages
For review and reputation queries, create dedicated pages that aggregate social proof rather than linking to your G2 profile or hoping users browse scattered testimonials.
Display aggregate ratings prominently (average of G2, Capterra, etc.), place video testimonials above the fold, show recent reviews with verified badges, make reviews filterable by industry, company size, and use case, include case studies with concrete metrics, and highlight third-party awards and recognition.
Monitoring and optimization: The ongoing battle
Brand protection isn’t a set-it-and-forget-it strategy. The competitive landscape constantly evolves, new competitors emerge, third-party sites adjust their strategies, and user search behavior shifts. You need systematic monitoring and rapid response capabilities across three time horizons.
Weekly monitoring
Review:
Search term reports to identify new query patterns.
Auction insights for increased competitor presence.
Impression share metrics to diagnose declining performance.
Lost impression share breakdowns by budget and rank.
Manual searches of your top 10 brand queries to see what ads are showing.
Quality score checks for brand keywords to diagnose landing page or ad relevance issues.
Monthly deep dives
Analyze conversion paths to understand how brand search fits into the broader customer journey.
Review assisted conversions since brand campaigns often contribute to non-brand conversions.
Audit landing pages for relevance and conversion performance.
Gather competitive intelligence on what landing pages competitors use for brand conquesting.
Test new ad copy variations focused on emerging objections or competitive threats.
Analyze search impression share by device and location to identify gaps.
Quarterly strategic reviews
Audit your complete branded query coverage to identify missing categories or query types.
Assess whether your coverage across the four query categories remains comprehensive.
Conduct competitive conquest analysis to determine which competitors most aggressively target your brand.
Evaluate ROI of different brand campaign types to optimize budget allocation.
Review third-party aggregator presence for new sites bidding on your brand.
Advanced tactics for sophisticated brand protection
Dynamic keyword insertion
For validation queries like “is [Brand] good” or “does [Brand] work,” use dynamic keyword insertion to echo the user’s specific question in your ad copy, creating higher relevance and click-through rates. Try headlines like “Yes, {KeyWord:[Brand]} Is Excellent” or “Absolutely, {KeyWord:[Brand]} Works.”
Geo-modified campaigns
If you have location-specific offerings or competitors vary by geography, create geo-modified brand campaigns. Users searching “[Brand] New York” or “[Brand] enterprise” may have different needs than general brand searchers.
Audience layering
Apply audience segments to brand campaigns to adjust bids based on user quality. Users who’ve visited your pricing page before should get higher bids on brand searches than first-time visitors. Similarly, prioritize users who match your ideal customer profile demographics.
Trademark enforcement
While Google generally allows competitors to bid on your brand terms, using your trademarked brand name in their ad copy is often prohibited.
Monitor competitor ads and file trademark complaints when they use your brand name in headlines or descriptions. This is particularly effective against smaller competitors and affiliates who may not realize they’re violating policy.
Problem/solution queries
Capture queries where users are researching whether your solution addresses a specific problem. These are often high-intent and represent clear use case alignment.
Target queries like:
“[Brand] for [problem].”
“How to [solve problem] with [Brand].”
“[Brand] [use case] solution.”
“Can [Brand] help with [challenge].”
Budget allocation and ROI considerations
How much should you invest in brand protection versus acquisition campaigns? The answer depends on three factors:
Competitive pressure.
Brand strength.
Customer lifetime value.
If you operate in a highly competitive category where multiple well-funded competitors actively bid on your brand terms, invest more in brand protection. Run auction insights weekly to monthly to quantify competitive presence.
If competitors show in 40% or more of your brand auctions, this is a high-threat environment requiring aggressive defense. Stronger brands with dominant organic presence can afford to spend less on core brand defense because their organic listings provide natural protection. This doesn’t apply to reputation and comparison queries where third-party sites rank organically.
High LTV businesses should invest more aggressively in brand protection because the cost of losing a customer to a competitor or having them influenced by negative review sites is substantial. If your average customer is worth $50,000 over their lifetime, paying $50 per click to defend against comparison queries is economically rational.
For most B2B SaaS and high-consideration products, allocate approximately 15-25% of total paid search budget to comprehensive brand protection. Within that allocation, dedicate 40% to core brand defense (exact match), 25% to competitive comparison defense, 20% to reputation and review queries, and 15% to feature and niche question queries.
Brand protection through PPC isn’t just defensive marketing. It’s a competitive moat. When you control the narrative across branded search contexts, you ensure high-intent users see accurate information instead of competitor ads or third-party pages monetizing your brand equity.
The brands that win treat this as strategy, not maintenance. They segment branded queries by intent, build landing pages to match, monitor threats continuously, and defend high-value search real estate aggressively.
Start with an audit using the four-category framework. Close coverage gaps, align campaigns and landing pages to intent, and commit to weekly monitoring, monthly optimization, and quarterly strategic reviews.
If you don’t own your branded searches, someone else will.
If your brand’s content arm has been active for a few years, I’m guessing you have plenty of material that can be revised to help you show up more prominently in AI search answers — we’ll call this AEO throughout the article.
I’m getting bombarded with brand marketers’ questions about how to get AEO traction these days. “Revise your old content” is a favorite answer that often produces an “aha” moment for the other party, possibly because the nature of AEO is so forward-looking.
That answer sparks a few important follow-up questions I’ll tackle below.
How do you reformat content for better AEO performance?
I like to lean on three principles when I tackle content reformatting. Optimizing for:
Topical breadth and depth.
Chunk-level retrieval.
Answer synthesis.
Here’s what that means in practice.
Optimize for topical breadth and depth
Structure your site using a hub-and-spoke model. For each primary category or keyword theme, build a comprehensive hub page that introduces the broader topic and links out to supporting spoke pages that dive deeper into specific facets.
Each spoke page should focus on one clear angle and develop it thoroughly enough to establish distinct purpose and query intent. Because user questions branch in different directions, covering multiple angles helps expand your overall topical reach.
Link related spoke pages to one another where it makes sense, and consistently back to the hub as the central reference point. This reinforces how your content connects and gives AI systems clearer signals about the relationships between topics.
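As a rough sketch of that linking pattern (the URLs and topics here are hypothetical), a hub page and one of its spokes might cross-link like this:

```html
<!-- Hub page (/analytics/): introduces the broader topic and links out to each spoke -->
<a href="/analytics/session-replay/">Session replay: what it is and when to use it</a>
<a href="/analytics/heatmaps/">Heatmaps: reading user behavior at a glance</a>

<!-- Spoke page (/analytics/session-replay/): links back to the hub and to a related spoke -->
<a href="/analytics/">Analytics hub</a>
<a href="/analytics/heatmaps/">Related: heatmaps</a>
```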
Optimize for chunk-level retrieval
Don’t rely on the whole page to provide context. Each chunk should be independently understandable.
Keep passages semantically tight and self-contained: one idea per section, with each passage focused on a single concept, as Our Family Wizard does.
Optimize for answer synthesis
Summarize complex ideas in a clearly structured “Summary” or “Key takeaways” block, then expand beneath it. Start answers with a direct, concise sentence. Favor a plain, factual, non-promotional tone.
This formatting, from Baseten, puts an easily digested TL;DR right at the top of a post explaining AI inference.
Start from the premise that AI readability is about clarity, not gimmicks. That approach also appeals to humans who want to quickly understand the content they’re consuming.
AI systems favor content where:
Answers are named, not inferred.
Sections have clear intent.
Key points are easy to lift without rewriting.
That often means being more explicit than traditional SEO ever required — defining terms directly, summarizing sections, and stating conclusions early. It’s kind of the opposite of keyword-stuffed content that’s overwritten to hit assumed “preferences” the Google algorithm might have for content length.
The only real hesitation I have is that content generated by AI may oversimplify nuance. Not every page should be optimized for a single atomic answer, and strategic or opinionated content still benefits from narrative flow.
I try to strike a balance by:
Explaining first, then elaborating.
Labeling insights, then proving them.
Making the answer obvious before adding sophistication.
When done well, this has appeal for both AI and humans.
Now, all of that said, LLM-produced content — just check out your LinkedIn feed if you need examples — very quickly became recognizable as exactly what it is: AI-produced content that’s easily consumed by AI models.
The effect can be very off-putting depending on the reader, even if your content includes, as it always should, original POVs, research, and/or data that the LLMs couldn’t possibly find in existing content.
Keep a close eye out for AI tells: the dreaded em dash, squished vertical line spacing, bullet-point lists featuring emojis, and sentence structures like “It’s not just [X]. It’s also [Y].” or “It’s more than [A]. It’s [B].” Remove them wherever you see them.
How do you prioritize which content to revise for AEO?
For AEO, prioritization is less about traffic, which is where a lot of SEO marketers stop KPI-wise, and more about answer value.
I start by identifying content that:
Contains clear expertise or proprietary insight, which LLMs love.
Answers questions people ask repeatedly but doesn’t state the answer cleanly.
Is already referenced internally by sales, support, or customers as “explainer” material.
Also worth noting: Is the content focusing on one of our core products or services, even indirectly? That’s fundamental. Visibility for visibility’s sake isn’t worth much, so make sure it’s got a natural tie-in to pipeline or revenue growth.
As far as types of content to prioritize, reports, tools, and evergreen guides tend to rise to the top because they already contain structured thinking, if not structured answers. AI systems don’t reward originality embedded in prose. They reward explicit conclusions, definitions, and frameworks.
Here’s my simple AEO prioritization test:
Can an AI model confidently quote or summarize this page as is?
Would it know what question this page answers within the first few seconds?
Are the key takeaways labeled or implied?
If the answers are “no,” and the theme of the content is important to your business growth, that content is a strong reformatting candidate.
How do you approach metadata when revising content for AEO?
Before I dive into the how, I’ll mention that these elements have a different function for AEO than they do for SEO. In SEO, they function as ranking levers. In AEO, they serve more as context anchors.
Let’s break down each key element of metadata and show how that difference should play out.
Title tags
In traditional SEO, the title tag states the topic of the page. For AEO, make it more descriptive of the page’s primary answer or function.
So a title tag that reads “Session replay software” for SEO purposes could be rewritten for AEO to say “Session replay: what it is, when to use it, and when not to use it.” Title tags with more context give AI systems clearer signals about how and when to cite the content.
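In markup terms, that before-and-after is simply:

```html
<!-- SEO-style title: names the topic -->
<title>Session replay software</title>

<!-- AEO-style title: states the page's primary answer or function -->
<title>Session replay: what it is, when to use it, and when not to use it</title>
```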
Headings (H1-H3)
In traditional SEO, header tags have been used to identify categories, for example, “compliance monitoring.”
In AEO, I use them to map to specific questions or claims. Possible updated versions of the above would be:
What is compliance monitoring?
Why does compliance monitoring matter for companies in {x} vertical?
Common issues caused by a lack of compliance monitoring
When should a CTO invest in compliance monitoring?
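A minimal sketch of how those question-led headings might sit in the page’s markup (the heading levels are illustrative):

```html
<h1>Compliance monitoring</h1>
<h2>What is compliance monitoring?</h2>
<h2>Why does compliance monitoring matter for companies in {x} vertical?</h2>
<h2>Common issues caused by a lack of compliance monitoring</h2>
<h2>When should a CTO invest in compliance monitoring?</h2>
```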
To stress-test your header tags, try answering them. If it takes you more than a few sentences to answer your question or prove your assertion clearly and persuasively, it’s probably the wrong question and not one a user is going to type into ChatGPT.
Meta descriptions
Meta descriptions are those chunks of expanded text that might or might not be pulled into the SERP in traditional SEO, but do serve to explain more about the content. In AEO, they act as a compressed intent signal. AI systems, like the SERPs, may choose not to quote the meta description, but good ones help reinforce:
Who the content is for.
What problem it resolves.
How it should be framed.
Through the AEO lens, I look at meta descriptions as a one-sentence briefing note for both users and LLMs.
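Carrying the compliance-monitoring example through, a meta description written as that briefing note might look like this (the copy is illustrative, not a template):

```html
<meta name="description"
      content="A plain-language guide for CTOs weighing compliance monitoring: what it covers, when it becomes necessary, and how to evaluate the options.">
```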
What changes — and what doesn’t — in the shift to AEO
You may have noticed a theme here: while, in general, what’s good for SEO is good for AEO, there are material differences between the two disciplines. Knowing what they are and how to adapt accordingly can pay off in AI search visibility.
I’m not arguing that your content strategy or themes should pivot. But knowing that AI models read and ingest content differently than more traditional SEO algorithms is important and should be factored into the way you’re repurposing your evergreen work from months and years past.