If you rank your own product #1 in “best of” listicles, it’s not just a search-quality issue — it may violate FTC rules that took effect in October 2024.
Driving the news. As Lily Ray noted on LinkedIn, the FTC’s Consumer Review Rule (16 CFR Part 465) prohibits several deceptive practices tied to reviews and testimonials, including:
Presenting company-controlled content as independent reviews.
Publishing reviews of products or services never actually used.
Attributing reviews to people who didn’t write them.
Penalties can reach $53,088 per violation, and each page may count separately. Ray also shared a reference table she generated with the help of Claude.
Why now. “Best X” and “Top 10 Y” listicles have surged as a GEO tactic over the past couple of years. These pages often perform well in search and increasingly influence AI-generated answers.
The backstory. Before the rule was formalized, Ray said at least one company faced legal action for publishing hundreds of “best of” pages that:
Ranked its own services #1.
Included fabricated competitor reviews.
Used fake reviews on third-party platforms.
The Better Business Bureau later censured the company for unsubstantiated claims.
What’s happening. Many modern listicles follow a similar pattern:
A brand publishes a “best tools” list.
Includes competitors it hasn’t tested.
Uses subjective or invented scoring systems.
Ranks itself #1.
These listicles may imply independence or firsthand evaluation when neither exists.
The nuance. You can publish comparison content that includes your own product. However, based on FTC guidance, risk increases when:
You imply objectivity, but promote your own product.
You present reviews not based on real experience.
You fail to clearly disclose material relationships.
What Google is saying. Google is aware of the low-quality listicle trend. In a statement to The Verge, a Google spokesperson said the company applies protections against manipulation in Search and Gemini, and reiterated its guidance: create content for people and ensure it’s understandable to search systems.
Why we care. What has worked as a visibility tactic may carry risk on two fronts — regulators and a potential Google Search algorithm change. That means this popular GEO tactic could decline quickly as its effectiveness drops.
Caveat. I’m not a lawyer. Consult your own legal counsel if you’re concerned about using this tactic.
Human-written content dominates Google’s top rankings, appearing in the No. 1 position 80% of the time versus just 9% for purely AI-generated pages, based on a Semrush analysis of 42,000 blog posts.
The details. Semrush analyzed 20,000 keywords and their top 10 results, classifying content with an AI detector.
Human-written pages outperformed AI and mixed content across all top 10 positions.
The gap was widest at Position 1, where human content was 8x more likely to rank.
AI content appeared more often lower on Page 1, nearly doubling from Positions 1 to 4.
Yes, but. AI detection tools are widely known to be inconsistent and can misclassify human and AI-written content, creating some possible “fuzziness” in these classifications.
Why we care. AI-generated content works, until it doesn’t. Yes, AI can help you rank, but this data suggests human insight still drives the best performance. For competitive queries, originality, expertise, and editorial judgment remain your unfair advantages.
Perception vs. data. 72% of SEOs said AI content performs as well as or better than human content, yet ranking data showed a clear human advantage at the top.
How teams use AI. No surprise, AI is widely adopted and often used in a hybrid approach:
87% of teams keep humans heavily involved in content creation.
64% use a human-led, AI-assisted workflow.
AI is most common in research, drafting, and optimization.
Use drops sharply for multimedia, localization, and higher-judgment tasks.
What’s driving adoption. AI accelerates output, but doesn’t reliably improve it.
70% cite faster production as AI’s top benefit.
Only 19% say it improves content quality.
About the data: The analysis examined 42,000 blog pages from 200,000 URLs tied to 20,000 keywords, using GPTZero to classify content. It also includes a survey of 224 SEO professionals working in content and search.
In this case study, we went deep instead of broad. We focused on one question: why wasn’t a brand present in a single ChatGPT prompt across ~70 iterations?
We chose one prompt: “What are the best hotels in New York City?”
We analyzed mentions, citations, fanouts, and SERPs in Google and Bing. We also planned to analyze GPT memory, but it made no discernible difference to mentions, citations, or fanouts.
What we did and what we found
We chose NYC hotels because it’s a crowded, mature market with juggernauts and up-and-comers. We also have no connection to the NYC luxury hotel space — we intentionally picked an area where we could stay objective and learn from scratch.
After running the prompt “what are the best hotels in New York City” 68 times, we identified which hotels appeared most consistently and which were nearly invisible.
We chose the Baccarat Hotel as our “client” because it appeared only once (1.5% of the time), despite strong reviews and clear alignment with the prompt’s intent. We wanted to know why — and whether the hotel could change that.
Key findings:
You can dominate query fanouts on Google SERPs and still underperform in ChatGPT brand mentions.
Bing matters most. Ranking in Bing articles for fanouts aligns more directly with ChatGPT mentions — not just citations.
In verticals dominated by third-party content, you face complex digital PR paths to increase visibility.
Note: A full methodology breakdown appears in the appendix.
Mentions of the Baccarat vs. the Fifth Avenue Hotel show just how wide the disparity in ChatGPT visibility can be
The Baccarat Hotel appeared once in 68 trials (1.5%).
Top performers were large luxury hotels like the Four Seasons Hotel New York Downtown.
ChatGPT also identified boutique hotels as a subcategory, generating a secondary list in its answers. Boutique hotels like the Baccarat are typically smaller and not part of large chains.
Within this boutique subcategory, the Baccarat still underperformed. The Fifth Avenue Hotel, the top-performing boutique property, appeared 13 times (roughly 20% of trials), versus the Baccarat’s 1.5%.
Reputation can’t explain visibility disparities
We first checked whether anything in the hotel’s history or reputation could explain the gap. As the chart below shows, nothing significant did:
| | The Baccarat | The Fifth Avenue |
| --- | --- | --- |
| Year Founded | 2015 | 2023 |
| Current Price | $930 | $563 |
| Number of Google Reviews | 1.3k | 213 |
| Google Reviews Rating | 4.6 | 4.6 |
| Number of Expedia Reviews | 531 | 201 |
| Expedia Reviews Rating | 9.4 | 9.6 |
Overall, the Baccarat has been around longer and has more reviews. On quality, the Fifth Avenue Hotel has no edge in Google reviews and only a slight edge in Expedia reviews. The only area where the Baccarat lags is price — but that’s unlikely the issue when The Ritz-Carlton, a consistent non-boutique winner, is listed at $1,100.
Further reinforcing the Fifth Avenue’s underdog status: one of its most prominent Google results (rank 2) was a Wikipedia page for a different Fifth Avenue Hotel that closed in 1908, creating potential entity confusion similar to the two Danny Goodwins.
If the Fifth Avenue Hotel had been the one missing, it would suggest a less established brand with entity confusion. But the opposite happened — it prevailed in ChatGPT.
So what was the problem for the Baccarat Hotel?
Winning Google SERPs for query fanouts doesn’t help, but winning Bing SERPs does
When ChatGPT performs a web search, it sends a series of queries you can extract via Chrome DevTools. In this case study, examples included:
[Best hotels in new york city]
[Top rated luxury hotels in new york city recommendations]
[Best hotels in nyc top luxury and boutique hotels new york]
[Best luxury and boutique hotels in new york city recommendations reviews]
[Best hotels in new york city nyc top hotels]
[Top hotels in nyc luxury boutique best places to stay new york city]
In total, we extracted 25 unique query fanouts.
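If you want to replicate the extraction step, one approach is to save the session’s network traffic as a HAR file from Chrome DevTools and scan it for query strings. The sketch below is illustrative rather than our exact tooling; the key names in the regex ("search_query", "q", "query") are assumptions you should adjust to whatever appears in your own capture.

```python
import json
import re

# Key names are assumptions; inspect your own HAR capture and adjust.
QUERY_KEY = re.compile(r'"(?:search_query|q|query)"\s*:\s*"([^"]+)"')

def extract_fanouts(har_path: str) -> set[str]:
    """Scan every request/response body in a HAR export for query strings."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    queries: set[str] = set()
    for entry in entries:
        bodies = (
            entry["request"].get("postData", {}).get("text", ""),
            entry["response"].get("content", {}).get("text", ""),
        )
        for body in bodies:
            queries.update(q.lower() for q in QUERY_KEY.findall(body))
    return queries

print(sorted(extract_fanouts("chatgpt_session.har")))
```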
What we saw in the Google SERPs
If we only looked at the articles dominating fanout SERPs in Google, we’d expect the Baccarat to narrowly outperform the Fifth Avenue in ChatGPT. That didn’t happen.
In the table below, the Baccarat “wins” three of the top 10 most frequently appearing pages, while the Fifth Avenue Hotel “wins” two. The other five feature neither. A “win” means one of the following:
The Baccarat is listed as a “one key” hotel, placing it at the bottom of the list, while the Fifth Avenue Hotel is listed as a “two key” hotel, placing it in the middle.
Both are mentioned, but the Fifth Avenue much more positively.
What we saw in the Bing SERPs
By contrast, looking only at the articles dominating fanout SERPs in Bing, we’d expect the Fifth Avenue to outperform the Baccarat in ChatGPT — and it did.
In the table below, the Fifth Avenue “wins” five of the eight most frequently appearing URLs.
Note: The table includes two fewer URLs because Bing SERPs were slightly less diverse for these fanouts.
Sample entries from the table:
Both hotels are listed, but the Fifth Avenue appears under “Our Top Picks.”
https://travel.usnews.com/hotels/new_york_ny/ is a win for the Baccarat, which is #11 on the list while the Fifth Avenue is #16.
The connection between Bing visibility and brand mentions
Bing rank strongly predicts ChatGPT citations — 87% align with Bing’s top results, Seer Interactive found. Our case study supports this and extends it.
We examined the relationship between fanouts (Seer focused on prompts) and brand mentions.
Example mention: “For a luxury boutique feel: listings like The Fifth Avenue Hotel or Crosby Street Hotel consistently make ‘top NYC’ lists from travel editors.”
Mentions are often more valuable than citations. Most people won’t follow citations but will remember the top recommendation.
There’s ongoing debate about whether fanouts shape ChatGPT’s answers and mentions, or simply support answers generated from training data. For example, Leigh McKenzie argued on LinkedIn:
“The citations you see at the bottom? Those are surfaced after the answer is generated, not before. It’s post-hoc rationalization. The model didn’t choose your brand because it found your URL. It generated an answer based on what it already knows, then pointed to sources that support it.”
By contrast, our data aligns with Beehiiv’s research, which suggests citations do shape mentions.
Training data doesn’t appear to be the issue for the Baccarat. Compared to the Fifth Avenue, it’s older, has more reviews, and holds similarly high ratings across major platforms. What it lacks is strong presence in Bing results for fanouts and citations, which appears to lead to fewer mentions.
A simple flow might look like this:
Brand ranks in Bing → ChatGPT fanouts pull in Bing pages → ChatGPT synthesizes training and Bing data to generate mentions
Coda: A tale of two Forbes articles, or why the details matter
In this vertical, third parties like Forbes and Condé Nast control the space. Visibility depends on who mentions you, so you need a strong outreach strategy — not just updates to your own content.
Our data shows that “targeting Forbes” isn’t specific enough.
The top result surfaced in both Bing and ChatGPT was the same Forbes article. In Google, the most frequent fanout result was also a Forbes article — but a different one.
As we’ve seen, getting into Google’s Forbes article likely wouldn’t provide a meaningful boost. The Baccarat “won” in that piece.
Getting into Bing’s Forbes article, where the Baccarat wasn’t mentioned, could make all the difference. This requires a highly surgical approach grounded in Bing data.
Generalities won’t work; detail reigns supreme.
Appendix: Methodology
Model: We prompted GPT-5.2 Instant in the ChatGPT interface and manually extracted results; we didn’t use the API.
Number of iterations: We ran the same prompt 68 times.
Prompt: “What are the best hotels in New York City?”
Settings: We tested three memory states:
Saved memories off
Saved memories on, using unrelated real user memories
Saved memories on, with one memory about needing gluten-free travel accommodations
For all trials, we turned off “reference chat history” to avoid interference across iterations.
We expected differences based on memory settings but found none, so we treated all trials as a single dataset.
Is it possible to get an accurate view of the current state of SEO?
There have been multiple attempts to reach consensus on what works, predict what might be coming, and identify the factors that may play a role in “good” (or “bad”) SEO.
As useful and productive as some of this may be, none of it offers the same grounded data as the Web Almanac, a project I was honored to be a part of. With the publication of the 2025 SEO chapter, we can now review the data and spot the emerging trends from 2025 and what that could mean for SEO in 2026.
SEO standards on the rise
2025 has been another year of increasingly higher SEO standards — which can only be a good thing:
Near-universal adoption of HTTPS (now up to 91%+).
Increased use of title tags at nearly 99% adoption, and even viewport meta tags at over 93% adoption.
Canonical adoption rose from 65% in 2024 to 67%+ in 2025.
HTML validity is slowly improving. For example, invalid <head> elements dropped to 10.1% on desktop and 10.3% on mobile from 10.6% and 10.9%, respectively, in the previous year.
Robots.txt error rates fell: 404s declined to 13% from 14% the previous year, and 5xx responses fell to ~0.1%.
Meta robots usage has crept up to 46.2% in 2025 from 45.5% the prior year.
Not all of these statistics represent rapid change, but they do show steady and consistent change, at the very least. The 2025 Web Almanac data presents the web as a more secure and easier-to-crawl place, which is certainly a positive.
So, can SEOs take a victory lap right now? No, as there is more to do in 2026, even if the basics do feel like they’re stable or steadily improving.
Content management systems (CMSs) and SEO plugins play a huge role in developing SEO best practices and cementing the “default” or de facto standards.
As the CMS chapter in the 2025 Web Almanac shows, more and more websites are now powered by a CMS.
Of these, the top five most popular systems over the last four years likely aren’t surprising.
Many of these SEO defaults, in turn, are underpinned by the SEO plugins typically used on WordPress sites:
That’s not to say that using these platforms or tools ensures a perfect website setup. That said, key elements or functions of these tools can become industry standard due to their ubiquity:
Robots.txt.
Sitemap.xml.
Canonical tags.
Semantic HTML.
Structured data.
Not all of these are on by default. Sometimes they require inputting basic details or simple implementation. Regardless, their ease of access increases the likelihood that they will become an SEO best practice.
This is happening, and it’s proving effective. What this means for 2026 and beyond is that:
Working with or lobbying major platform and tool makers is one of the key ways to shape SEO’s future direction.
SEO tools and platforms will continue to enforce best practices on the front end, but they could also benefit from AI and assistive features behind the scenes. While it may be less visible in the data itself, these tools offer the opportunity to move quickly and gain deeper insight.
Structured data usage was previously driven by what Google rewarded in the search engine results pages (SERPs). SEOs and plugin developers alike could be inspired to move beyond what’s beneficial for the SERPs and onto what contributes to a more predictable, structured, and retrievable data set.
Deprecated, but not forgotten
Defaults and best practices help, but they don’t finish the job. While attention often shifts to new features, old or forgotten standards still see widespread use.
There have been many different cases where deprecated settings or standards have prominently appeared in the data.
For example, in meta robots bot declarations, “msnbot” is still in the top 5, even though it was replaced over 16 years ago.
AMP use has plummeted over the years, but it’s still found on over 38,000 homepages. While technically not deprecated, amp.dev has seen no recent activity for nearly four years now.
The most common meta robots attributes are “index” and “follow,” which are implicit and largely ignored.
Web changes — no matter how small — are often neither quick nor easy to get done, and we’ll likely see traces of deprecated features and settings in the data for years to come.
More work is needed
The improvement in SEO standards doesn’t apply to all features and sites. There are some that aren’t moving in the same direction:
The mobile performance gap stubbornly lingers — even as it continues to improve.
Duplicate content management is still lagging, with nearly 33% of pages missing canonical implementation.
Advanced configurations have barely moved from the previous year — nearly 67% of images don’t have loading attributes set, and over 91% of iframes don’t have set loading attributes.
Many deprecated standards refuse to go away.
While CMS default settings or configurations can take credit for some of the larger changes, they also bear some of the responsibility for the issues above. For example, median Lighthouse scores for some of the major CMS platforms are still lagging, especially on mobile (though they did improve over last year).
The long tail of the web is still messy, and this will probably always be the case. The Web Almanac dataset doesn’t exclude websites that are no longer relevant or abandoned.
Site metrics that meet the “top” standards from an SEO best practices point of view can likely be achieved with an out-of-the-box site built on any major CMS with a modern theme and 30 minutes of carefully considered configuration. This is one of the most significant opportunities in technical SEO.
In 2026, we’ll likely:
Continue to see performance gaps converge between desktop and mobile experiences — but slowly.
Still be able to see echoes of past markup and decisions. Even if the collective focus is pulled to the “new world” of AI search, many SEOs won’t abandon proven tactics and approaches from past years. This dataset develops slowly.
Observe something that’s mostly “business as usual.”
One of the more eagerly awaited elements of the Web Almanac data was whether we can chart the increasing presence and impact of AI search and crawlers in the decisions of SEOs and developers.
Within the data, we observed two major developments:
Robots.txt is increasingly used as a policy document rather than for crawler control.
Creation and adoption of llms.txt is one of the few signs of LLM-first decision-making.
Commenting on the state of SEO is challenging because the definition isn’t fixed. What’s good or bad practice is often hotly debated, and in the world of AI search, another (painful) metamorphosis is now taking place.
In the HTTP Archive data we can observe the influences working on SEO from a “nuts and bolts” point of view, report on what we see, and enable people to make up their own minds.
Specifically, one of the elements we added this year was the analysis of the llms.txt file.
This is a highly controversial text file, but our inclusion was not an endorsement. It’s a recognition that changing trends may (or may not) shape the web. Whether it’s effective or accepted, its adoption says something, and we felt it was important to review that.
Robots.txt as a bouncer
It’s clear that robots.txt has a more important job now than ever. Until relatively recently, it was largely used for targeted control of crawlers, particularly Googlebot and Bingbot.
For most SEOs, however, robots.txt was mostly an exercise in both ensuring we weren’t blocking anything by accident and resolving problem areas with Disallow rules. This has changed:
GPTBot: 4.5% on desktop and 4.2% on mobile in 2025 is up from 2.9% on desktop and 2.7% on mobile in 2024, representing a ~55% increase.
CCBot: 3.5% on desktop and 3.2% on mobile in 2025 is up from 2.7% on desktop and 2.4% on mobile in 2024.
PetalBot: 4.0% on desktop and 4.4% on mobile in 2025 (not separately tracked in 2024).
ClaudeBot: 3.6% on desktop and 3.4% on mobile in 2025 is up from 1.9% on desktop and 1.6% on mobile in 2024, nearly doubling.
Robots.txt isn’t the only way to manage bots — and arguably isn’t the best — but it introduces a new decision that must be made: How should websites handle LLM crawlbots?
This will be one of the biggest areas we’ll see change in on the technical side of 2026:
Businesses with existing bot strategies will need to evolve them.
Businesses that don’t meaningfully manage crawlers will start feeling the pressure to do so.
Robots.txt will still be the clearest and easiest way to handle crawlers. We will almost certainly see more good and bad bots alike.
In 2026, SEOs will be drawn into bot management conversations spanning marketing, technology, and security. “Which bots should we allow?” is a question with downstream effects on budgets, revenue, and users, and we’ll need to closely monitor what develops.
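Auditing where you currently stand takes only a few lines. Here’s a minimal sketch using Python’s standard library robots.txt parser; the bot list mirrors the crawlers tracked above, and the target site is an example:

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "CCBot", "PetalBot", "ClaudeBot"]

def audit_ai_bots(site: str) -> dict[str, bool]:
    """Report whether each AI crawler may fetch the site's homepage."""
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetch and parse the live robots.txt
    return {bot: rp.can_fetch(bot, site) for bot in AI_BOTS}

print(audit_ai_bots("https://example.com"))
```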
LLMs.txt
LLMs.txt is an aspiring web standard that aims to guide LLM crawlbot behavior and make it easier for them to retrieve content before generating an answer. It’s a highly controversial .txt file, and there’s a vigorous debate on whether it actually benefits LLMs, will gain widespread use, and is a possible vector for manipulation.
The rationale or efficacy of this file isn’t something we need to cover here. For this article, the true point of interest is the adoption of llms.txt as a statement of intent.
At the start of 2025, I crawled the Majestic Million, a regularly updated list of the top 1 million websites ranked by backlink authority, in search of llms.txt and found that adoption was extremely low (0.015% of sites).
While searching one million sites versus 16 million presents some logistical differences, I was expecting a very low level of adoption based on prior experience. I was surprised at how wrong I was.
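The probe itself is simple. Here’s a sketch of the kind of check involved — with the caveat that “valid” is approximated below by status code and content type, whereas the Almanac’s definition of a valid llms.txt is stricter:

```python
import requests

def has_llms_txt(domain: str) -> bool:
    """Rough check: does the domain serve a plausible llms.txt?"""
    try:
        r = requests.get(f"https://{domain}/llms.txt", timeout=5)
    except requests.RequestException:
        return False
    # Many sites return a soft-404 HTML page, so filter those out.
    content_type = r.headers.get("Content-Type", "")
    return r.status_code == 200 and "text/html" not in content_type

domains = ["example.com", "another-site.org"]  # e.g., rows from the Majestic Million
print({d: has_llms_txt(d) for d in domains})
```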
According to the 2025 data, just over 2% of sites had a valid llms.txt file, and:
39.6% of llms.txt files are related to All in One SEO (AIOSEO)
3.6% of llms.txt files are related to Yoast SEO
This number is still relatively low, but it’s much higher than I thought it would be and potentially represents a huge acceleration.
The primary reason fueling llms.txt adoption is SEO plugins that make the file easy to enable.
We can see that llms.txt adoption has continued to rise ever since we started collecting data from across the web.
If, however, the implementation of this file is actually a default feature in some scenarios, it could be easy to overvalue its significance.
LLMs.txt will still be a barometer of AI search decision-making in 2026:
More tools and plugins will offer this functionality if they don’t already.
Yoast and Rank Math (which don’t default llms.txt to “on”) represent more growth opportunities for this file. Many SEOs may decide to switch it on even if there isn’t strong evidence of its efficacy.
The rate of adoption will continue to climb, but whether it’ll reach a point where it becomes an accepted best practice is harder to forecast.
FAQ growth
Another interesting trend worth discussing is the increase in the use of the FAQPage schema.
While this isn’t as explicit a trend as robots.txt or llms.txt usage, the increased adoption of this schema type is particularly interesting.
Given Google’s 2023 decision to restrict FAQ rich results in the SERPs, you might expect FAQPage usage to be declining. However, the last three publications of the Web Almanac show that isn’t the case.
The use of FAQPage schema is now an emerging trend as AI search heavily cites FAQ content in its outputs.
This could be correlation rather than causation, but the steady increase in FAQPage schema is a strong sign of AI search strategies changing the shape of the web.
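For reference, this is what the trend looks like in markup. Below is a minimal FAQPage object following schema.org’s Question/Answer nesting, generated with Python purely for illustration; the question and answer copy is invented:

```python
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the Web Almanac?",  # invented example copy
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "The Web Almanac is an annual report on the state "
                        "of the web, built on HTTP Archive data.",
            },
        }
    ],
}

# Embed the output in the page inside <script type="application/ld+json">.
print(json.dumps(faq, indent=2))
```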
To echo another conclusion from earlier, 2026 may well see continued growth of structured data types even if they don’t result in an obvious improvement. While the growth is unlikely to be explosive, making a case for their implementation is easier when we don’t just optimize for Google.
Will AI search reshape the web in 2026? Unlikely. Will we continue to see signs of its importance? Almost certainly, but let’s not get carried away.
SEO has a reputation for changing quickly. Sometimes that’s true. More often, it’s the conversation that moves quickly, while the web itself changes at a steadier pace.
The 2025 Web Almanac data clearly reflects that tension. Core SEO hygiene continues to improve year over year, but largely through default features and settings, tools, and platform behavior rather than deliberate optimization.
At the same time, long-deprecated standards linger, advanced configurations remain uneven, and the long tail of the web remains untidy. Progress is real, but it’s incremental — and sometimes accidental.
What has shifted meaningfully is intent.
Robots.txt is no longer just crawl housekeeping. It’s becoming a policy surface.
LLMs.txt, regardless of whether it proves useful, represents a new class of decision-making entirely.
FAQ patterns are on the rise again, not because of SERP features but because structured, extractable answers have immense value elsewhere.
2026 will not be remembered as the year SEO ended or was reborn. It may, however, be considered the year the AI search layer became more defined. A new patch applied — not a fundamental rewriting.
Most guidance on optimizing for AI still focuses on how content is written. But AI systems don’t read content the way humans do. These systems extract information, break it into parts, and reuse it in new contexts. What matters is whether your content can be pulled into an AI-sourced answer cleanly.
Where traditional SEO has centered on ranking pages, AI systems prioritize retrievable units of meaning. That changes how content needs to be built.
The 5 core principles of AI-preferred content design
When content is retrieved in pieces, used in generated answers, and selectively attributed, structure becomes the lever. These principles show up consistently in content that gets surfaced by AI systems:
1. Modular by design
Content is more useful when it’s built in discrete units. Each section should:
Address a specific question or subtopic.
Be understandable without relying on surrounding text.
Long sections that depend on earlier context are harder to reuse in isolation. Modular structure also makes content easier to update, test, and repurpose across surfaces — without rewriting the entire page.
2. Hierarchically structured
A clear hierarchy helps systems understand what each section contains and how it relates to the rest of the page. H2 → H3 → H4 structure should signal:
Topic: What the section is about.
Intent: What question it answers.
Scope: How narrow or specific it is.
Headings should make each section’s purpose immediately clear. When that signal is weak, it becomes harder to match the right section to the right query.
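One way to pressure-test your hierarchy mechanically: a short sketch using BeautifulSoup (an assumed parser choice; any HTML parser works) that flags headings that skip a level:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def audit_headings(html: str) -> list[str]:
    """Flag headings that skip a level, e.g. an H4 directly under an H2."""
    soup = BeautifulSoup(html, "html.parser")
    issues, last_level = [], 1  # treat the page title as the H1
    for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
        level = int(tag.name[1])
        if level > last_level + 1:
            text = tag.get_text(strip=True)
            issues.append(f"<{tag.name}> '{text}' jumps from h{last_level}")
        last_level = level
    return issues

html = "<h2>Pricing</h2><h4>Enterprise tier</h4>"  # skips the h3 level
print(audit_headings(html))
```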
3. Explicit over implied
AI systems rely on what’s stated directly. Make relationships and conclusions clear by:
Defining terms when they’re introduced.
Stating outcomes or takeaways directly.
Clarifying cause-and-effect or comparisons, rather than implying them.
If something is important, it should be written plainly. Copy that requires inference is harder to interpret and more likely to be skipped in favor of clearer alternatives.
4. Answer-first formatting
Place the direct answer to the section’s core question at the top, then expand.
AI systems prioritize passages that resolve a query immediately. When the answer is delayed or embedded within a longer explanation, the relevance of that passage becomes less obvious.
The rest of the section can then add deeper nuance, examples, or other details that further understanding without changing the core response.
5. Designed for passage-level extraction
Passages compete for selection, both within the same article and across the web.
When multiple sections address the same question in similar ways, they dilute each other. Clear, specific, and well-scoped content “chunks” are more likely to be selected.
You can audit a passage’s usefulness by asking:
Is it understandable without additional context?
Does it fully answer a single question?
Can it be quoted as an answer without any editing?
If the passage needs context or cleanup, it’s less competitive.
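To make that audit concrete, it helps to see a page the way retrieval systems roughly do: as (heading, passage) pairs. The sketch below is a simplification — production chunkers differ — but the three audit questions above apply to each pair it returns:

```python
from bs4 import BeautifulSoup

def split_into_passages(html: str) -> list[tuple[str, str]]:
    """Split a page into (heading, passage) units, one per H2/H3 section."""
    soup = BeautifulSoup(html, "html.parser")
    passages, heading, buffer = [], "intro", []
    for el in soup.find_all(["h2", "h3", "p", "li"]):
        if el.name in ("h2", "h3"):
            if buffer:
                passages.append((heading, " ".join(buffer)))
            heading, buffer = el.get_text(strip=True), []
        else:
            buffer.append(el.get_text(strip=True))
    if buffer:
        passages.append((heading, " ".join(buffer)))
    return passages

# Run the three audit questions above against each pair this returns.
```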
Common content patterns that improve AI retrieval and use
These patterns show how structured, answer-first content is applied in practice — making it easier for AI systems to match, extract, and use.
The ‘definition + expansion’ block pattern
Start with a clear definition. Then add detail. This works best for:
Concepts.
Terminology.
Processes.
The definition should establish what something is in a way that can be quoted independently. The expansion then adds context, nuance, or examples.
This pattern helps position your content as a reference point for core concepts — especially when AI systems need a clean, authoritative definition.
The ‘question → direct answer → context’ pattern
AI systems are designed to respond to queries. This pattern aligns your content to that structure.
Order your content as:
Question.
Immediate answer.
Supporting detail.
The answer should resolve the query in one to two sentences, using the same language or phrasing as the question where possible.
Remaining content can add depth through nuance and edge cases that extend beyond the core answer.
The ‘framed list’ pattern
Lists work best when they’re introduced by a clear framing sentence that tells the reader — and the retrieval system — what the items represent. Each item should:
Follow a consistent structure (e.g., all actions, all criteria, all features).
Stay at the same level of detail.
Clearly map back to the framing sentence.
This pattern works especially well for steps, criteria, features, and takeaways.
Well-structured lists are easier for systems to parse and reuse, especially when each item is clearly defined within the context of the list.
The ‘comparison’ pattern
Structure content to make differences explicit. This works well for alternatives (“X vs Y”), tradeoffs, and decision-making criteria. You can use:
Side-by-side comparisons.
Clear evaluation criteria (price, features, use case, limitations).
Direct statements of when to choose each option.
Content that clearly outlines differences is easier for AI systems to extract and reuse in answers that involve evaluation or recommendations.
Top content design mistakes that limit AI visibility
Most AI surfacing issues come back to content structure. When structure is weak, answers are harder to identify and extract. That tends to show up in the form of:
Overly narrative, under-structured content
Long paragraphs with key points buried inside make it harder to isolate a clear answer. Without strong subheadings to define what each section covers, systems have fewer signals to identify where that answer lives.
Ask:
Does this section answer a clear question, or just explore a topic?
Is the main point easy to identify in the first few lines?
Do the subheadings clearly signal what each section contains?
Vague or non-descriptive headers
Headers like “Overview,” “Introduction,” or “Key Takeaways” don’t provide enough signal about what the section actually contains.
Headings help systems understand what a section covers and how it relates to a query. When they’re vague, the relationship between section and query becomes less explicit.
Ask:
Would this header make sense out of context?
Does it clearly reflect the question or topic being answered?
Could multiple sections on the page use the same header?
Answers buried mid-paragraph
When the answer appears halfway through a paragraph, it’s harder to isolate as a clean, reusable unit.
AI systems look for segments that clearly resolve a query. When the answer is embedded within surrounding context, it becomes less distinct and more likely to be overlooked or reassembled.
Ask:
Is the answer clearly distinguishable from the neighboring text?
Does contextual copy clarify or dilute the answer’s main point?
Redundant or repetitive sections
When sections overlap, they compete for the same query and weaken the overall signal. Instead of reinforcing the topic, similar sections can fragment it across multiple passages, making it less clear which one should be selected.
Ask:
Do multiple sections answer the same question in slightly different ways?
Is each section clearly scoped to a distinct angle or subtopic?
Clear separation improves both retrieval and selection.
How to evolve existing content for AI without starting over
Most teams don’t need to totally rebuild content from scratch. Updating existing content for today’s landscape just requires a few structural changes.
Break content into logical units
Identify where natural sections exist and what question each one answers.
Split broad or mixed sections so each one resolves a single idea or query.
If a section covers multiple points, separate them into distinct sections.
Rewrite for answer-first clarity
Move the clearest version of the answer to the top of each section.
Remove lead-in language, qualifiers, or examples that appear before the answer.
Ensure the opening lines can be understood without relying on the rest of the page.
Strengthen structural signals
Make headings specific enough to reflect both the topic and the question being answered.
Use formatting (lists, short paragraphs, summaries) to make key points easier to scan and isolate.
Check that each section’s purpose is immediately clear from its heading and first sentence.
Introduce distinct framing
Turn generic sections into clearly defined units using the patterns above: a “definition + expansion” block, a “question → direct answer → context” section, a framed list, or a comparison.
Ensure each section covers a distinct angle and does not repeat or overlap with others. This helps consolidate signal and makes it easier for systems to select and attribute the right passage.
The future of content design in AI-mediated search
AI systems are already reshaping how content is surfaced, and that shift will continue as answers become more personalized and draw from multiple sources.
As a result, page-level ranking matters less on its own. Content value is shifting toward contribution — how clearly a piece of content can inform, support, or shape an answer.
The content that performs best will be:
Structurally clear, with sections that are easy to identify and extract.
Modular, so individual passages can be selected and reused independently.
Distinct, with clearly defined ideas that don’t overlap or compete internally.
Designed to be selected and used, not just indexed or ranked.
Content that meets these criteria is more likely to be surfaced, reused, and attributed as AI-mediated search continues to evolve.
For a long time, links were the primary signal of authority in search. If you wanted visibility, you built backlinks. If you wanted credibility, you earned placements. That still matters — but it’s no longer enough.
In AI-driven search, authority is shaped by how often your brand is mentioned, cited, and clearly associated with a topic. Visibility comes from being referenced in AI-generated answers.
With that shift in mind, the goal is to create content that earns consistent brand mentions and citations — the signals that now drive AEO visibility.
The philosophy driving content that fuels AEO growth
In 2026, authority in organic discovery incorporates entity recognition.
On both Google and LLMs like ChatGPT and AI Overviews, authority is reinforced through:
High-quality backlinks.
Brand mentions (linked or unlinked).
Consistent citations across trusted publications.
Clear entity associations (who you are, what you’re known for, and what topics you “own”).
Since LLMs synthesize information instead of ranking pages, you need repeatable, credible mentions across the web to strengthen your brand’s likelihood of being cited or referenced in AI answers. Importantly, you also need to use your owned media to define your brand entity very clearly.
That makes building authority even more critical. Your content will now be battling with even more competition in the form of AI results in the SERP and AI-produced content from other publishers.
The TL;DR is that you need to establish a clear brand and, underneath that brand, create content that’s so valuable that other experts, journalists, creators, and AI systems repeatedly reference your brand when they’re discussing a topic core to your business.
The principles and formatting of AEO-friendly content
You’ll use many of the same SEO principles as a base for AEO-friendly content. Content aligned with Google’s helpful content guidelines — focused on value and user experience — appeals to the people (and LLMs) discussing these concepts and sourcing experts to validate their positions.
That said, to produce truly AEO-friendly content, you need to incorporate formatting that supports LLM extraction.
Key formatting principles include:
Clear definitions: Have short, clean definitions high on the page:
“X is…”
“Y refers to…”
Structured formatting:
Use descriptive H2s and H3s.
Employ bullet points.
Keep paragraphs short.
Include direct answers under question-based headers.
Explicit context:
Avoid vague pronouns and implied references.
Remember that LLMs perform better when context is explicit and self-contained.
The specific objectives for your AEO content to address
If you’re solely focused on AEO, I’d approach your content with these objectives in mind:
Be highly citable: Include original data or perspectives a journalist or influencer would use in media like podcasts, expert roundups, contributor columns, or co-marketing content.
Be highly quotable: Provide at least one clean, quotable insight.
Be specific: Answer specific questions an AI system would try to answer. You can clearly articulate a question your content answers — and answer it verbatim with a section or paragraph in your content.
Be clear: Define a topic in an easily extracted manner.
To address these objectives, it can be helpful to think beyond blog posts and ideate “reference-grade” assets.
Practical steps to build AEO authority with content
Here’s how to turn those principles into a repeatable process for building AEO authority:
Research keywords where bloggers and journalists are searching for references (these keywords often include “statistics” or “reports”). Use Reddit, Quora, X, Ahrefs (Matching terms report), and Exploding Topics among your research sources.
From those keywords, build a list of topics around which your team has the expertise to share valuable insights and perspectives.
Research a list of writers and journalists who cover those topics.
Find expert resources (either internal or closely connected) and interview them to build a cache of content.
Refine and develop that content into contemporary insights using Google Trends and social listening, using timing and a list of audience modifiers to heighten relevance.
Example: Get a list of tips from an expert targeted to help hay fever sufferers (niche audience/modifier) get a better night’s sleep (core topic/target) during a particularly bad high pollen count period (relevance).
Pitch a group of writers and journalists who cover your theme and/or sub-theme on why this matters right now, and how it’s different from other content they might find to reference.
If (or even before) those writers and journalists link to your content, follow them on their social channels to deepen your connection for future opportunities.
Writing for AEO isn’t at odds with writing for humans. Even from its early days, AEO has shared many of the SEO fundamentals derived from appealing to actual users.
That said, there are enough differences with the way LLMs extract and digest content (and the way users ask LLMs for information) that you need to keep specific nuances in mind in your content approach.
With a clearly defined brand on your owned media, and an understanding of the tenets of AEO and how to address them, you should have a good idea of how to leverage your team’s expertise for greater visibility across the AI search landscape.
Since 2021, I’ve worked on more than 350 published guest posts. In that time, I’ve refined a repeatable guest posting outreach process that consistently drives approvals without ever paying for a placement.
Although guest blogging is becoming more difficult, the basics of personalized guest posting outreach remain the same. If your mindset is to create mutual value, this process will work for you in 2026 and beyond.
Step 1: Build your outreach list
Your outreach list is a collection of the websites you’ll email to offer guest-written content. You can build your list in several ways.
The easiest way to find potential websites is by googling your niche alongside “write for us.”
Plenty of reputable websites openly accept guest posts and have an established approval process you can find online. That’s the exact approach I used to publish an article on G2’s Learning Hub.
Alternatively, search the name of a prominent person in your niche and add keywords such as “guest post,” “guest author,” or similar. Chances are that if a website has published guest posts from someone in your industry, they’ll be receptive to accepting guest posts from you as well.
Browse your competitors’ backlink profile with an SEO tool. In Semrush, Backlinks is one of the SEO tools under Link Building.
To refine your list, verify which websites have previously published content from guest authors. If, however, all articles on a blog are written in-house and you’re not the Beyoncé of your industry, chances are your guest posting pitch will go unnoticed.
Once you’ve gathered a list of sites that potentially accept guest posts, run them by your website quality criteria.
Consider the website niche, top pages, organic traffic over time, countries where the traffic is coming from, authority score, and outgoing backlinks. You can also automate this step with the API of your favorite SEO tool.
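Here’s a sketch of that automation. Note the endpoint, parameters, and response fields below are placeholders rather than any real SEO tool’s API; substitute your vendor’s documented equivalents, and tune the thresholds to your own criteria:

```python
import requests

# Thresholds are illustrative; adjust to your own quality criteria.
QUALITY_BAR = {"organic_traffic": 5_000, "authority_score": 30}

def passes_quality_bar(domain: str, api_key: str) -> bool:
    """Vet a prospect against minimum traffic and authority thresholds."""
    resp = requests.get(
        "https://api.example-seo-tool.com/v1/domain-metrics",  # placeholder URL
        params={"domain": domain, "key": api_key},
        timeout=10,
    )
    metrics = resp.json()  # assumed shape: {"organic_traffic": ..., "authority_score": ...}
    return all(metrics.get(k, 0) >= v for k, v in QUALITY_BAR.items())

prospects = ["blog-a.com", "blog-b.com"]
shortlist = [d for d in prospects if passes_quality_bar(d, "YOUR_KEY")]
print(shortlist)
```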
Step 2: Find the right contact person
Even the best guest post outreach will fail if you’re writing to the wrong person.
Most people ignore emails that aren’t relevant to them, and they rarely forward them to the right colleague.
That’s why you need to do your homework. There’s likely a specific department or person you should be addressing.
Here’s how to find the right person through LinkedIn:
Open the company LinkedIn profile and select the People tab.
Type relevant keywords into the search bar to filter out profiles. You’re looking for a person who decides what content goes on the blog.
To do this, you can type “content” and browse the results for a content manager, content editor, or similar.
In smaller companies, you can search for “marketing” or “growth” to find who’s the one-person marketing team.
For micro companies, your best contact person might be the founder or co-founder.
Use Apollo or Hunter to find the work email of the best contact you find.
Sometimes, you’ll come across companies that have no listed employees on LinkedIn, or their emails are not available. In this case, your only option might be a generic email such as contact@ or support@. For micro companies or in certain niches (typically B2C websites), these emails can still work.
Verify all email addresses. Many outreach tools have built-in email verification features.
This step helps you protect your sender reputation and ensures your emails end up in the inbox, not the spam folder.
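If you’re curious what those verification features do under the hood, the cheapest layer is a DNS lookup. Here’s a minimal sketch using dnspython (a third-party library); it confirms only that the domain can receive mail at all, which is why dedicated verifiers add SMTP-level checks on top:

```python
import dns.exception
import dns.resolver  # pip install dnspython

def domain_accepts_mail(email: str) -> bool:
    """First-pass check: does the email's domain publish MX records?"""
    domain = email.rsplit("@", 1)[-1]
    try:
        answers = dns.resolver.resolve(domain, "MX")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return False
    return len(answers) > 0

print(domain_accepts_mail("editor@example.com"))
```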
Step 3: Choose your outreach approach
There are two distinct ways to approach guest posting outreach.
Send out a generic email template with basic personalization
Ask whether the website accepts guest-written content. This way, you don’t invest a lot of time upfront into every pitch and your only focus is on building an outreach list.
As the emails aren’t highly personalized (they usually just include the names of the person and the company), they generate a moderate reply rate.
To drive results with this approach, you need a large outreach list so you’ll still get enough opportunities to work with at a 3% to 5% reply rate.
Hyper-personalize your emails
The email you send to company A offers something completely different from the email you send to company B. It takes a lot of time to research and tailor your pitch, but it also enjoys a higher reply rate (around 19%, in my experience).
This approach works best when you have a small outreach list or when you’re pitching to prominent websites.
Step 4: Research the right topics
No matter your outreach approach, you usually need to pitch guest post topics. With basic personalization, you suggest topics only to the websites that reply to you. But with the hyper-personalized email approach, you propose topics in the first email you send.
Top-tier websites typically only accept specific types of guest articles. Find the website’s editorial guidelines by googling “[company name] + guest post” and see their requirements.
Let’s look at HubSpot as an example. They only publish marketing experiments, original data analyses, or super-detailed tactical guides.
Similarly, writing a guest article for Zapier’s blog requires specific experience. Generic topics won’t make the cut.
Buffer takes things a step further by opening rounds for guest posting under specific themes.
Following each website’s requirements increases your chances of landing a successful pitch. But most websites are open to a broader range of suggestions.
Some editors have a list of keywords or topics they want to target. They may share it with you so you can choose a topic to write on based on your expertise.
Alternatively, you can bring your own guest post ideas. When that’s the case, you can use a keyword gap analysis to uncover relevant topic ideas.
How to do a keyword gap analysis with Semrush
Let’s say you want to pitch a guest article to monday.com. Here’s how to go about it:
Go to Semrush’s SEO tools and select Keyword Gap. Add the URLs of Monday.com’s blog along with the blogs of leading competitor brands, then click on Compare.
Next, filter out the keywords.
Look only at keywords where competitors are ranking in the top 100 results.
Limit keyword search volume to a maximum of 2,000. This filters out broad, highly competitive terms that typically require long-form, comprehensive guides to rank.
In the keywords report, choose Missing to see keywords that competitors are ranking for but monday.com isn’t. This is their keyword gap.
Look deeper into individual keywords that seem interesting and match your expertise.
For example, “what is time boxing” has 49% keyword difficulty.
In the search bar, add the domain URL to get a personalized keyword difficulty calculation. The goal is to find keywords for which your article has real potential to rank.
After selecting “monday.com,” you see the site has low topical authority for “what is time boxing,” and ranking for it would be very hard.
Looking at “cost management in project management,” the Personal Keyword Difficulty is 60%. While that’s still high, there’s more to consider.
Check how your target domain compares against other websites ranking for this keyword.
Monday.com’s Authority Score (AS) is 67, while the average in the top 10 is AS 52. Despite this being a competitive keyword, with the right content, monday.com has real ranking potential.
Double-check the website isn’t targeting this keyword already. Sometimes, the website already has content on a similar topic — they’re just targeting a variation of your keyword.
To do this, use the “site:” search operator and add your keyword into Google search.
In this case, “task priority” came up in the keyword gap analysis. While monday.com doesn’t have an article with this keyword in the H1, it does have very similar content on how to create a priority list or prioritize tasks.
Select three to four keywords that would make sense for the website to target. This ensures that the website editors will have enough options to choose from. If you put all of your eggs into one topic idea, it might not land. But three or four ideas increase your chances of success.
Step 5: Define your extra value proposition
Adding extra value is about what else you can bring to the table besides guest content.
Are you an established author in the site’s niche?
Do you have a social media following that would be interested in this piece?
Are you running a relevant newsletter?
Or do you participate in a private community that cares about this topic?
Your extra value proposition is unique to your profile, and different value props can appeal to different websites.
For example, I have 11,000 followers on LinkedIn. When reaching out to a project management tool’s blog editor, I can mention that 54% of my followers are founders, executives, or senior-level professionals in small to mid-sized companies — the very people responsible for managing processes and tools within their organizations.
If I’m personalizing this pitch for a lead-generation blog, I can highlight that 35% of my audience works in the marketing or advertising industry.
Step 6: Prepare your emails
When it comes to your emails, you need to consider the subject line, the email body, and follow-ups.
First, the subject line:
Mention the website name (but not the person’s name).
Use title case (vs. sentence case).
On to the email body: Keep your emails concise and skimmable. Editors rarely have time for long messages.
Finally: follow-ups. Statistically, the more you follow up, the higher your overall campaign reply rate. Some people reply after the first follow-up, others after the third.
My recommendation? Limit follow-ups to two. A third one feels too pushy.
Step 7: Send your outreach emails
You’ve done a lot of preparation work. It’s finally time to send your emails. Here’s what to consider:
Send days
An analysis of 85,000 personalized emails showed the best day to send a cold email is Monday, closely followed by Tuesday and Wednesday. These are the days with the highest email open and reply rates.
Send times
The same study suggests you should send your emails between 6 and 9 a.m. PT (9 a.m. to 12 p.m. ET). But since most editors are based in different countries, aim to send your email before noon in their local time.
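Here’s a sketch of how you might operationalize “before noon in their local time” using only the standard library. The 9 a.m. target and the Monday-to-Wednesday window come from the study above; the timezone is an example:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_send_slot(recipient_tz: str, hour: int = 9) -> datetime:
    """Next Monday-Wednesday at `hour` in the recipient's local time."""
    now = datetime.now(ZoneInfo(recipient_tz))
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    while candidate <= now or candidate.weekday() not in (0, 1, 2):  # Mon-Wed
        candidate += timedelta(days=1)
    return candidate

print(next_send_slot("Europe/Berlin"))
```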
Unsubscribe option
Always give recipients a clear way to opt out of more emails. Without an unsubscribe option, recipients may mark your message as spam. This can damage your sender reputation and reduce future deliverability.
Step 8: Track and adjust
Most outreach tools allow you to track open, reply, and success rates. Let’s break down what each metric tells you.
Open rate is the percentage of recipients who open your email. Your subject line, preview text, sender name, and domain reputation directly influence this number.
Reply rate is the percentage of recipients who respond to your email. Exclude automatic replies (like out-of-office messages) to avoid inflated performance numbers. Your email body, topic relevance, and positioning drive this metric.
Success rate is the percentage of sent emails that result in a published guest post. Your topic selection, communication with the editor, and adherence to editorial guidelines are some of the aspects that influence success rates.
Track these metrics to identify weak points in your outreach campaigns.
After you establish a baseline, run controlled A/B tests. Send different versions of your campaign to similarly sized groups and compare performance. Change only one variable at a time so you can clearly measure its impact.
Test ideas such as:
Subject line with an emoji vs. without.
First email with an extra value proposition vs. without.
Three suggested topics vs. four.
One follow-up vs. two follow-ups.
Small improvements across different elements of your campaign can compound into measurable gains in success rate.
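Before declaring a winner, it’s worth checking that the difference between variants is bigger than chance. Here’s a minimal two-proportion z-test using only Python’s standard library; the reply counts in the example are invented:

```python
from math import erf, sqrt

def two_proportion_p_value(replies_a: int, sent_a: int,
                           replies_b: int, sent_b: int) -> float:
    """Two-sided p-value for 'variants A and B have different reply rates'."""
    p_pool = (replies_a + replies_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (replies_a / sent_a - replies_b / sent_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation

# Invented example: emoji subject line 24/300 replies vs. plain 12/300.
print(two_proportion_p_value(24, 300, 12, 300))  # ~0.04, under the usual 0.05 bar
```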
Step 9: Build relationships with editors
I mentioned I’ve worked on more than 350 guest articles. But that doesn’t mean they were all published on different websites. When you provide quality, you’re very likely to build lasting relationships that result in ongoing work.
That’s one reason I use keyword gap analysis to choose topics. I target keywords that the website has real potential to rank for. When an article brings meaningful traffic, it becomes much easier to pitch the next one.
To establish lasting relationships with editors:
Provide exceptional content: Structure the article around search intent. Create original value with custom visuals, expert quotes, and practical examples. Support the publisher’s internal linking by adding multiple links to other resources on their website. Ensure perfect grammar and spelling.
Support the article after publication: Promote it through your social media, newsletter, or community. When appropriate, link to it from other relevant content you write.
Be reliable and easy to work with: Communicate clearly, respect editorial guidelines, and meet every deadline.
My guest posting template with an 18% success rate
Below is the guest post outreach template that has delivered the strongest results in my campaigns.
Between 2023 and 2025, I sent more than 300 pitches using variations of this template, primarily to content managers at B2B SaaS companies in the marketing and HR niches. It generated a 19% reply rate, and 18% of sent emails resulted in a published guest post.
Subject: Fresh content ideas for [Company Name]
Hi [First Name],
My name is [Your Name], and I’m the [Your Job Title] at [Your Company], a [short company description].
I’m reaching out to see if [Company Name] is open to guest contributions. I have extensive experience in [your expertise area], having worked on projects for brands such as [Brand 1] and [Brand 2].
Here are a few topic ideas I’d love to propose:
keyword: [primary keyword 1], US search volume: [search volume]
[Proposed Article Title 1]
keyword: [primary keyword 2], US search volume: [search volume]
[Proposed Article Title 2]
keyword: [primary keyword 3], US search volume: [search volume]
[Proposed Article Title 3]
To learn more about my background, you can view my [LinkedIn profile link] or review articles I’ve written for [Publication 1], [Publication 2], and [Publication 3].
If the article is a fit and gets published, I’d be happy to promote it to my community of [audience description or size].
Your author profile directly influences your approval rate.
If you’re just starting out and don’t have a portfolio of published work, editors will hesitate to approve your topics. Start by reaching out to small or mid-sized industry blogs.
As you build your portfolio, pitching becomes easier. Publishing on recognized industry websites and creating content that drives measurable results strengthens your credibility and improves your success rate over time.
Bottom line: Invest in your author profile. That’s your biggest asset for successful guest blogging.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
About the Role WebFX is seeking an entry-level candidate for our Marketing team! Our ideal candidate has a bachelor’s degree (or will soon have one!), a track record of strong academics, and is excited about all things marketing and client relationship-building. Related experience is awesome to have, but never required – we’ll train you on […]
Position Summary: The Senior Content & Growth Strategy Manager plays a critical role at the intersection of Marketing, Digital Engagement, and Commercial Growth. This position is responsible for translating consumer intent, market dynamics, and brand objectives into a coherent content ecosystem that drives measurable business impact. Acting as a strategic bridge between marketing, product, and […]
Job Description Ushur delivers the world’s first Customer Experience Automation platform purpose-built for regulated industries. Designed to enable seamless self-service, Ushur infuses intelligence into digital experiences to create more delightful and impactful customer interactions. Backed by robust compliance-ready infrastructure and enterprise-grade guardrails, Ushur powers vertical AI Agents tailored for healthcare, financial services, and insurance. With […]
Job Description Digital Marketing Specialist Salt Lake City, UT | Hybrid | $70,000 / year + discretionary bonus About the Role We are a fast-growing company looking for a driven, well-rounded, full-time Digital Marketing Specialist to join our expanding team. This is an exciting opportunity for a self-starter who thrives in a dynamic environment, embraces […]
Job Description Healthcare is increasingly unaffordable for many Americans. For those who can afford it, they are in a health insurance system that has become more confusing, restrictive, and lower value with each passing year. Here at WeShare our mission is to bring better healthcare to America at a better price. We offer consumers a […]
Job Description Salary: 60K – 70K What You’ll Do This is a strategic mid-level digital marketing role focused on driving measurable growth through multi-faceted campaign management. You will own the full lifecycle of multi-platform self-service digital campaigns, from strategic planning and execution to optimization and performance analysis. This role requires a data-driven professional who can […]
Job Description Position: Digital Marketing Specialist Location: Schaumburg, IL Years of Experience: 3-5 Years About RTM: RTM Engineering Consultants is an MEP, Civil and Structural engineering firm that goes beyond the conventional consulting role. We forge deep partnerships with our clients by aligning with the goals, processes and people at each organization. By integrating our […]
Job Description Digital Marketing Specialist Location: Remote (United States) Employment Type: Full Time | Exempt Reports To: Director of Marketing Technologies Work Authorization: Must be authorized to work in the U.S. without sponsorship Role Summary The Digital Marketing Specialist supports the execution of digital content across Catalyst Acoustics Group’s portfolio of brands. This role will […]
About Us Would you like to be part of a fast-growing team that believes no one should have to succumb to viral-mediated cancers? Naveris, a commercial stage, precision oncology diagnostics company with facilities in Boston, MA and Durham, NC, is looking for a Digital Marketing Associate team member to help us advance our mission of […]
Position Summary: The Digital Marketing Specialist leads the evolution of digital marketing, early adoption and integration of AI-enabled marketing at Scot Forge, shaping how the company attracts, engages and converts customers and candidates in a rapidly evolving digital landscape. This person plays a key role in driving demand generation and is responsible for managing our […]
Description: Balance Health, a national leader in podiatry, is seeking a dynamic and analytical Paid Media Manager to orchestrate paid digital marketing. The Paid Media Manager will be responsible for driving new patient volume across paid channels from Google Ads (Search, Pmax, Demand Gen, etc.) to Meta Ads (Facebook, Instagram) to paid marketplaces (ex. ZocDoc) […]
Job Description Benefits: 401(k) Bonus based on performance Competitive salary Dental insurance Free food & snacks Health insurance Opportunity for advancement Performance Marketing Specialist Irvine, CA Working Capital Marketplace (WCMP) About WCMP Working Capital Marketplace is a fast-growing financial services company focused on helping small business owners access the capital they need to scale. We […]
Description: Paylocity is an award-winning provider of cloud-based HR and payroll software solutions, offering the most complete platform for the modern workforce. The company has become one of the fastest-growing HCM software providers worldwide by offering an intuitive, easy-to-use product suite that helps businesses automate and streamline HR and payroll processes, attract and retain talent, […]
Whizz is an innovative company offering rental, lease‑to‑own, and subscription models for electric bicycles for couriers and last‑mile delivery services. The company strives to increase access to mobility for all participants in delivery platforms, optimise transportation time and costs, and create reliable, eco‑friendly solutions for urban logistics. Whizz continues to expand its presence in key […]
Your Role in Helping Us Shape the Future U.S. News & World Report is a multifaceted digital media company dedicated to helping citizens, consumers, business leaders and policy officials make important decisions in their lives. We publish independent reporting, rankings, data journalism and advice that has earned the trust of our readers and users for […]
Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.
Google is fixing a long-running Search Console bug that inflated impression counts. As the fix rolls out, reported impressions will decrease.
What happened. A logging error caused Google Search Console to over-report impressions starting May 13, 2025. Google today updated its Data anomalies in Search Console page:
“A logging error is preventing Search Console from accurately reporting impressions from May 13, 2025 onward. This issue will be resolved over the next few weeks; as a result, you may notice a decrease in impressions in the Search Console Performance report. Clicks and other metrics were not affected by the error, and this issue affected data logging only.”
A Google spokesperson told Search Engine Land:
“We identified a reporting error in Search Console that temporarily led to an over-reporting of impressions from May 13, 2025 onward. Bug fixes are being implemented to ensure accurate reporting.”
What’s changing. Google is deploying fixes that will change how impressions are recorded and reported. As the rollout continues, you’ll likely see a drop in impressions in the Performance report. Clicks and other metrics aren’t affected.
The timeline. The issue began May 13, 2025, and persisted until now. Google said the correction will take several weeks to fully roll out across reporting.
Why we care. If your Google Search Console impressions change in the coming weeks, it will likely be due to this bug fix.
Customer journeys are collapsing into a single moment of evaluation. David Edelman recently described this shift as the convergence of behaviors that used to happen separately.
As decisions compress, brands need to be clearer about what they are trying to solve for the customer. Many organizations are increasing activity instead, without sharpening the underlying strategy.
His central insight is that generative AI has snapped these once-separate behaviors together so tightly that the old model — awareness, then consideration, then purchase, each in its own tidy lane — no longer describes reality. Consumers bounce between platforms, multitask, and shift fluidly between entertainment and intent.
The data point that stopped me cold: people are now asking AI-enabled search engines much longer, richer, more emotionally descriptive queries. Not keywords. Paragraphs. They share context, constraints, preferences, and urgency.
The AI then breaks those queries into multiple search streams and synthesizes results in real time. What once required dozens of browser tabs — hours of work — now takes seconds.
Edelman draws two implications from this.
The fundamental unit of competition has changed. Brands are now evaluated as solutions to specific situations, not as products within a category.
The familiar demand framework — create demand, capture demand, and convert demand — must be treated as simultaneous, not sequential. You can’t do them in order anymore because the journey doesn’t proceed in order.
Walt Kelly gave us Pogo, the philosophical possum of Okefenokee Swamp, whose most celebrated utterance was the 1970 Earth Day poster declaration: “We have met the enemy, and he is us.”
Kelly’s most persistent target was not any external villain, but the human tendency to mistake activity for progress. His characters were always busy — scheming, planning, campaigning, reorganizing — and almost never clear on why.
Another line often attributed to him captures it just as well: “Having lost sight of our objectives, we redoubled our efforts.”
Read Edelman’s argument through that lens, and the pattern becomes harder to ignore. He describes brands racing to keep up with compressed customer journeys — more content, more specificity, more “answer audits,” more presence across platforms and formats. The advice is sound.
But without clarity about what a brand is actually trying to solve for the customer, more content and more channels are just Pogo’s swamp creatures running faster through the same mud.
The compression trap: When speed substitutes for clarity
Edelman is right that the journey is compressing. But compression can serve two different masters.
For brands with crystal-clear positioning — brands that genuinely know what problem they solve and for whom — compression is a gift. It helps a consumer build confidence faster.
Warby Parker, which Edelman cites approvingly, is a clean example: its home try-on program, transparent pricing, and frictionless returns all express a single, coherent answer to a specific question: “Can I trust buying glasses without trying them in a store?” Every element of that brand experience is aimed at one objective.
For brands that lack that clarity — brands that have accumulated messaging layers over years of campaign-by-campaign marketing — compression is a disaster. The consumer’s AI-enabled query now synthesizes everything a brand has ever said across every channel, every format, every platform.
If those signals are inconsistent, contradictory, or simply incoherent, the synthesized answer will be a muddle. The consumer will move on. In Pogo’s swamp, the creature that runs fastest without knowing where it’s going simply reaches the wrong destination sooner.
Edelman gestures at this when he writes that brand should be understood as “the sum of signals that make a company recognizable as a solution.”
He’s right. But I’d push harder: the compression of the customer journey isn’t primarily a technological problem. It’s an objectives problem.
Most brands can’t clearly articulate, in a single sentence, what specific situation they are the best answer to. If you can’t say it plainly, AI certainly can’t infer it.
Pogo would recognize the funnel debate immediately
One of Edelman’s shrewder observations is that some of his clients have constructed a “false trade-off between brand and performance.”
Marketing departments argue over budget allocations between brand-building and demand generation as though they are fundamentally separate activities. This is, as Kelly’s characters would say, a very impressive argument that completely misses the point.
Kelly spent years satirizing exactly this kind of internal organizational warfare — committees forming to study committees, campaigns launched to counteract the confusion caused by previous campaigns.
Organizations are often earnest and busy, and just as often distracted by their own processes. The brand-versus-performance debate is the marketing equivalent of explaining why two teams can’t collaborate because their mandates are structured differently.
In a compressed journey, brand is performance.
The clarity of a brand’s positioning determines whether it surfaces as the right answer to a specific query.
The quality of its content determines whether it captures demand at the moment of confidence.
These are the same thing viewed from two angles.
The brands winning in Edelman’s compressed journey world — Nike, Glossier, IKEA, Warby Parker — don’t appear to be having this argument internally. They have simply decided what problem they solve and built everything around that answer.
Edelman recommends something he calls a “recurring answer audit”: examine what a consumer would actually encounter across social discovery, video search, retail listings, and AI assistants for their most common customer scenarios. Gaps and inconsistencies, he says, quickly become visible.
This is excellent advice. It’s also, if I’m being blunt in the spirit of Kelly, only half the medicine. An audit shows you where your signals are inconsistent. It doesn’t tell you what they should be consistent about.
You can audit your way to a perfectly coherent set of messages that still fail to answer any real consumer question, because the messages were never designed around actual consumer situations in the first place.
You need to audit your objectives. What, precisely, is your brand the solution to? Not the product category. Not the feature set. The actual situation.
The specific tension in a person’s life that this brand, and not a competitor, is best positioned to resolve. Until that question is answered with unambiguous clarity, the answer audit is tidying the swamp without draining it.
None of this is meant to diminish what Edelman has written. On the contrary, his framework for thinking about the compressed journey is the most coherent I’ve seen in years.
Three of his observations deserve to be tattooed somewhere visible on the forearms, wrists, hands, necks, and behind the ears of every marketing professional.
The first: the journey collapses into a moment of confidence. That’s not just a description of a media landscape. It’s a theory of consumer psychology. Confidence is the triggering condition for a purchase. If you’re optimizing for impressions without asking whether those impressions build confidence, then you’re very busy going nowhere.
Brands must shift from ‘product language’ to ‘solution language.’
This sounds simple and is, in practice, revolutionary. The default mode of most brand organizations is to lead with what they make.
Edelman says lead with the situation you resolve. That is a fundamental reorientation of how marketing is conceived and executed.
‘Are you the customer’s solution? Will they know it?’
Two questions. The first is a strategy question. The second is an execution question. Most marketing fails by answering the second question without having honestly answered the first.
Kelly’s Pogo ran for 25 years, and the swamp never did drain. The characters were charming, the satire was sharp, and the folly continued because the creatures were incapable of distinguishing between effort and progress. Kelly found that funny.
Marketing history, filled with elaborate, energetic, and expensive campaigns from brands that no longer exist, is less amusing.
Edelman has given us a useful map of the compressed customer journey. It’s fast, complex, AI-mediated, and it rewards clarity above all else. What he understates — though it runs beneath the surface of his argument — is that compression is also a reckoning.
Brands built on accumulated momentum, legacy awareness, and category inertia will find that a faster journey exposes their vagueness more brutally than a slower one ever did.
The compressed customer journey demands better thinking. And better thinking, as Pogo understood, begins with recognizing that the problem isn’t out there in the swamp. It’s in here — in the planning meeting, the brand brief, the objectives slide that everyone in the room suspects isn’t quite right, but no one challenges.
With apologies to Pogo, “We have met the enemy of the compressed customer journey. And it’s our inability to clearly say what we are actually for.”
For most of my three-decade career, the keyword drove paid search. Today, it’s one of many signals. Strategy is what determines performance.
Keywords were what you researched for weeks, then built your strategy around based on what you uncovered or hypothesized. You managed everything from bids to matched search terms to negatives and the audiences you targeted. Your career was built and measured by how well you structured around a keyword.
Paid media has always been deeply tactical, with Google driving the majority of search. You were methodical about placements, audiences, bids, headlines, extensions, and keyword-stuffed URLs.
This model worked. It gave practitioners the control they needed to get results.
You could see which search queries triggered ads and what they cost. If there was value, you expanded or doubled down. You might over-segment ad groups by theme or build campaigns around keyword audiences, then layer in modifiers and match types to drive 1200% ROAS.
What changed across platforms
Advertising has converged on a single structural shift: AI, or more precisely, automation built into the platforms. These systems now handle targeting, bids, and creative assembly that practitioners used to manage manually.
The keyword hasn’t disappeared. It’s moved from the primary optimization lever to one signal among many that platforms use to deliver ads based on user behavior and the auction.
On Google, AI Max for Search is the clearest example. It’s not a new campaign type. It’s an optimization layer, similar to Smart Bidding, that changes how keywords function inside a search campaign. Google’s AI uses your existing keywords, copy, and landing pages, including H1s and H2s, as signals rather than instructions to find and serve ads.
Google reports that advertisers using AI Max see 14% more conversions at a similar CPA or ROAS, with campaigns using exact and phrase match seeing lifts of up to 27%. Pair it with Performance Max across Search, Shopping, YouTube, Display, Discover, Gmail, and Maps, or Demand Gen for upper-funnel awareness, and the system expands further.
When I say strategy is the new keyword, I’m not speaking in abstractions. I’m saying there are specific inputs that now determine where your ads show up, who sees them, and whether they convert. These inputs have largely replaced the keyword list in paid media as the highest-leverage control.
The distinction matters. Strategy dictates the activity needed to achieve your goal and vision. Tactics are the execution. What’s shifted is that platforms now handle the tactics, and our job is to define the strategy that guides them.
Conversion data quality, including server-side tracking, has become the most important input in any account. Google’s Smart Bidding and other platform optimization systems depend on conversion or event signals to learn and improve.
You can prioritize which conversions matter more: a lead from a high-value market versus a newsletter sign-up, or a new customer versus a returning one. These distinctions used to be handled through keyword segmentation and bid modifiers. Now they’re handled through conversion values, assigned or determined at the point of conversion.
First-party data (customer lists, CRM data, website behavior, and offline imports) has become the equivalent of keyword research. The richer and cleaner the data you feed these systems, the better they perform. It’s less about search volume and more about understanding your own customer data, making sure it’s structured properly and connected to the platforms you advertise in.
Creative is a beast. It’s moving from a production deliverable to a strategic signal.
For Demand Gen, Display, and Meta, your creative, functionally speaking, is your targeting. Platforms read your images, video, and copy to determine who sees your ads. Google AI Max generates headline and description variations based on your landing page content, your H1s, H2s, and so on.
The strategic questions (what themes resonate with which segments, what visual approaches drive action at different funnel stages, and what messaging frameworks allow AI to generate variations) now carry the weight the keyword used to.
Landing page and website quality have become paid media inputs, not just a thing for UX or CRO. AI Max reads your page to determine what queries to match and which headlines to generate. Final URL expansion in AI Max and Performance Max sends users to the page AI deems most relevant. Poor post-click experiences, thin content, and slow load times can tie back to lower conversion rates.
All of this limits AI’s ability to serve your ads.
The most valuable work is no longer managing keyword lists or adjusting manual bids. I have strong opinions on that, but I’ll ask you: what else could you be doing with your time instead of manually adjusting bids for thousands of keywords?
It’s the strategic framework that AI systems operate within: ensuring data quality, defining creative strategy, building measurement into your teams, and knowing when the LLM is wrong and you, as an SME, need to adjust course.
The job of subject-matter experts is to guide the machines. That guidance takes the form of conversion architecture, audience signal quality, creative frameworks, and brand guardrails, rather than keyword lists and bid sheets.
This means investing time in understanding how:
These systems work.
Platforms learn.
LLMs prioritize.
It means weighing the signals we choose to emphasize, and their trade-offs. It means building robust first-party data, developing frameworks across audiences, creative, and UX, and feeding all of that into AI systems to enhance performance. It means accepting that the keyword era is giving way to something fundamentally different.
The practitioners who treat strategy as their primary lever, who invest their energy in architecture and design rather than lever-pulling, will be best positioned as this shift continues.
The keyword list isn’t gone. It’s no longer the center of the work. Strategy is.
Paid search is often the highest-leverage ecommerce growth channel, delivering strong conversion rates and efficient spend when structured effectively.
Google Shopping and Amazon Ads capture high-intent demand while generating the data needed to scale it. These platforms connect search queries directly to revenue, enabling you to identify which terms drive sales and allocate budget accordingly.
The real challenge is organizing campaigns to act on that signal.
Why paid search works so well for ecommerce
Paid search performs differently from other channels because it combines two advantages: intent and data.
Intent: Google and Amazon are search-driven environments. When someone searches for a product, they’re signaling exactly what they want. There’s no inference required, no audience modeling, and no interrupting someone mid-scroll. You’re providing the answer to a question the customer is already asking.
Data: Both Google Shopping and Amazon Ads provide keyword-level revenue data that most other advertising platforms can’t. You can see which search terms generated sales, at what conversion rate, and at what cost. Amazon goes further, offering clearer and more direct revenue visibility at the product and category level.
Together, these create a powerful feedback loop. Search terms tied to revenue let you shift spend toward higher-converting queries, improving ROAS over time. On Amazon, this loop extends further: stronger conversion rates can improve organic rankings, lowering future acquisition costs.
Success in search campaigns depends on building multi-funnel structures. The concept is consistent across platforms, but implementation varies by campaign types, settings, and bidding strategies.
The architectures outlined below use wide-net, low-cost discovery campaigns to map the full search landscape, then funnel high-intent, proven converters into dedicated performance campaigns with appropriate bids. The result: stronger ROAS, improved rankings, and more scalable growth.
The priority sculpting method is based on Martin Roettgerding’s approach, with adaptations over the years. It uses a three-layer campaign structure to route keywords into different campaigns based on performance.
This lets you control spend on discovery keywords and maximize investment in high-performing, high-intent terms. The key is Google Shopping priority settings — “high-priority” campaigns serve first at lower bids.
Layer 1: Brand
The goal is to capture branded search traffic.
This layer uses a Performance Max campaign and can also use standard Shopping.
It remains assetless to keep it focused on Shopping inventory and prevent bleed into Display and YouTube.
It’s set with a high ROAS target; PMax naturally gravitates toward brand traffic, especially when the target is high.
Alpha terms are negated in this campaign, since their strong ROAS would otherwise pull them into it.
Layer 2: Catch-all
The goal is to cast a wide net, test search terms cheaply, and generate conversion data.
This layer uses standard Shopping with a high-priority setting to catch non-branded traffic.
Bids are kept low to control costs.
Brand terms and alpha terms are negated using a negative list.
Over time, low-performing terms are also negated once they’ve been tested and failed.
Layer 3: Alpha
The goal is to dedicate budget to best-performing terms and generate strong ROAS.
This layer uses standard Shopping with a low-priority setting and high-ROAS bidding settings.
When converted terms (alpha terms) are negated in the catch-all campaign, those queries fall through to this campaign, where you bid aggressively on what’s already working.
The key considerations in this structure include the following:
Routing logic using negatives
The system relies on routing logic: Google’s priority settings determine which campaign serves a query first. Negative keywords in the catch-all push proven converters into the alpha, where bids are higher and budget is protected. At the same time, non-alpha terms run through high-priority campaigns at the lowest possible bids.
The method lives or dies on weekly search term negation. Two actions are done regularly (a minimal triage sketch follows this list):
Negate non-converting terms in the catch-all. A good rule of thumb: more than 20 clicks and zero conversions means a term has been tested and failed, and negating it frees up budget for other search terms. This still requires judgment before negating; if a keyword is highly relevant, you might want to let it run longer.
Negate converted terms (alphas) from the catch-all so they fall through to the alpha campaign. Over time, the alpha accumulates a curated list of proven terms bid on aggressively, while the catch-all keeps finding new ones cheaply. It’s a compounding system.
Shared budgets
Shared budgets are critical. Layers 2 and 3 should work on a shared budget.
The system works only if they run together, because each query needs to be sculpted through the system. It won’t work with separate budgets: if the high-priority catch-all runs out of budget, the alpha campaign becomes the first point of contact, and queries that aren’t alphas would serve there at higher bids.
SKU separation
The system is designed to run across a unique set of SKUs, and all three layers should target the same set. Start with all SKUs and build out from there.
Products that get buried in the main campaigns or operate at a different margin tier can be peeled off into their own mirrored catch-all/alpha pair, ring-fencing their budget. Only do this when there’s a clear reason. More campaigns mean more overhead and more fragmented data.
Feed quality
It’s important to optimize the feed. Google relies heavily on product titles to understand what a product is and which queries to serve it for.
Amazon’s campaign structure is more granular than Google Ads’ and offers several advantages.
Amazon typically delivers higher conversion rates and more conversion data. Ad spend also drives both conversion rates and rankings, with a clear, measurable link between ad spend and organic ranking.
Ads drive traffic, traffic drives conversions, and conversion rate drives organic rank. That makes Amazon Ads an investment in organic search.
Google Ads campaigns run across the whole catalog. On Amazon, you build campaigns at the SKU level, typically one SKU per campaign.
The structure uses three campaign tiers: research, performance, and ranking. Each has a distinct goal and is managed by adjusting advertising cost of sale (ACOS) targets to reflect different profitability goals.
Tier 1: Research
Campaigns use broad and phrase match keywords, along with automatic targeting.
The goal is to cast a wide net and generate keyword ideas and variations.
ACOS tolerance is relatively high, since the goal is data, not profit.
Tier 2: Performance
Campaigns use exact match keyword targeting.
The goal is profit, with a competitive ACOS target below break-even.
Move proven converters from the research tier into exact match campaigns. Run your best keywords at efficient bids to maximize returns on what’s already working. This mirrors the alpha campaign in Google Ads.
Tier 3: Ranking or exposure
Use single-keyword campaigns (SKCs) with exact match: one keyword per campaign.
The goal is usually ranking, though it can shift over time.
For ranking, set aggressive bids with high ACOS tolerance (often 50%+). Push volume through high-value keywords to drive top organic positions. Once you reach positions 1–3 organically, pause those keywords.
Ranking campaigns are debated. If you’re already ranking, there’s no need to pay for visibility you get for free.
This layer doesn’t exist in Google Ads, where ad spend doesn’t influence rankings.
With Amazon Ads, we bid toward an ACOS target. ACOS is the advertising spend as a percentage of revenue. Because Amazon data is so clean and conversion rates are high, we can calculate our bids to drive a certain ACOS.
The ACOS-based bidding formula:
Target bid = (Revenue per click) x Target ACOS
Implementing ACOS bidding can be automated using software like Scale Insights. Different campaign tiers can be assigned different ACOS targets, and CPCs can be adjusted daily by the software.
Keyword routing
Similar to Google Ads, keywords are funneled through from research campaigns into performance or alpha campaigns. This can be done manually or automatically with Scale Insights using an import rule.
The concept is very similar in that keywords that shine get imported down the funnel, while non-performing keywords are phased out through testing.
The conversion rate signal
If a product’s conversion rate is below the market average on a given keyword, more spend is unlikely to improve its rank. Amazon usually surfaces the better-converting product.
The correct response is to fix the underlying issue: price, listing quality, imagery, or the product itself. Most advertisers skip this step and keep spending into a hole.
The ranking cannibalization rule
There are two strong views on ranking and cannibalization. Some argue that once your product ranks highly for a keyword on Amazon, you should reduce or stop ad spend. If you’re ranking organically, you can save on ads.
On the other hand, if a keyword performs well with strong ROAS, having two listings can outperform one. It increases your chances of a click. Ads also typically appear above organic listings, giving you higher placement.
Whichever view you take, the three-tier method lets you drive rankings through SKCs, then reduce or stop ad spend once you rank, if you choose.
How Google Shopping and Amazon Ads compare for ecommerce
The underlying logic for advanced campaign setup is the same across Google Shopping and Amazon Ads, with key differences beyond the core structure.
Similarities
Google Shopping (priority sculpting):
– Route queries to campaigns via priority settings and negatives.
– Discover converting terms in a catch-all campaign at low cost.
– Graduate proven terms to the alpha campaign with a high tROAS.
– Review search terms regularly for negatives and new alphas.
Amazon Ads (multi-tier architecture):
– Route keywords across research → ranking → performance tiers.
– Discover new keywords in broad, phrase, and auto campaigns.
– Graduate proven terms to exact match for profitability.
– Review search terms regularly for negatives and imports to the lower funnel.

Differences
Google Shopping (priority sculpting):
– Campaigns run across the whole feed; high-margin products can be separated for ring-fenced budgets.
– ROAS-based bidding.
– The product feed determines search term targeting; the advertiser can’t select terms directly.
Amazon Ads (multi-tier architecture):
– Campaigns are built at the SKU level rather than across the whole catalog.
– ACOS-based bidding.
– Search terms are selected by the advertiser.
– Ads drive rankings, and you can save budget by monitoring organic positions.
Which platform is right for your ecommerce strategy
Like all good answers, it depends heavily on your business and your goals. Both have advantages and disadvantages. We can say that:
Amazon Ads often perform better, delivering higher conversion rates and faster ranking and sales when intent is strong.
Google Ads is better for long-term brand building. It offers broader reach, potentially lower costs, and drives traffic to your own website, where you retain customer data.
The ideal is to run them together. Many brands launch on Amazon, then expand to their own site, where Google Ads takes over.
Paid search for ecommerce is probably the most effective advertising avenue you can explore, and both platforms offer significant opportunities when implemented properly. Each has pros and cons; dig into the details of these campaign structures and decide on the right implementation for your business.
It used to be that Google searches opened up a world of questions. You searched, sifted through links, and came to your own conclusion.
Today, AI Overviews, ChatGPT, Perplexity, and other AI platforms compress multiple sources into a single, synthesized response. In the process, nuance is flattened, and certain viewpoints can be overrepresented.
This marks a fundamental shift in online reputation management. Search engines now shape the information they surface. The result is a rise in zero-click behavior, where users accept AI-generated answers without visiting underlying sources.
For brands, that changes the stakes. Visibility no longer guarantees influence. Even a No. 1 ranking can be bypassed if the narrative tells a different story.
AI narrative formation: How AI systems deliver users their answers
AI search engines now follow a new pattern for delivering answers. For the sake of this article, we’ll call it AI narrative formation. Here’s how it works.
Source pooling
AI systems pull from a wide range of sources. While you might expect trusted, peer-reviewed content, they often draw from Reddit, YouTube, review platforms, complaint forums, and social media sites like Instagram and TikTok.
Signal weighting
Not all sources carry equal weight. A single trusted source can be outweighed by a large volume of lower-quality content. For example, a highly active Reddit thread filled with negative reviews may outperform a fact-checked source like Wikipedia.
Narrative compression
AI condenses dozens of inputs into a short, digestible summary. In the process, nuance is lost, and fringe cases can become dominant themes. A complex reputation may be reduced to: “Users say this company is not trustworthy.”
Continued reinforcement
These summaries don’t stay contained. They’re screenshotted, shared, and repeated across platforms. Those repetitions become new inputs, reinforcing the same narrative in future AI outputs.
How a finance company’s solid reputation unraveled in AI search
To see how AI narrative formation works in action, let’s look at a use case.
My company recently worked with a finance organization to repair its online reputation. For this example, we’ll call it Company X.
Problems emerged for Company X with the rise of Google AI Overview. Previously, under traditional SERPs, Company X had a solid reputation. Users searching Google for reviews would find a 4.2 rating on Trustpilot, a strong company website with employee bios, and numerous positive blog reviews from trusted sources.
Google AI Overview changed that. How? By resurfacing an old Reddit forum centered on negative complaints about Company X.
When users asked Google, “What are opinions like about Company X?” AI Overview delivered a clear answer: “Company X has mixed reviews, with specific complaints regarding customer service.” But those customer service issues were resolved nearly a decade ago.
AI Overview pulled multiple reviews from that Reddit thread, combined them with strong negative phrasing, and factored in the lack of structured positive content to form a semi-negative impression. A new perception of Company X was created.
We can dig deeper into how AI impacts reputational risk. Consider the following:
How negative AI narratives spread: In traditional search, users had to dig for negative results. With LLMs, those results can surface instantly, even when they’re defamatory or incorrect.
Hallucinations and misinformation: Most users are now aware of AI hallucinations, but they aren’t always easy to spot. Making matters worse, LLMs can present incorrect claims or factual inconsistencies with confidence.
The snowball effect: As discussed in narrative reinforcement, AI-generated answers get screenshotted, shared, and repeated across platforms. That repetition builds momentum, creating challenges ORM firms now have to manage.
A hard truth has emerged in ORM: The most accurate claim doesn’t rise to the top. The most repeated claim does.
A step-by-step guide to auditing AI-generated narrative formation
Let’s walk through another case to see how an AI-generated narrative can be audited.
CEO X is the founder of a SaaS company. He has an ongoing thought leadership presence and a strong reputation in his industry.
On a recent podcast appearance, one quote was taken out of context and aggregated across several platforms. The quote was framed as an opinion rather than a fact. Blog posts were written, and Instagram Live reactions spread online.
In no time, ChatGPT and Google AI Overview turned CEO X into a controversial figure.
Here’s a step-by-step guide to approaching that reputation management crisis.
Step 1: Mapping queries
We begin by identifying what search engines are saying about CEO X. We ask ChatGPT and Google AI Overview questions such as “What did CEO X say?” and “What is CEO X’s current reputation?” This helps us analyze the issues.
Step 2: Capturing outputs
We identify the claims associated with CEO X. Google AI Overview and ChatGPT describe CEO X as a controversial figure who recently made comments in poor taste. The narrative formed across both platforms is trending negative.
Step 3: Delving through sources
Next, we analyze the sources AI Overviews and ChatGPT rely on. We look for whether they’re outdated, repetitive, or low quality. (All three applied in the earlier Company X example.)
Step 4: Analyzing the narrative gap
We identify the gap between AI’s narrative and reality.
What are CEO X’s actual views?
What was the context of the quote?
And what had his reputation been up to this point?
Step 5: Correcting and replacing sources
The final step is to replace or respond to those negative sources. Claims can be addressed directly on Reddit, Instagram, or other platforms spreading the narrative. Structured explanations should also be published through FAQs and policy pages, while third-party validation is strengthened.
Focusing solely on SEO rankings is no longer enough. We need to think in terms of narrative shifts and framing. That also means thinking in terms of inputs and outputs.
Users aren’t evaluating individual pages. They’re engaging with AI-generated answers. Rather than managing what users find, we need to manage the answers AI systems deliver. That means strengthening what those systems rely on:
Publishing high-quality first-party content.
Earning credible third-party mentions.
Reinforcing positive customer reviews.
Addressing misinformation directly.
Improving structured data (see the sketch after this list).
Maintaining accurate Wikipedia or Wikidata entries where applicable.
The new ChatGPT ad format is standardizing, according to a new Adthena analysis of 40,000+ daily placements. What once felt experimental is becoming a disciplined, high-intent system for users already deep in decision mode.
The big picture. ChatGPT ads are converging on a short, structured, highly contextual style that favors precision over persuasion and utility over storytelling, marking a shift from creative-led advertising to real-time, intent-driven assistance.
By the numbers. Every word must carry weight and contribute directly to clarity or conversion:
The average headline clocks in at just 30 characters and around 5 words.
Body copy averages 116 characters and roughly 19 words.
What’s working. The dominant pattern is a “Brand: Benefit” headline, separating the name from a specific value. It works because users in conversational environments expect immediate clarity, not intrigue or ambiguity.
Almost every ad leads with the brand name. You need easy recall in a setting where users are already evaluating options, not discovering them.
Headlines are compressed, often reading like functional labels rather than slogans. That brevity carries into the body copy, which typically runs two tight sentences: a proof point followed by an offer or nudge. You’re not trying to win an argument; you’re giving one compelling reason to act.
Context mirroring is a defining feature. The strongest ads directly reflect the user’s query or situation, signaling real-time tailoring. This marks a new level of AI-native targeting that goes beyond keyword matching into conversational relevance.
Concrete value signals carry outsized weight. Dollar signs and specific numbers — prices, savings, performance — consistently outperform vague claims. Numbers dominate body copy because they feel credible and native in a setting where you’re actively researching and comparing options.
Offers. Low-friction offers — especially “free” trials or demos — are the most common conversion lever, reducing commitment barriers while users are exploring.
Calls to action. These are explicit and action-oriented, favoring direct phrases like “Shop now,” “Compare,” or “Book” while abandoning generic prompts like “Learn more.”
The overall tone. Calm, confident, and measured, with minimal exclamation points or question marks. It aligns more with helpful guidance than ad hype, helping ads blend into the conversational flow rather than disrupt it.
Why we care. ChatGPT ads reach users at high intent, where clarity and relevance matter more than creativity or storytelling. In a conversational environment, ads compete with useful answers, so vague or overly branded messages get ignored while precise, value-driven copy performs better. This shift rewards short, structured messaging and gives early adopters an advantage as the format standardizes.
Between the lines. While ChatGPT ads share DNA with paid search — especially in their focus on intent and relevance — they differ by integrating into dialogue, responding to high-intent users, and delivering messaging that feels assistive rather than interruptive.
The takeaway. Success in ChatGPT advertising depends on precision, relevance, and credibility over creativity, emotional appeal, or brand-led storytelling. The winning strategy: fit in perfectly when a user needs a clear, trustworthy answer.
The analysis. Adthena CMO Ashley Fletcher shared the data on LinkedIn.
There’s a flood coming. A downpour of noise — more content, more channels, more AI-generated everything, moving faster than most teams can keep up with. Somewhere in that volume, your customers are quietly drowning — overwhelmed, underserved, and one bad experience away from choosing someone else.
You’ve probably felt it on your team, too. Another tool. Another sprint. Another quarter of doing more with less. The productivity metrics look fine from the outside. But inside, people are running on empty.
There’s an old story about a man named Noah who, facing catastrophic disruption, didn’t freeze or panic. He didn’t look for shortcuts or try to outswim the storm. He built — with intention, with a clear design, and with people he trusted. When the waters rose, the ark held.
The brands that lead don’t adopt the most technology the fastest. They build with intention — designing systems and experiences that protect people.
What follows is the case for building your ark — and a practical framework to do it.
AI power users report that it makes their overwhelming workload more manageable (92%), boosts creativity (92%), and helps them focus on their most important work (93%), per Microsoft and LinkedIn’s Work Trend Index.
Yet, 60% of leaders say their company lacks a concrete AI vision or plan — meaning the very tool that could relieve team burnout is sitting underutilized.
That gap shows up in real ways.
For customers, it creates friction — too many choices, unclear navigation, and messaging that misses where they are. They arrive with a question and leave with more confusion. They don’t feel seen or helped.
For marketing teams, the impact is quieter but just as serious:
Decision fatigue disguised as strategy.
Tool overload framed as innovation.
Burnout that looks like productivity — until it doesn’t.
Fragmented workflows that drain energy faster than they produce results.
Brands that recognize these human issues move faster, retain stronger talent, build deeper customer loyalty, and drive better business outcomes. Enter what I call the wellness sweet spot.
The wellness sweet spot is the moment where AI, empathy, and human-first design converge — creating conditions where both your customers and your team can think clearly, act confidently, and trust the experience they’re in.
It’s an architectural decision about how your entire marketing ecosystem is designed to make people feel. When its three pillars are genuinely working together, four things become true simultaneously:
AI reduces waste and cognitive load in the experience — making things simpler.
Emotional friction is intentionally minimized at every touchpoint.
Marketing teams operate from a foundation of wellness (and well-being).
Systems and workflows support human thriving, not just throughput.
When these conditions are in place, something shifts. AI stops feeling like a disruption and starts working as a stabilizing layer — supporting, protecting, and quietly holding the system together. It manages the overwhelm. The ark keeps floating.
Most marketing leaders still think about AI in terms of what it does — automate, generate, optimize, analyze. Those outcomes matter, but they don’t tell the full story. The more consequential question is how AI makes people feel while it’s doing those things.
For customers, AI used well is a guide that:
Summarizes complexity without dumbing it down.
Narrows choices in ways that feel helpful rather than manipulative.
Anticipates what someone needs next and removes ambiguity from decision paths.
Saves time — which is, in a very real sense, saving emotional energy.
For teams, thoughtfully deployed AI absorbs the work that depletes people most: the repetitive, the reactive, and the administrative. It creates space for what human brains do best: strategy, creativity, relationship-building, and nuanced judgment.
When you build your marketing systems around it, the output quality goes up because the people producing it aren’t running on fumes.
This is empathy at scale. Not the kind that lives in a tagline, but the kind that’s baked into how your systems are structured and how your content is designed to reach people.
The new emotional metrics: What to measure when you start caring about feelings
This is where things get practical and start to move ahead of the curve. Most marketing dashboards show what happened — click-through rates, conversion rates, and time on page. Those metrics matter, but they don’t explain why someone left or how they felt along the way.
Emotional metrics help fill that gap by focusing on the conditions under which decisions are made. Research in psychology and neuroscience shows that people make better decisions, build stronger brand relationships, and become more loyal when they feel clear, confident, and calm.
Here’s how traditional metrics map to emotional KPIs:
Time on page → Clarity index: how quickly someone finds what they need, without confusion.
Conversion rate → Decision effort score: the cognitive load required to complete an action.
Engagement rate → Customer calm markers: behavioral signals of confidence rather than stress (qualified attention).
Team output volume → Wellness throughput: strategic output produced with reduced burnout.
These are upstream indicators that help explain downstream performance. A low clarity index often shows up as stalled conversion rates. A high decision effort score can lead to rising cart abandonment. Declining wellness throughput tends to result in average output from top strategists.
Brands that start tracking these now gain an advantage over those that wait to react.
5 steps to design toward your wellness sweet spot
A caution before the roadmap: more speed and scale applied to a broken system will not fix it. It will amplify everything that’s wrong with it. These five steps are meant to be done before you push harder on AI adoption.
Step 1: Run an empathy audit
Where are customers confused? Hesitating? Leaving? Map these moments using behavioral data combined with qualitative insight — customer interviews, session recordings, support tickets, search data. Focus less on what people clicked and more on where they felt lost.
Step 2: Simplify for cognitive ease
Fewer choices. Plain language. Cleaner navigation. Every step you remove from a decision path is a small act of respect for your customer’s mental energy. This is generous. It’s designing with intelligence.
Step 3: Use AI as a shepherd
Deploy AI to enhance orientation, clarity, and confidence. Don’t push aggressive automation or manufacture a sense of urgency. AI should make customers feel helped, not herded. There’s a difference, and your audience feels it.
Step 4: Rebuild team workflows around energy
Audit where your team’s cognitive energy actually goes each week. Identify the work that is routine, reactive, or repetitive — and build AI into those gaps first. Protect the hours that require human judgment, creativity, and relationship-building. Those are the hours that drive real growth.
Step 5: Measure the feels
Begin tracking emotional outcomes alongside performance metrics. Start simple: add a one-question post-interaction survey.
Review search data for confusion signals. For example, growing volume for “how do I” or “why can’t I” phrases on your own site may indicate your content isn’t answering questions before they’re asked.
Monitor support ticket themes for friction patterns. A perfect measurement system isn’t required to start. The intention to look is.
The future belongs to emotionally intelligent brands
In a market where nearly every brand claims to be customer-centric and frictionless, the real differentiator comes down to how people feel and whether systems consistently deliver on that promise.
Leading organizations don’t rely on bigger AI budgets. They align technology with clear intent, prioritize well-timed, empathy-led content over volume, treat customer well-being as part of the brand promise, and protect their teams’ energy as rigorously as performance.
Creating value starts with protecting the people who create it. Noah didn’t survive the flood by ignoring it or fearing it. He paid attention, took action, and built with intention — something designed to carry what mattered most: his people, his purpose, his peace, and his future. That’s the kind of leadership this moment calls for.
You don’t have to figure this out alone. The tools are here. The framework is yours. The decision is whether to build before the pressure hits or react once it’s already underway.
You’ve done everything right. You have a fast website with comprehensive content, pages ranking in the top 10, and a strong backlink profile. Yet when you search the query you rank for, your site doesn’t appear in Google’s corresponding AI Overview.
This is a retrieval problem, not a ranking issue. And the difference between the two is the most important shift SEOs need to understand right now.
AI Overviews don’t work like traditional organic rankings. Instead of considering which page has the most signals, AI Overviews look for the page that gives the cleanest, most usable answer.
If your content doesn’t meet that standard, your traditional search ranking is irrelevant. Here’s what’s going wrong, and how to fix it so your content appears in more AI Overviews.
The ranking-citation gap is real — and growing
The overlap between AI Overview citations and organic rankings grew from 32.3% to 54.5% between May 2024 and September 2025, according to a BrightEdge study.
This trend sounds encouraging. But it also means that even at peak convergence, nearly half of all AI Overview citations come from pages that don’t rank at the top of organic results. Google actively bypasses higher-ranking pages when it finds content that better serves the AI Overview format.
The pattern varies sharply by sector, though. BrightEdge data shows that in ecommerce, the overlap barely changed, remaining essentially flat over the entire 16-month period. And in Your Money or Your Life (YMYL) categories like healthcare, insurance, and education, the overlap between AI Overview citations and organic rankings ranges from 68% to 75%.
Ranking and visibility are no longer the same thing. You can rank second and be invisible. Or, you can rank on the second page and be the first thing a searcher reads.
1. Your content answers the wrong version of the question
Informational queries — specifically long-tail and conversational searches — typically trigger AI Overviews, accounting for 57% of them, while commercial queries trigger this AI feature far less frequently, according to Semrush research.
Google’s AI engine looks for content that matches what the user asks, not just the keyword you’ve targeted. So, an AI Overview answering the query “what’s the best way to manage a remote team’s workload?” probably won’t cite a page that ranks for the keyword “project management software” and leads with features and pricing.
2. You’ve buried the answer
If your introduction spends three paragraphs establishing context, warming up the reader, or restating the question before answering it, the retrieval system moves on. It seeks information it can extract cleanly. If that answer isn’t near the top of the page, the system skips that page.
3. Your structure is opaque to AI systems
Traditional SEO content is built around comprehensive long-form content: 3,000-word guides covering every angle of a topic, written for readers who scroll and skim.
AI retrieval systems don’t work the same way. They need to identify discrete, self-contained answers within your content.
That requires clear heading hierarchies, short paragraphs, and content that AI systems can extract. A section under a specific heading should completely answer the question posed in that heading, without requiring the surrounding context to make sense.
Content written as one long, unbroken narrative is harder for AI systems to parse. Even if every word is accurate and authoritative, it may not earn a citation if the structure doesn’t help the retrieval system identify individual answer units.
4. Your E-E-A-T signals aren’t visible at the content level
Google has been clear that experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) signals are important for content quality in traditional search. They likely matter for AI Overviews, too. But these signals need to appear in the content itself, not just in your domain profile or link graph.
Strong domain authority counts for less than you’d think if the content itself carries no credibility signals.
Who wrote it?
Where did the data come from?
Is there anything here that couldn’t have been written by someone who’d never worked in this field?
A retrieval system evaluating an individual page doesn’t know your domain’s track record. The page must make the case for itself.
Content-level E-E-A-T signals are particularly important in YMYL categories, where AI Overviews are selective about sources because the risk of misinformation is higher.
5. You’re targeting queries that don’t trigger AI Overviews
Before optimizing your content for AI engines, it’s worth checking whether your target queries trigger AI Overviews at all. As of late 2025, AI Overviews appear in 16% of search results, though that figure isn’t evenly distributed across query types.
Transactional queries, navigational searches, branded queries, and highly local searches are far less likely to trigger an AI Overview. If most of your traffic comes from commercial or transactional keywords, the lack of AI Overview citation may not be a content problem. It may simply be that those query types are less likely to generate overviews in the first place.
What the data tells us about the impact of this shift
The stakes are significant. Research by Seer Interactive shows that organic click-through rates (CTRs) for informational queries that displayed AI Overviews dropped 61%, from 1.76% to 0.61%, between June 2024 and September 2025. Paid CTR fell even further, from 19.7% to 6.34%.
But the same research reveals a critical asymmetry: Brands cited in AI Overviews saw 35% higher organic CTR and 91% higher paid CTR than when they weren’t cited. A citation in an AI Overview doesn’t just protect you from a CTR decline. It actively amplifies your visibility.
The Pew Research Center’s study of searches by U.S. adults in March 2025 found that only 8% of users who encountered an AI Overview clicked a traditional search result, compared to 15% who clicked when no overview appeared. And 26% of searches with AI Overviews resulted in no clicks at all.
If AI Overviews appear for your most valuable queries and you aren’t cited, you aren’t just missing out on the overview. You’re losing clicks you previously received from the organic listing underneath it.
How to optimize for retrieval, not just rankings
These trends require you to adjust how you think about content structure and intent. Here’s where to focus:
Rewrite your introductions: Your first paragraph should directly and completely answer the primary question of the page. Save context and elaboration for later sections. Write as if the first 100 words of your page represent a standalone answer.
Restructure your headings: Each heading should be a question or a complete, specific claim. The following section should fully answer or support that heading without requiring the reader to review previous sections. Think of each section as a self-contained answer unit (see the sketch after this list).
Add explicit expertise signals: Include author attribution with credentials, first-person experience language, original data, and links to primary sources and original research. These signals matter at the content level, not just at the domain level.
Audit your query triggers: Manually test your target queries in Google to see which ones actually generate AI Overviews. For those that do, study how the cited sources are structured, the length of the cited sections, and the format of the answer. Use that as your editorial brief.
Expand your topical coverage: AI Overviews favor sources that demonstrate breadth of knowledge across a topic, not just single-page depth. Focus on answering several related questions well instead of building one exceptional page surrounded by thin content.
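A rough way to pressure-test the first two items at scale is to script the audit. The sketch below uses requests and BeautifulSoup; the 100-word intro threshold and 40-word section floor are illustrative assumptions of mine, not Google guidance.

```python
# Rough structural audit: flag pages whose intro runs long before the first
# heading, and heading sections too thin to be self-contained answer units.
# Thresholds are illustrative assumptions, not Google guidance.
import requests
from bs4 import BeautifulSoup

MAX_INTRO_WORDS = 100  # assumption: the answer should land in ~first 100 words

def audit_page(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Words before the first h2: a proxy for "how buried is the answer?"
    intro_words = 0
    for el in soup.find_all(["p", "h2"]):
        if el.name == "h2":
            break
        intro_words += len(el.get_text().split())

    # Count the words sitting under each heading (assumes flat sibling HTML).
    sections = []
    for h in soup.find_all(["h2", "h3"]):
        body = []
        for sib in h.find_next_siblings():
            if sib.name in ("h2", "h3"):
                break
            body.append(sib.get_text(" ", strip=True))
        sections.append((h.get_text(strip=True), len(" ".join(body).split())))

    return {
        "url": url,
        "intro_too_long": intro_words > MAX_INTRO_WORDS,
        "thin_sections": [title for title, words in sections if words < 40],
    }

print(audit_page("https://example.com/guide"))
```

Run it across a content inventory and sort by the flags to find the pages worth restructuring first.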
AI Overviews represent something that's been discussed for years but that few have truly prepared for: the separation of content quality from ranking signals.
For two decades, we used rankings as a proxy for quality. High-ranking content was, by definition, good enough.
But that assumption no longer holds. Ranking in traditional search indicates that your brand has authority and that your page is relevant to the search query. It says nothing about whether your content is structured in a way that AI retrieval systems can use.
Visibility now goes to whoever understands how AI systems identify, extract, and surface answers. A strong backlink profile won’t help you if the answer is buried on page three of a 4,000-word guide.
Ranking in the top 10 is still worth pursuing. But it’s no longer the whole game.
Your paid social operation is on fire. You know how your audience thinks, the creative process is dialed in, and the results get better every year. Leadership greenlights an expansion to Google Ads — a new channel and, critically, a new source of revenue.
As it turns out, applying that same strategy really just buys you an express ticket to a very difficult conversation.
Google rewards a different kind of thinking. Intent signals and campaign logic are different, and the mistakes that eat at your budget don’t always make themselves clear. Brands that apply their existing Meta playbook often find themselves looking at shiny dashboards and dull balance sheets.
These six common mistakes tend to do the most damage before anyone realizes what’s happening. They’re what we see most often when ecommerce brands come to us after making the move to Google — and they can all be reversed.
Mistake 1: Treating Google like a retention channel
You can definitely use Google Ads to support retention and brand defense. The problem is when that becomes your whole strategy.
We see this regularly with brands new to the platform who launch directly into Performance Max. Early ROAS looks strong, and everyone’s happy. But a few months in, someone asks the right question: Are we actually growing, or paying to capture purchases that were going to happen anyway?
One client came to us with branded search and retargeting doing the heavy lifting inside PMax: essentially a tax on demand that had already been created elsewhere. Revenue flatlined because, while the ad spend was real, growth was not.
Net-new customer acquisition requires a different setup.
Shopping campaigns structured to surface products to people who have never heard of the brand.
Search campaigns built around non-branded, high-intent keywords.
Layered PMax configurations that keep the system from defaulting to the easiest conversions.
Given Google's enormous reach into new audiences, treating it purely as a closing channel leaves most of that opportunity untouched.
Mistake 2: Not knowing how to get the most out of Google’s core levers
Paid social experience transfers to Google in some ways, but there are four areas where we see the biggest knowledge gaps.
Search intent
Ads on social media interrupt a feed. Ads in search engines meet people as they're looking for something you offer. That difference shapes campaign structure, ad copy, and keyword targeting.
Upper-funnel terms and lower-funnel terms require different approaches, bids, and landing pages. Collapsing them into a single campaign structure is one of the fastest ways to dilute intent and waste budget on traffic that was never going to convert.
Data feed optimization
For ecommerce brands running Shopping and retail Performance Max, the product feed is the foundation everything else is built on. Weak titles, missing attributes, and poor categorization limit how often your products show up and who sees them.
Most brands (including Google-native ones) underinvest here because the work is unglamorous. But a well-optimized feed consistently outperforms one that’s neglected after setup.
Keyword research
Paid search is a keyword-driven channel, which makes keyword strategy its own discipline. Understand match types, search volume, commercial intent, and the relationship between what people type and what they actually want. This takes time to develop, but brands that skip this step usually over-restrict their reach or bleed spend on irrelevant traffic.
Landing pages
Sending high-intent but unfamiliar visitors straight to a product page on Google often underperforms. A more engaging landing page format, like an advertorial, puts that traffic in front of context and trust before asking for the sale.
Brands coming from paid social often overlook this because the funnel architecture they’re used to doesn’t require it.
Mistake 3: Letting preventable disruptions reset the algorithm
Google’s algorithms need consistent data to make the best decisions for your account. But every time a campaign goes dark — for a day or a week — there’s a risk that the learning resets. What feels like a minor admin issue can mean weeks of degraded performance and wasted ad spend.
Two types of disruption come up more than any other.
Payments: Brands switching to invoice billing or changing card details mid-flight will sometimes see campaigns pause without realizing it until the damage is done. A lapsed payment that takes three days to resolve can cost far more than the bill itself once you factor in recovery time.
Tracking and feed integrity: A broken pixel means no conversion data and forces Smart Bidding to optimize blind. A feed error in Merchant Center means products disappear from Shopping and Performance Max. Neither of these failures is loud; they tend to surface slowly as declining performance that gets misattributed.
Both are preventable with automated alerts, weekly feed audits, and a person or AI agent responsible for monitoring account health between reporting cycles. The cost of that monitoring is low compared to what happens if you only discover issues after the fact.
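Automating that monitoring doesn't take much. Below is a minimal sketch using the official google-ads Python client; the customer ID is a placeholder, credentials are assumed to live in google-ads.yaml, and the GAQL fields should be verified against the current API version before you rely on this.

```python
# Minimal account-health check: surface campaigns that stopped serving and
# days with zero recorded conversions (a possible broken pixel).
# Sketch only: CUSTOMER_ID is a placeholder; credentials come from google-ads.yaml.
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # placeholder

client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml
ga_service = client.get_service("GoogleAdsService")

# Campaigns sitting paused (e.g., after a billing lapse nobody noticed).
paused = ga_service.search(
    customer_id=CUSTOMER_ID,
    query="""
        SELECT campaign.id, campaign.name
        FROM campaign
        WHERE campaign.status = 'PAUSED'
    """,
)
for row in paused:
    print(f"Paused: {row.campaign.name}")

# Daily conversions for the last week; a run of zeros suggests broken tracking.
conv = ga_service.search(
    customer_id=CUSTOMER_ID,
    query="""
        SELECT segments.date, metrics.conversions
        FROM customer
        WHERE segments.date DURING LAST_7_DAYS
    """,
)
for row in conv:
    if row.metrics.conversions == 0:
        print(f"No conversions recorded on {row.segments.date}")
```

Wire the output into a Slack or email alert and the three-day billing lapse becomes a same-day fix.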
Mistake 4: Building a campaign structure that’s too granular
The instinct among detail-oriented advertisers is to segment everything, because granularity feels like control.
One campaign per product category.
One ad group per keyword.
Separate budgets for every audience.
But Google’s automation needs data to make good decisions. When you spread your budget across too many campaigns, each one operates on thin resources and even thinner information. Smart Bidding can’t optimize effectively without sufficient conversion volume, so campaigns stuck below that threshold tend to underperform and stay there.
By over-segmenting, you’ve created the appearance of precision while actually limiting the system’s ability to learn.
The same logic applies to budget. Ten campaigns with a modest shared budget will almost always produce worse results than three well-funded ones. Google needs room to test, adjust, and find the traffic worth paying for. Fragmented budgets don’t allow it to do that.
Build a tighter structure with fewer campaigns, clearly defined goals, and enough budget to compete. This gives the algorithm what it needs while keeping the account manageable enough to oversee effectively.
Mistake 5: Leaving campaigns on Max Conversion Value with no ROAS targets
Max Conversion Value is a Smart Bidding strategy that tells Google to spend your budget in whatever way generates the highest total conversion value: no ceiling, no floor, no efficiency guardrail. Left unsupervised, it will find conversions, but it won't care what it costs to get them.
For brands new to Google Ads, this setting can trick you into thinking you're crushing it. Conversion value climbs, making the account appear healthy. The problem surfaces when you look at what you actually spent to generate that value.
Without a target ROAS, Google has no efficiency constraint and optimizes for volume, not profitability. But the fix is straightforward.
Once you have enough conversion data, set a realistic target.
A ROAS goal gives the algorithm a constraint, and shifts the objective from spending budget to spending it well.
Targets set too aggressively too early can starve campaigns of traffic before they’ve had a chance to learn.
Bring patience and a willingness to adjust gradually rather than chasing the ideal number from day one (a quick way to derive a starting target is sketched below).
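For that first target, a trailing-window calculation beats a finger-in-the-air number. A tiny sketch; the 10% buffer is my own assumption, there to keep the target from choking delivery.

```python
# Derive a starting tROAS from recent actuals rather than an aspirational goal.
# The 10% haircut is an assumption that leaves the algorithm room to learn.
def starting_troas(conv_value: float, cost: float, haircut: float = 0.10) -> float:
    """Trailing ROAS minus a buffer, so the target doesn't starve delivery."""
    return (conv_value / cost) * (1 - haircut)

# e.g., $42,000 in conversion value on $12,000 spend over the last 30 days
print(f"{starting_troas(42_000, 12_000):.0%}")  # ~315%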
Mistake 6: Underfunding campaigns and keeping them stuck in learning
When you launch a Google campaign or make a significant change (like doubling the budget), it enters a new learning period. This is the window for gathering data, testing different auctions, and calibrating toward the conversion patterns you’ve defined.
It’s a normal part of how the platform works, and every campaign goes through it.
But the learning period requires a minimum volume of conversions to complete. Google typically needs around 30-50 conversion events in a short window before bidding stabilizes. A campaign that’s underfunded for this milestone will stay in learning indefinitely.
It’s a common trap for brands being cautious when testing Google.
You run your first campaign on a small budget.
CPAs look inflated and the data is inconclusive, so you either hold back further investment or cut the campaign entirely.
In reality, the campaign never had what it needed to graduate out of the learning phase.
You walk away from net-new revenue before you’ve even scratched the surface.
Funding a new campaign adequately from the start — even if it means consolidating into fewer campaigns and chasing fewer goals — gives it the best chance of learning fast and delivering accurate results sooner.
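To gut-check "adequately," work backward from the conversion heuristic above. This is rough arithmetic only; the CPA and window length are assumptions to swap for your own numbers.

```python
# Rough budget floor for escaping learning, using the 30-50 conversion
# heuristic above. Expected CPA and window length are placeholder assumptions.
def min_daily_budget(expected_cpa: float, conversions_needed: int = 50,
                     window_days: int = 30) -> float:
    return expected_cpa * conversions_needed / window_days

# e.g., a $40 expected CPA implies ~$67/day to plausibly clear
# 50 conversions within a 30-day window
print(f"${min_daily_budget(40):.0f}/day")
```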
Adding Google to the mix is the right call: Here’s what to do next
Diversifying away from a single ad platform is one of the smartest moves an ecommerce brand can make once it’s mature enough to fight on two fronts. It decouples growth from any one platform’s algorithm changes, auction dynamics, seasonality, and terms of service.
Adding Google to Meta also gives you access to a different kind of demand that is actively expressed rather than passively targeted, which is a meaningful advantage worth building on.
These six mistakes are not reasons you should avoid Google, but a preventative guide to help you approach it with realistic expectations and enough patience to let the system learn. Treating it like a direct analog of what you’re already doing on Meta will make you leave before seeing what’s truly possible.
Google launched a channel performance timeline view in Performance Max. It gives you a clearer breakdown of how Search, YouTube, Display, and other channels contribute to campaign results over time.
What’s new. A timeline graph shows channel-level contributions over a selected period, paired with investment and performance filters. You can quickly see which channels are pulling their weight — and which aren’t.
[Screenshot: the yellow box highlights channel performance evolution over time; the pink box (right) highlights the All Ads, Ads Using Product Lists, and Ads Using Video filters.]
Why we care. Performance Max campaigns run across multiple channels at once, making it difficult to see where your budget is most effective. This gives you a timeline view of channel-level contributions — so if YouTube is underperforming while Search drives most conversions, you can see it without digging through exports or relying on guesswork. You can spot channel-level trends earlier and adjust your asset strategy or budget accordingly.
The big picture. This view gives you a more actionable way to evaluate PMax performance without relying solely on Google’s automated decisions.
Bottom line. It’s not full transparency, but it’s a meaningful step in the right direction. You get a cleaner way to spot PMax trend anomalies early and adjust accordingly.
First spotted. This update was first spotted by Axel Falck, Head of Search at Le Mage du SEA, who shared it on LinkedIn.
Tracking your brand’s visibility in AI-powered search is the new frontier of SEO. The tools built to do this are expensive, often starting at $300 to $500 per month and quickly rising from there. For many, that price is a nonstarter, especially when custom testing needs go beyond what off-the-shelf software can handle.
I faced this exact problem. I needed a specific tool, and it didn’t exist at a price I could afford, so I decided to build it myself. I’m not a developer. I spent a weekend talking to an AI agent in plain English, and the result was a working AI search visibility tracker that does exactly what I need.
Below is the guide I wish I’d had when I started: a step-by-step playbook for building your own custom tool, covering the technology, the process, what broke, and how to get it right faster.
The problem: A custom tool for a complex landscape
My goal was to automate an AI engine optimization (AEO) testing protocol. This wasn’t just about checking one or two models. To get a full picture of AI-driven brand visibility, I knew from the start that we had to track five distinct, critical surfaces:
ChatGPT (via API): The most well-known conversational AI.
Claude (via API): A major competitor with a different response style.
Gemini (via API): Google’s conversational model.
Google AI Mode: Google’s conversational search experience.
Google AI Overviews: The AI-generated answers embedded in standard search results.
On top of that, I needed to score the results using a custom 5-point rubric: brand name inclusion, accuracy, correctness of pricing, actionability, and quality of citations. No existing SaaS tool offered this exact combination of surfaces and custom scoring. The only path forward was to build.
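For a sense of what "custom scoring" can mean in code: the mechanical criteria (brand inclusion, citation presence) are scriptable, while accuracy, pricing correctness, and actionability usually need a human or an LLM grader. The sketch below is my own hypothetical illustration, not the tool's actual implementation.

```python
# Hypothetical sketch of the scriptable half of a scoring rubric. Accuracy,
# pricing correctness, and actionability still need a human or LLM grader.
from dataclasses import dataclass

@dataclass
class ResponseScore:
    brand_mentioned: bool
    citation_count: int
    citations_look_valid: bool  # crude URL sanity check, not a full resolver

def score_mechanics(response_text: str, citations: list[str], brand: str) -> ResponseScore:
    return ResponseScore(
        brand_mentioned=brand.lower() in response_text.lower(),
        citation_count=len(citations),
        citations_look_valid=all(u.startswith("http") for u in citations),
    )

print(score_mechanics("Acme tops our list for...", ["https://acme.com"], "Acme"))
```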
Here are a few screenshots of the internal tool as it stands. You can see some of my frustration in the agent chat window.
The method: Using vibe coding to build the tool
This project was built using vibe coding, a way of turning natural language instructions into a working application with an AI agent. You focus on the goal, the “vibe,” and the AI handles the complex code.
This isn’t a fringe concept. With 84% of developers now using AI coding tools and a quarter of Y Combinator’s Winter 2025 startups being built with 95% AI-generated code, this method has become a viable way for non-developers to create powerful internal tools.
You can replicate this entire project with just three things, keeping your monthly cost under $100.
Replit Agent
This is a development environment that lives entirely in your web browser. Its AI agent lets you build and deploy applications just by describing what you want. You don’t need to install anything on your computer. The plan I used costs $20/month.
DataForSEO APIs
This was the backbone of the project. Their APIs let you pull data from all the different AI surfaces through a single, unified system.
You can get responses from models like ChatGPT and Claude, and pull the specific results from Google’s AI Mode and AI Overviews. It has pay-as-you-go pricing, so you only pay for what you use.
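For a sense of the request shape: a live Google SERP task is a POST with HTTP basic auth and a list of task objects. The endpoint path and payload fields below reflect my reading of DataForSEO's docs; verify them against the current documentation before building on this.

```python
# General shape of a DataForSEO live SERP request (HTTP basic auth, task-list
# payload). Endpoint and fields are assumptions to verify against their docs.
import requests

LOGIN, PASSWORD = "your-login", "your-password"  # placeholders

resp = requests.post(
    "https://api.dataforseo.com/v3/serp/google/organic/live/advanced",
    auth=(LOGIN, PASSWORD),
    json=[{
        "keyword": "best project management software",
        "location_code": 2840,   # United States
        "language_code": "en",
    }],
    timeout=60,
)
data = resp.json()
for task in data.get("tasks", []):
    for result in task.get("result") or []:
        for item in result.get("items") or []:
            print(item.get("type"), item.get("url"))
```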
Direct LLM APIs (optional but recommended)
I also set up direct connections to the APIs for OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini). This was useful for double-checking results and debugging when something seemed off.
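The cross-check itself can be as simple as sending the same prompt to each provider and eyeballing the differences. The model names below are assumptions; substitute whatever current models you have access to.

```python
# Cross-check one prompt against two providers directly. Model names are
# assumptions; swap in current models you have access to.
from openai import OpenAI
import anthropic

PROMPT = "What's the best AI search visibility tracker?"

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
oa = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any current chat model works here
    messages=[{"role": "user", "content": PROMPT}],
)
print("OpenAI:", oa.choices[0].message.content[:200])

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
an = anthropic_client.messages.create(
    model="claude-sonnet-4-5",  # assumption: substitute a current model
    max_tokens=500,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Anthropic:", an.content[0].text[:200])
```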
The playbook: A step-by-step guide to building your tool
Building with an AI agent is a partnership. The AI will only do what you ask, so your job is to be a clear and effective guide.
Here’s a repeatable framework that will help you avoid the biggest mistakes.
Step 1: Write a requirements document first
Before you even open Replit, create a simple text document that outlines exactly what you need. This is your blueprint. Include:
The core problem you’re solving.
Every feature you want (e.g., CSV upload, custom scoring, data export).
The data you’ll put in, and the reports you want out.
Any APIs you know you’ll need to connect to.
Start your conversation with the AI agent by uploading this document. It will serve as the foundation for the entire build.
Step 2: Ask the AI, ‘What am I missing?’
This is the most important step. After you provide your requirements, the AI has context. Now, ask it to find the blind spots. Use these exact questions:
“What am I not accounting for in this plan?”
“What technical issues should I know about?”
“How should data be stored so my results don’t disappear?”
That last question is critical. I didn’t ask it, and I lost a whole batch of test results because the agent hadn’t built a database to save them.
Step 3: Build one feature at a time and test it
Don’t ask the AI to build everything at once. Give it one small task, like “build a screen where I can upload a CSV file of prompts.”
Once the agent says it’s done, test that single feature. Does it work? Great. Now move to the next one.
This incremental approach makes it much easier to find and fix problems.
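Here is what "test that single feature" can look like in practice, using a hypothetical parse_prompts_csv as a stand-in for whatever function the agent actually generated.

```python
# Hypothetical smoke test for one feature in isolation. parse_prompts_csv is
# an invented name standing in for whatever the agent generated.
import csv
import io

def parse_prompts_csv(file_obj) -> list[str]:
    return [row["prompt"].strip() for row in csv.DictReader(file_obj) if row.get("prompt")]

def test_parse_prompts_csv():
    sample = io.StringIO("prompt\nbest crm for startups\nbest seo tools\n")
    assert parse_prompts_csv(sample) == ["best crm for startups", "best seo tools"]

test_parse_prompts_csv()
print("CSV prompt upload: OK")
```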
Step 4: Give the agent the exact API documentation
When it’s time to connect to an API like DataForSEO, don’t assume the AI knows how it works. Find the API documentation page for what you’re trying to do, and give the URL directly to the agent.
A simple instruction like, “Read the documentation at this URL to implement the authentication,” will save you hours of frustration. My first attempt at connecting failed because the agent guessed the wrong method.
Step 5: Save working versions
Before you ask for a major new feature, save a copy of your project. In Replit, this is called “forking.” New features can sometimes break old ones.
I learned this when the agent was working on my results table, and it accidentally broke the CSV upload feature that had been working perfectly. Having a saved version makes it easy to go back and see what changed.
The troubleshooting guide: What broke, and how to fix it
Nearly everything will break at some point. That’s part of the process. Here are the most common issues I ran into, and the lessons I learned, so you can be prepared.
Each entry below names the problem, then the lesson and how to fix it.
1. API authentication fails
The agent will often try a generic method.
Fix: Give the agent the exact URL to the API’s authentication documentation.
2. Results disappear
The agent may not build a database by default, storing data in temporary memory instead.
Fix: In your first step, ask the agent to include a database for persistent storage.
3. API responses don’t show up
You might see data in your API provider’s dashboard, but it’s missing in your app. This is usually a parsing error.
Fix: Copy the raw JSON response from your API provider, and paste it into the chat. Say, “The app isn’t displaying this data. Find the error in the parsing logic.”
4. Model responses are cut short
An LLM like Claude might suddenly start giving one-word answers. This often means the token limit was accidentally changed.
Fix: After any update, run a quick test on all your connected AI surfaces to ensure the basic parameters haven’t changed.
5. API results don’t match the public version
ChatGPT’s public website provides web citations, but the API might not.
Fix: Realize that APIs often have different default settings. You may need to explicitly tell the agent to enable features like web search for the API call.
6. Citation URLs are unusable
Gemini’s API returned long, encoded redirect links instead of the final source URLs.
Fix: Inspect the raw data. You may need to ask the agent to build a post-processing step, like a redirect resolver, to clean up the data (see the sketch after this list).
7. Your app isn’t updated
You build a great new feature, but it doesn’t seem to be working in the live app.
Fix: Understand the difference between your development environment and your production app. You need to explicitly “publish” or “deploy” your changes to make them live.
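For issue 6, the post-processing step can be a small redirect resolver. A sketch only; some redirectors answer only to GET, so swap requests.head for requests.get with stream=True if HEAD comes back empty.

```python
# Resolve redirect-wrapped citation URLs to their final destinations.
# Sketch only: failures fall back to the raw link unchanged.
import requests

def resolve_redirect(url: str) -> str:
    try:
        # Follow redirects without downloading the response body.
        return requests.head(url, allow_redirects=True, timeout=10).url
    except requests.RequestException:
        return url

print(resolve_redirect("https://example.com/redirect-wrapped-link"))  # placeholder URL
```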
The real costs: Is it worth it?
Building this tool saved me a significant amount of money. Here’s a simple cost comparison against a mid-tier SaaS tool.
Item | DIY tool (my project) | SaaS alternative
Software subscription | ~$20/month (Replit) | $500/month
API usage | ~$60/month (variable) | Included
Total monthly cost | ~$80/month | $500/month
The biggest cost is your time. I spent a weekend and several evenings building the first version. However, I now have an asset that I can modify and reuse for any client without my costs increasing.
The hidden costs are real: there’s no customer support, and you are responsible for maintenance. But for many, the savings and customization are worth it.
The decision: Build your own or buy SaaS?
This approach isn’t for everyone. Here’s a simple guide to help you decide.
Build your own if:
You need a custom testing method that no SaaS tool offers.
You want a white-labeled tool for your agency.
Your budget is tight, but you have the time to invest in the process.
Stick with a SaaS tool if:
Your time is more valuable than the monthly subscription fee.
You need enterprise-level security and dedicated support.
Standard, off-the-shelf features are good enough for your needs.
For many SEOs, the answer is clear. The ability to build a tool that works exactly the way you do, for less than $100 a month, is a game-changer.
The process will be frustrating at times, but you will end up with something that gives you a unique advantage. The era of the practitioner-developer is here. It’s time to start building.
Google Ads added an auto-apply setting to experiments. It’s on by default, so winning variants can go live without review.
How it works. You choose directional results (default) or statistical significance at 80%, 85%, or 95% confidence. One safeguard: if your chosen success metric performs significantly worse in the test arm, the change won’t auto-apply.
Why we care. Experiments are one of the most powerful tools in your account. Automating apply can speed testing, but removes a checkpoint where you catch unintended consequences before they hit live campaigns.
The catch. Experiments allow only two success metrics. A third metric you care about — one you didn’t or couldn’t select — can decline unnoticed. Guardrails protect what you told Google to watch, not everything that matters.
Bottom line. Auto-apply is a reasonable shortcut for simple tests. For anything consequential, keep manual review. Run the experiment, reach significance, then review full data before you apply changes.
First seen. Google Ads specialist Bob Meijer shared this update on LinkedIn.
Bing appears to be testing an expanded sponsored products section in its shopping results, featuring a double-row carousel that takes up significantly more space than the current format.
The test. The format pairs a large, double-row sponsored carousel with organic cards from individual sites below.
Why we care. If this rolls out broadly, it means more screen space for sponsored products — typically leading to higher visibility and more clicks if you run Microsoft Shopping campaigns. The double-row carousel is also more visually competitive, bringing Bing’s shopping ads closer to Google Shopping’s prominence.
The catch. The test appears limited — not all users see it. Search industry veteran Mordy Oberstein reported a more compact layout, suggesting Bing is still in early testing.
Bottom line. Bing runs many SERP experiments that never fully launch, so watch this one for now. If you run Microsoft Shopping campaigns, monitor impressions for any lift if the format expands.
First spotted. Sachin Patel shared a screenshot of the test on X.
SEO tools were the most replaced martech application in 2025 — but not for the reason you might expect.
According to the 2025 MarTech Replacement Survey, SEO platforms topped the list of replaced tools for the first time, overtaking categories like marketing automation platforms (MAPs), which had led for the past five years.
At first glance, that might suggest instability in SEO. After all, the discipline is being reshaped by LLMs, AI-generated answers, and the rise of zero-click search experiences — all of which challenge traditional keyword tracking and ranking-based workflows.
But the data tells a more nuanced story.
SEO tools: most replaced, but stabilizing
Even though SEO tools were the most replaced category in 2025, they were replaced at a slower rate than in prior years.
In other words, they’re now the most commonly replaced — but also more stable than before.
That shift suggests a maturing category. Rather than widespread churn, marketers appear to be consolidating, upgrading, and refining their SEO stacks as search evolves.
Meanwhile, several other major martech categories saw sharper year-over-year declines in replacements:
CRM replacements fell more than 12% from 2024 to 2025, reaching their lowest level in the survey’s history.
MAPs, email platforms, and CMS tools also declined compared to 2024.
Why SEO tools are being replaced
So if SEO tools aren’t being swapped out due to instability, what’s driving the changes?
The survey points to three primary factors:
1. AI capabilities
For the first time, the survey asked about AI’s role in replacement decisions — and the impact was significant.
37.1% cited AI capabilities as an important factor.
33.9% said they wanted AI capabilities when replacing a tool.
This reflects a broader shift in SEO tooling, with platforms rapidly integrating AI for:
Content generation and optimization.
SERP analysis and intent modeling.
Workflow automation.
In many cases, replacing your SEO tool isn’t about abandoning SEO — it’s about upgrading to AI-native capabilities.
2. Cost pressures
Cost has become a major driver of martech replacement decisions, including SEO tools:
43.8% of marketers cited cost reduction as a reason for replacing an application in 2025.
That’s up sharply from 23% in 2024 and 22% in 2023.
This suggests growing pressure to optimize and rationalize your SEO tech stack, especially as you evaluate overlapping functionality across tools.
3. Changing needs in a shifting search landscape
As search behavior changes, so do expectations for SEO platforms.
Traditional rank tracking and keyword monitoring are no longer sufficient on their own. Teams are increasingly looking for tools that can:
Surface insights across AI-driven SERPs
Track visibility beyond clicks
Integrate with broader marketing and data systems
That evolution is likely contributing to replacement activity — even as overall stability increases.
AI is reviving custom-built SEO tools
One of the more notable trends in the 2025 survey is the resurgence of homegrown solutions, including for SEO workflows.
Replacing commercial martech tools with homegrown applications accounted for:
8.1% of replacements in 2025
Up from 3.4% in 2024 and 5% in 2023
This marks a meaningful shift after years of near-total reliance on commercial platforms.
“AI-assisted coding is changing the calculus of build vs. buy,” said martech analyst Scott Brinker. “It’s easier and faster to build than ever before. Companies should still buy applications where they have no comparative advantage. But in cases where they can tailor capabilities to differentiate their operations or customer experience, custom-built software is an increasingly attractive option.”
For SEO teams, this could mean more organizations building:
Custom data pipelines.
Proprietary SERP tracking systems.
AI-driven analysis tools tailored to their specific needs.
Other martech categories show even greater stability
While SEO tools led in total replacements, the broader martech landscape is becoming more stable.
Several major categories saw declining replacement rates in 2025, including:
CRM platforms (down more than 12% year over year)
Marketing automation platforms
Email distribution tools
Content management systems
This suggests that many organizations are settling into core systems while selectively updating areas — like SEO — that are changing faster.
Methodology
Invitations to take the 2025 MarTech Replacement Survey were distributed via email, website, and social media in Q4 2025.
A total of 207 marketers responded. Findings are based on the 154 respondents (74%) who said they had replaced a martech application in the previous 12 months.