
Why Google’s Performance Max advice often fails new advertisers

When Google reps push Performance Max before your account is ready

One of the biggest reasons new advertisers end up in underperforming Performance Max campaigns is simple: they followed Google’s advice.

Google Ads reps are often well-meaning and, in many cases, genuinely helpful at a surface level. 

But it’s critical for advertisers – especially new ones – to understand who those reps work for, how they’re incentivized, and what their recommendations are actually optimized for.

Before defaulting to Google’s newest recommendation, it’s worth taking a step back to understand why the “shiny new toy” isn’t always the right move – and how advertisers can better advocate for strategies that serve their business, not just the platform.

Google reps are not strategic consultants

Google Ads reps play a specific role, and that role is frequently misunderstood.

They do not:

  • Manage your account long term.
  • Know your margins, cash flow, or true break-even ROAS.
  • Understand your internal goals, inventory constraints, or seasonality.
  • Get penalized when your ads lose money.

Their responsibility is not to build a sustainable acquisition strategy for your business. Instead, their primary objectives are to:

  • Increase platform and feature adoption.
  • Drive spend into newer campaign types.
  • Push automation, broad targeting, and machine learning.

That distinction matters.

Performance Max is Google’s flagship campaign type. It uses more inventory, more placements, and more automation across the entire Google ecosystem. 

From Google’s perspective, it’s efficient, scalable, and profitable. From a new advertiser’s perspective, however, it’s often premature and misaligned with early-stage needs.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

Performance Max benefits Google before it benefits you

Performance Max often benefits Google before it benefits the advertiser. 

Because it automatically spends across Search, Shopping, Display, YouTube, Discover, and Gmail, Google is given near-total discretion over where your budget is allocated. In exchange, advertisers receive limited visibility into what’s actually driving results.

For Google, this model is ideal. It monetizes more surfaces, accelerates adoption of automated bidding and targeting, and increases overall ad spend across the board. For advertisers – particularly those with new or low-data accounts – the reality looks different.

New accounts often end up paying for upper-funnel impressions before meaningful conversion data is available. 

Budgets are diluted across lower-intent placements, CPCs can spike unpredictably, and when performance declines, there’s very little insight into what to fix or optimize. 

You’re often left guessing whether the issue is creative, targeting, bidding, tracking, or placement.

This misalignment is exactly why Google reps so often recommend Performance Max even when an account lacks the data foundation required for it to succeed.

‘Best practice’ doesn’t mean best strategy for your business

What Google defines as “best practice” does not automatically translate into the best strategy for your business.

Google reps operate from generalized, platform-wide guidance rather than a custom account strategy. 

Their recommendations are typically driven by aggregated averages, internal adoption goals, and the products Google is actively promoting next – not by the unique realities of your business.

They are not built around your specific business model, your customer acquisition cost tolerance, your testing and learning roadmap, or your need for early clarity and control. 

As a result, strategies that may work well at scale for mature, data-rich accounts often fail to deliver the same results for new or growing advertisers.

What’s optimal for Google at scale isn’t always optimal for an advertiser who is still validating demand, pricing, and profitability.

Dig deeper: Google Ads best practices: The good, the bad and the balancing act

Smart advertisers earn automation – they don’t start with it

Smart advertisers understand that automation is something you earn, not something you start with.

Even today, Google Shopping Ads remain one of the most effective tools for new ad accounts because they are controlled, intent-driven, and rooted in real purchase behavior.

Shopping campaigns rely far less on historical conversion volume and far more on product feed relevance, pricing, and search intent.

That makes them uniquely well-suited for advertisers who are still learning what works, what converts, and what deserves more budget.

To understand how this difference plays out in practice, consider what happened to a small chocolatier that came to me after implementing Performance Max based on guidance from their dedicated Google Ads rep.

A real-world example: When Performance Max goes wrong

The challenge was straightforward: The retailer’s Google Ads account was new, and Performance Max was positioned as the golden ticket to quickly building nationwide demand.

The result was disastrous.

  • Over $3,000 was spent with a return of just one purchase.
  • Traffic to the website and YouTube channel remained low despite the spend.
  • CPCs climbed as high as $50 per click.
  • ROAS was effectively nonexistent. 

To make matters worse, conversion tracking had not been set up correctly, causing Google to report inflated and inaccurate sales numbers that didn’t align with Shopify at all.

Understandably, the retailer lost confidence – not just in Performance Max, but in paid advertising as a whole. Before walking away entirely, they reached out to me.

Recognizing that this was a new account with no reliable data, I immediately rebuilt the setup as a standard Google Shopping campaign.

We properly connected Google Ads and Google Merchant Center to Shopify to ensure clean, accurate tracking.

From there, the campaign was segmented by product groups, allowing for intentional bidding and clearer performance signals.

Within two weeks, real sales started coming through.

By the end of the month, the brand had acquired 56 new customers at a $53 cost per lead, with an average order value ranging from $115 to $200. 

More importantly, the account now had clean data, clear winners, and a foundation that could actually support automation in the future.

Dig deeper: The truth about Google Ads recommendations (and auto-apply)

Why Shopping ads still work – and still matter

By starting with Shopping campaigns, advertisers can validate products, pricing, and conversion tracking while building clean, reliable data at the product and SKU level.

This early-stage performance proves demand, highlights top-performing items, and trains Google’s algorithm with meaningful purchase behavior.

Shopping Ads also offer a higher level of control and transparency than Performance Max. 

Advertisers can segment by product category, brand, margin, or performance tier, apply negative keywords, and intentionally allocate budget to what’s actually profitable. 

When something underperforms, it’s clear why – and when something works, it’s easy to scale.

This level of insight is invaluable early on, when every dollar spent should be contributing to learning, not just impressions.

The case for a hybrid approach

Standard Shopping consistently outperforms Performance Max for accounts that require granular control over product groups and bidding – especially when margins vary significantly across SKUs and precise budget allocation matters. 

It allows advertisers to double down on proven winners with exact targeting, intentional bids, and full visibility into performance.

That said, once a Shopping campaign has been running long enough to establish clear performance patterns, a hybrid approach can be extremely effective.

Performance Max can play a complementary role for discovery, particularly for advertisers managing broad product catalogs or limited optimization bandwidth. 

Used selectively, it can help test new products, reach new audiences, and expand beyond existing demand – without sacrificing the stability of core revenue drivers.

While Performance Max reduces transparency and control, pairing it with Standard Shopping for established performers creates a balanced strategy that prioritizes profitability while still allowing room for scalable growth.

Dig deeper: 7 ways to segment Performance Max and Shopping campaigns

Control first, scale second

Google reps are trained to recommend what benefits the platform first, not what’s safest or most efficient for a new advertiser learning their market. 

While Performance Max can be powerful, it only works well when it’s fueled by strong, reliable data – something most new accounts simply don’t have yet.

Advertisers who prioritize predictable performance, cleaner insights, and sustainable growth are better served by starting with Google Shopping Ads, where intent is high, control is stronger, and optimization is transparent. 

By using Shopping campaigns to validate products, understand true acquisition costs, and build confidence in what actually converts, businesses create a solid foundation for automation.

From there, Performance Max can be layered in deliberately and profitably – used as a tool to scale proven success rather than a shortcut that drains budget. 

That approach isn’t anti-Google. It’s disciplined, strategic advertising designed to protect spend and drive long-term results.

Microsoft launches Publisher Content Marketplace for AI licensing


Microsoft Advertising today launched the Publisher Content Marketplace (PCM), a system that lets publishers license premium content to AI products and get paid based on how that content is used.

How it works. PCM creates a direct value exchange. Publishers set licensing and usage terms, while AI builders discover and license content for specific grounding scenarios. The marketplace also includes usage-based reporting, giving publishers visibility into how their content performs and where it creates the most value.

Designed to scale. PCM is designed to avoid one-off licensing deals between individual publishers and AI providers. Participation is voluntary, ownership remains with publishers, and editorial independence stays intact. The marketplace supports everyone from global publishers to smaller, specialized outlets.

Why we care. As AI systems shift from answering questions to making decisions, content quality matters more than ever. As agents increasingly guide purchases, finance, and healthcare choices, ads and sponsored messages will sit alongside — or draw from — premium content rather than generic web signals. That raises the bar for credibility and points to a future where brand alignment with trusted publishers and AI ecosystems directly impacts performance.

Early traction. Microsoft Advertising co-designed PCM with major U.S. publishers, including Business Insider, Condé Nast, Hearst, The Associated Press, USA TODAY, and Vox Media. Early pilots grounded Microsoft Copilot responses in licensed content, with Yahoo among the first demand partners now onboarding.

What’s next. Microsoft plans to expand the pilot to more publishers and AI builders that share a core belief: as the AI web evolves, high-quality content should be respected, governed, and paid for.

The big picture. In an agentic web, AI tools increasingly summarize, reason, and recommend through conversation. Whether the topic is medical safety, financial eligibility, or a major purchase, outcomes depend on access to trusted, authoritative sources — many of which sit behind paywalls or in proprietary archives.

The tension. The traditional web bargain was simple: publishers shared content, and platforms sent traffic back. That model breaks down when AI delivers answers directly, cutting clicks while still depending on premium content to perform well.

Bottom line. If AI is going to make better decisions, it needs better inputs — and PCM is Microsoft’s bet that a sustainable content economy can power the next phase of the agentic web.

Microsoft’s announcement. Building Toward a Sustainable Content Economy for the Agentic Web

Inspiring examples of responsible and realistic vibe coding for SEO

Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. It works by describing to the tool what you want in plain language and receiving written code in return. You can then simply paste the code into an environment (such as Google Colab), run it, and test the results, all without ever actually programming a single line of code.

Collins Dictionary named “vibe coding” word of the year in 2025, defining it as “the use of artificial intelligence prompted by natural language to write computer code.”

In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.

Vibe coding variations

While “vibe coding” is used as an umbrella term, there are several subsets of AI-supported coding, including the following:

  • AI-assisted coding: AI helps write, refactor, explain, or debug code. Used by actual developers or engineers to support their complex work. Tools: GitHub Copilot, Cursor, Claude, Google AI Studio.
  • Vibe coding: Platforms that handle everything except the prompt/idea; AI does most of the work. Tools: ChatGPT, Replit, Gemini, Google AI Studio.
  • No-code platforms: Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. Tools: Notion, Zapier, Wix.

We’ll focus exclusively on vibe coding in this guide. 

With vibe coding, while there’s a bit of manual work to be done, the barrier is still low — you basically need a ChatGPT account (free or paid) and access to a Google account (free). Depending on your use case, you might also need access to APIs or SEO tools subscriptions such as Semrush or Screaming Frog.


To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you expect to build a SaaS product or software to sell, AI-assisted coding is a more reasonable option, though it involves costs and deeper coding knowledge.

Vibe coding use cases

Vibe coding is great when you’re trying to find outcomes for specific buckets of data, such as finding related links, adding pre-selected tags to articles, or doing something fun where the outcome doesn’t need to be exact.

For example, I’ve built an app to create a daily drawing for my daughter. I type a phrase about something that she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (drawings) are the final work as they come from AI.

When I ask for specific changes, however, the program tends to worsen and redraw things I didn’t ask for. I once asked to remove a mustache and it recolored the image instead. 

If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough. 

Building commercial applications solely on vibe coding may require a company to hire vibe coding cleaners. However, for a demo, MVP (minimum viable product), or internal applications, vibe coding can be a useful, effective shortcut. 

How to create your SEO tools with vibe coding

Using vibe coding to create your own SEO tools requires three steps:

  1. Write a prompt describing your code
  2. Paste the code into a tool such as Google Colab
  3. Run the code and analyze the results

Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.

This is exactly what I wrote on ChatGPT:

I need a Google Colab code that will use OpenAI to:

Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A). 

The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.

I’ll upload a CSV with these columns and expect a CSV in return with the answers.

Then I pasted the code that ChatGPT created on Google Colab, a free Jupyter Notebook environment that allows users to write and execute Python code in a web browser. It’s important to run your program by clicking on “Run all” in Google Colab to test if the output does what you expected.
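For illustration, here’s a stripped-down sketch of the kind of matching logic such a generated script performs. This is not the actual code ChatGPT produced: it uses plain Python instead of a Colab/CSV setup, toy two-dimensional embeddings, and assumed field names (locale, url, embedding) standing in for the spreadsheet columns described in the prompt.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cross_locale_matches(pages, top_n=2):
    """For each page, find the top_n most similar pages in *other* locales —
    candidate pairs for hreflang linking."""
    results = {}
    for page in pages:
        candidates = [p for p in pages if p["locale"] != page["locale"]]
        scored = sorted(
            candidates,
            key=lambda c: cosine(page["embedding"], c["embedding"]),
            reverse=True,
        )
        results[page["url"]] = [c["url"] for c in scored[:top_n]]
    return results

# Toy data: real embeddings from Screaming Frog's OpenAI integration would
# have hundreds of dimensions, not two.
pages = [
    {"locale": "en", "url": "/en/a", "embedding": [1.0, 0.0]},
    {"locale": "en", "url": "/en/b", "embedding": [0.0, 1.0]},
    {"locale": "fr", "url": "/fr/a", "embedding": [0.9, 0.1]},
    {"locale": "fr", "url": "/fr/b", "embedding": [0.1, 0.9]},
]
matches = cross_locale_matches(pages, top_n=1)
print(matches)  # /en/a pairs with /fr/a, /en/b with /fr/b
```

In the real workflow, the same loop would read the crawl export with pandas and write the matched pairs back out as a CSV, exactly as the prompt requested.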

This is how the process works on paper. Like everything in AI, it may look perfect but not always function exactly how you want.

You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.

First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab. 

You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is, just show the error and use the new code. Alternatively, you can ask Gemini directly in your Google Colab to fix the issue and update your code directly.
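One defensive pattern worth knowing is having the notebook install a missing package on the fly when the import fails. This is a generic Python idiom, not something the generated code will necessarily include:

```python
# Colab-friendly pattern: try the import, and pip-install the package only if
# it's missing. The package name passed in is just an example.
import importlib
import subprocess
import sys

def ensure(package):
    """Import a module, pip-installing it first if it isn't available."""
    try:
        return importlib.import_module(package)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        return importlib.import_module(package)

json_mod = ensure("json")  # stdlib module, so nothing gets installed here
print(json_mod.dumps({"ok": True}))
```

One caveat: pip package names don’t always match import names (the pip package behind `import bs4` is `beautifulsoup4`, for example), which is why pasting the exact error message into ChatGPT or Gemini remains the simplest route.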

AI tends to be very confident about anything and could return completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output because “it looks good” can sometimes be wrong.

If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs. 

Should you want an even lower execution barrier than Google Colab, you can try using Replit. 

Simply prompt your request, and the software will create the code and design and let you test it, all on the same screen. This means a lower chance of coding errors, no copying and pasting, and a URL you can share right away so anyone can see your project, built with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)

Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.

Inspiring examples of SEO vibe-coded tools

While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.

The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire, build, and adapt to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.

GBP Reviews Sentiment Analyzer (Celeste Gonzalez)

After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.

Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.
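To make the two metrics concrete, here’s a rough Python illustration of what a 30-day sentiment summary plus review velocity calculation involves. This is not Celeste’s extension code (which runs as JavaScript in Chrome and uses real sentiment analysis); the naive word-list scorer below is a stand-in for a proper model.

```python
from datetime import date, timedelta

# Toy word lists standing in for a real sentiment model.
POSITIVE = {"great", "friendly", "amazing", "love"}
NEGATIVE = {"slow", "rude", "bad", "terrible"}

def score(text):
    """Crude sentiment score: positive word hits minus negative word hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def summarize(reviews, today, window_days=30):
    """Summarize sentiment and review velocity over the trailing window."""
    cutoff = today - timedelta(days=window_days)
    recent = [r for r in reviews if r["date"] >= cutoff]
    sentiment = sum(score(r["text"]) for r in recent)
    velocity = len(recent) / window_days  # reviews per day
    return {"reviews": len(recent), "sentiment": sentiment,
            "velocity": round(velocity, 2)}

reviews = [
    {"date": date(2025, 1, 20), "text": "Great service, friendly staff"},
    {"date": date(2025, 1, 25), "text": "Slow and rude"},
    {"date": date(2025, 1, 28), "text": "Love it"},
    {"date": date(2024, 11, 1), "text": "Amazing"},  # outside the 30-day window
]
summary = summarize(reviews, today=date(2025, 2, 1))
print(summary)  # 3 recent reviews, net-positive sentiment, 0.1 reviews/day
```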

Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).

AI tools used: Claude (Sonnet 4.5 model) and Cursor 

APIs used: Google Business Profile API (free)

Platform hosting: Chrome Extension

Knowledge Panel Tracker (Gus Pelogia)

I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I found out that Google has a Knowledge Graph Search API that allows you to check the confidence score for any entity.

This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.

The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > App Scripts. 
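The tracker itself runs as Apps Script inside Google Sheets, but the core request-and-parse step is simple enough to sketch in Python. The endpoint below is the public Knowledge Graph Search API; the API key is a placeholder you’d replace with your own, and the sample payload is a trimmed illustration of the real response shape.

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder — request your own key in Google Cloud

def kg_request_url(entity, api_key=API_KEY, limit=1):
    """Build the entities:search request URL for a given entity name."""
    params = urllib.parse.urlencode(
        {"query": entity, "key": api_key, "limit": limit}
    )
    return f"https://kgsearch.googleapis.com/v1/entities:search?{params}"

def top_score(api_response_json):
    """Pull the confidence score (resultScore) of the best-matching entity."""
    data = json.loads(api_response_json)
    items = data.get("itemListElement", [])
    return items[0]["resultScore"] if items else None

# Trimmed example of the response payload this API returns:
sample = json.dumps({
    "itemListElement": [
        {"result": {"name": "Example Person"}, "resultScore": 211.5}
    ]
})
print(kg_request_url("Example Person"))
print(top_score(sample))  # 211.5
```

A scheduled job (a daily Apps Script trigger in the Sheets version) would fetch each entity’s URL with `urllib.request.urlopen`, run `top_score` on the body, and append the result as a new row, which is how the score history accumulates over time.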

AI models used: ChatGPT 5.1

APIs used: Google Knowledge Graph API (free)

Platform hosting: Google Sheets

Inbox Hero Game (Vince Nero)

How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.

Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.

The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.

According to him, the longer the prompt continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.” 

This issue was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., score), and ChatGPT would “guarantee” it found the error, update the file, but still return with the same error. 


In the end, Inbox Hero Game is a fun game that demonstrates it’s possible to create a simple game without coding knowledge, though perfecting it would be more feasible with a developer’s help.

AI models used: ChatGPT

APIs used: None

Platform hosting: Webpage

Vibe coding with intent

Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift. 

The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.

When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.

LinkedIn: AI-powered search cut traffic by up to 60%

AEO playbook

AI-powered search gutted LinkedIn’s B2B awareness traffic. Across a subset of topics, non-brand organic visits fell by as much as 60% even while rankings stayed stable, the company said.

  • LinkedIn is moving past the old “search, click, website” model and adopting a new framework: “Be seen, be mentioned, be considered, be chosen.”

By the numbers. In a new article, LinkedIn said its B2B organic growth team started researching Google’s Search Generative Experience (SGE) in early 2024. By early 2025, when SGE evolved into AI Overviews, the impact became significant.

  • Non-brand, awareness-driven traffic declined by up to 60% across a subset of B2B topics.
  • Rankings stayed stable, but click-through rates fell (by an undisclosed amount).

Yes, but. LinkedIn’s “new learnings” are more like a rehash of established SEO/AEO best practices. Here’s what LinkedIn’s content-level guidance consists of:

  • Use strong headings and a clear information hierarchy.
  • Improve semantic structure and content accessibility.
  • Publish authoritative, fresh content written by experts.
  • Move fast, because early movers get an edge.

Why we care. These tactics should all sound familiar. These are technical SEO and content-quality fundamentals. LinkedIn’s article offers little new in terms of tactics. It’s just updated packaging for modern SEO/AEO and AI visibility.

Dig deeper. How to optimize for AI search: 12 proven LLM visibility tactics

Measurement is broken. LinkedIn said its big challenge is the “dark” funnel. It can’t quantify how visibility in LLM answers impacts the bottom line, especially when discovery happens without a click.

  • LinkedIn said its B2B marketing websites saw triple-digit growth in LLM-driven traffic, and that it can track conversions from those visits.
    • Yes, but: Many websites are seeing similar triple-digit (or greater) growth in LLM-driven traffic simply because it’s an emerging channel. That said, it’s still a tiny share of overall traffic right now (1% or less for most sites).

What LinkedIn is doing. LinkedIn created an AI Search Taskforce spanning SEO, PR, editorial, product marketing, product, paid media, social, and brand. Key actions included:

  • Correcting misinformation that showed up in AI responses.
  • Publishing new owned content optimized for generative visibility.
  • Testing LinkedIn (social) content to validate its strength in AI discovery.

Is it working? LinkedIn said early tests produced a meaningful lift in visibility and citations, especially from owned content. At least one external datapoint (Semrush, Nov. 10, 2025) suggested that LinkedIn has a structural advantage in AI search:

  • Google AI Mode cited LinkedIn in roughly 15% of responses.
  • LinkedIn was the #2 most-cited domain in that dataset, behind YouTube.

Incomplete story. LinkedIn’s article is an interesting read, but it’s light on specifics. Missing details include:

  • The exact topic set behind the “up to 60%” decline.
  • Exactly how much click-through rates “softened.”
  • Sample size and timeframe.
  • How “industry-wide” comparisons were calculated.
  • What tests were run, what moved citation share, and by how much.

Bottom line. LinkedIn is right that visibility is the new currency. However, it hasn’t shown enough detail to prove its new playbook is meaningfully different from doing some SEO (yes, SEO) fundamentals.

LinkedIn’s article. How LinkedIn Marketing Is Adapting to AI-Led Discovery

Are we ready for the agentic web?

Innovations are coming at marketers and consumers faster than before, raising the question: Are we actually ready for the agentic web?

To answer that question, it’s important to unpack a few supporting ones:

  • What’s the agentic web?
  • How can the agentic web be used?
  • What are the pros and cons of the agentic web?

It’s important to note that this article isn’t a mandate for AI skeptics to abandon the rational questions they have about the agentic web. 

Nor is it intended to place any judgment on how you, as a consumer or professional, engage with the agentic web.

LinkedIn poll on Copilot Checkout

With thoughts and feelings so divided on the agentic web, this article aims to provide clear insight into how to think about it in earnest, without the branding or marketing fluff.

Disclosure: I am a Microsoft employee and believe in the path Microsoft’s taking with the agentic web. However, this article will attempt to be as platform-agnostic as possible.

What’s the agentic web? 

The agentic web refers to sophisticated tools, or agents, trained on our preferences that act with our consent to accomplish time-consuming tasks.

In simple terms, when I use one-click checkout, I allow my saved payment information to be passed to the merchant’s accounts receivable systems. 

Neither the merchant nor I must write down all the details or be involved beyond consenting to send and receive payment.

For fun, I put this question to four different AI models, and the responses are telling: 

  • Copilot: “The agentic web is an emerging layer of the internet where AI agents can reason and take actions on behalf of people or organizations, turning expressed intent into outcomes while preserving user choice, transparency, and control.” 
  • Gemini: “The agentic web is an evolution of the internet where autonomous AI agents – rather than just human users – interact with websites, APIs, and each other to perform complex tasks and make decisions on our behalf.”
  • Perplexity: “The agentic web is an emerging phase of the internet where autonomous AI agents act as the primary interface, navigating, interpreting, and taking actions across websites and services on behalf of users with minimal human involvement.” 
  • Claude: “The agentic web is an evolution of the internet where AI agents autonomously navigate, interact with, and complete tasks across websites and services on behalf of users.” 

I begin with how different models answer the question because it’s important to understand that each one is trained on different information, and outcomes will inevitably vary.

It’s worth noting that, given the same prompt (define the agentic web in one sentence), three of the four models focus on diminishing the human role in navigating the web, while one makes a point of emphasizing the significance of human involvement: preserving user choice, transparency, and control.

Two out of four refer to the agentic web as a layer or phase rather than an outright evolution of the web. 

This is likely where the sentiment divide on the agentic web stems from.

Some see it as a consent-driven layer designed to make life easier, while others see it as a behemoth that consumes content, critical thinking, and choice.

It’s noteworthy that one model, Gemini, calls out APIs as a means of communication in the agentic web. APIs are essentially libraries of information that can be referenced, or called, based on the task you are attempting to accomplish. 

This matters because APIs will become increasingly relevant in the agentic web, as saved preferences must be organized in ways that are easily understood and acted upon.

Defining the agentic web requires spending some time digging into two important protocols – ACP and UCP.

Dig deeper: AI agents in SEO: What you need to know

Agentic Commerce Protocol: Optimized for action inside conversational AI 

The Agentic Commerce Protocol, or ACP, is designed around a specific moment: when a user has already expressed intent and wants the AI to act.

The core idea behind ACP is simple. If a user tells an AI assistant to buy something, the assistant should be able to do so safely, transparently, and without forcing the user to leave the conversation to complete the transaction.

ACP enables this by standardizing how an AI agent can:

  • Access merchant product data.
  • Confirm availability and price.
  • Initiate checkout using delegated, revocable payment authorization.

The experience is intentionally streamlined. The user stays in the conversation. The AI handles the mechanics. The merchant still fulfills the order.
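The three steps above can be modeled in miniature. To be clear, this is a hypothetical toy, not the real ACP schema: the class names, catalog, and status strings are all invented for illustration, and only the shape of the flow (product lookup, availability check, revocable delegated authorization) comes from the description above.

```python
from dataclasses import dataclass

@dataclass
class PaymentAuthorization:
    """Delegated payment consent the user can withdraw at any time."""
    token: str
    revoked: bool = False

    def revoke(self):
        self.revoked = True

# Stand-in for the merchant product data an agent would access.
CATALOG = {"sku-123": {"name": "Dark chocolate box", "price": 24.00, "in_stock": True}}

def agent_checkout(sku, auth):
    product = CATALOG.get(sku)                  # 1. access merchant product data
    if not product or not product["in_stock"]:  # 2. confirm availability and price
        return {"status": "unavailable"}
    if auth.revoked:                            # 3. delegated auth must still be valid
        return {"status": "authorization_revoked"}
    return {"status": "ordered", "item": product["name"], "charged": product["price"]}

auth = PaymentAuthorization(token="tok_abc")
first = agent_checkout("sku-123", auth)
print(first)            # order goes through while consent holds
auth.revoke()
second = agent_checkout("sku-123", auth)
print(second)           # user pulled consent; no charge is made
```

The point of the sketch is the revocability check: the agent acts only while the user’s delegated authorization remains valid, which is the consent property ACP is built around.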

This approach is tightly aligned with conversational AI platforms, particularly environments where users are already asking questions, refining preferences, and making decisions in real time. It prioritizes speed, clarity, and minimal friction.

Universal Commerce Protocol: Built for discovery, comparison, and lifecycle commerce 

The Universal Commerce Protocol, or UCP, takes a broader view of agentic commerce.

Rather than focusing solely on checkout, UCP is designed to support the entire shopping journey on the agentic web, from discovery through post-purchase interactions. It provides a common language that allows AI agents to interact with commerce systems across different platforms, surfaces, and payment providers. 

That includes: 

  • Product discovery and comparison.
  • Cart creation and updates.
  • Checkout and payment handling.
  • Order tracking and support workflows.

UCP is designed with scale and interoperability in mind. It assumes users will encounter agentic shopping experiences in many places, not just within a single assistant, and that merchants will want to participate without locking themselves into a single AI platform.
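A toy sketch of that broader scope might look like the following, with one session object spanning the whole journey. The class and method names are assumptions for illustration, not UCP’s actual schema:

```python
# Illustrative sketch of UCP's broader scope: one common interface covering
# the full shopping journey, not just checkout. Names are invented.

class CommerceSession:
    def __init__(self, catalog):
        self.catalog = catalog
        self.cart = []
        self.orders = {}

    def discover(self, max_price):            # discovery and comparison
        return [p for p in self.catalog if p["price"] <= max_price]

    def add_to_cart(self, product):           # cart creation and updates
        self.cart.append(product)

    def checkout(self):                       # checkout and payment handling
        order_id = f"order-{len(self.orders) + 1}"
        self.orders[order_id] = {"items": list(self.cart), "status": "shipped"}
        self.cart.clear()
        return order_id

    def track(self, order_id):                # order tracking and support
        return self.orders[order_id]["status"]

session = CommerceSession([{"name": "trail shoe", "price": 120.0}])
choice = session.discover(max_price=150.0)[0]
session.add_to_cart(choice)
order_id = session.checkout()
print(session.track(order_id))  # shipped
```

The contrast with the ACP sketch is the surface area: one protocol optimized for the final action, the other for every stage an agent might touch.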

It’s tempting to frame ACP and UCP as competing solutions. In practice, they address different moments of the same user journey.

ACP is typically strongest when intent is explicit and the user wants something done now. UCP is generally strongest when intent is still forming and discovery, comparison, and context matter.

So what’s the agentic web? Is it an army of autonomous bots acting on past preferences to shape future needs? Is it the web as we know it, with fewer steps driven by consent-based signals? Or is it something else entirely?

The frustrating answer is that the agentic web is still being defined by human behavior, so there’s no clear answer yet. However, we have the power to determine what form the agentic web takes. To better understand how to participate, we now move to how the agentic web can be used, along with the pros and cons.

Dig deeper: The Great Decoupling of search and the birth of the agentic web

How can the agentic web be used? 

Working from the common theme across all definitions – autonomous action – we can move to applications.

Elmer Boutin has written a thoughtful technical view on how schema will impact agentic web compatibility. Benjamin Wenner has explored how PPC management might evolve in a fully agentic web. Both are worth reading.

Here, I want to focus on consumer-facing applications of the agentic web and how to think about them in relation to the tasks you already perform today.

Here are five applications of the agentic web that are live today or in active development.

1. Intent-driven commerce  

A user states a goal, such as “Find me the best running shoes under $150,” and an agent handles discovery, comparison, and checkout without requiring the user to manually browse multiple sites. 

How it works 

Rather than returning a list of links, the agent interprets user intent, including budget, category, and preferences. 

It pulls structured product information from participating merchants, applies reasoning logic to compare options, and moves toward checkout only after explicit user confirmation. 

The agent operates on approved product data and defined rules, with clear handoffs that keep the user in control. 
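The flow described above can be sketched roughly as follows, using invented product data. Note the explicit confirmation gate before any checkout step:

```python
# Minimal sketch of intent-driven commerce with hypothetical data: interpret
# a stated goal, compare structured product data, and pause for explicit
# user confirmation before acting.

PRODUCTS = [
    {"name": "Roadrunner 2", "price": 140, "rating": 4.6},
    {"name": "Trail King", "price": 180, "rating": 4.8},
    {"name": "Sprint Lite", "price": 95, "rating": 4.2},
]

def recommend(products, budget):
    """Filter by the stated budget, then rank by rating (the reasoning step)."""
    in_budget = [p for p in products if p["price"] <= budget]
    return sorted(in_budget, key=lambda p: p["rating"], reverse=True)

def act(products, budget, user_confirmed):
    """Move toward checkout only after explicit user confirmation."""
    ranked = recommend(products, budget)
    if not ranked:
        return "no options within budget"
    if not user_confirmed:
        return f"awaiting confirmation for: {ranked[0]['name']}"
    return f"checking out: {ranked[0]['name']}"

print(act(PRODUCTS, budget=150, user_confirmed=False))
# awaiting confirmation for: Roadrunner 2
```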

Implications for consumers and professionals 

Reducing decision fatigue without removing choice is a clear benefit for consumers. For brands, this turns discovery into high-intent engagement rather than anonymous clicks with unclear attribution. 

Strategically, it shifts competition away from who shouts the loudest toward who provides the clearest and most trusted product signals to agents. These agents can act as trusted guides, offering consumers third-party verification that a merchant is as reliable as it claims to be.

2. Brand-owned AI assistants 

A brand deploys its own AI agent to answer questions, recommend products, and support customers using the brand’s data, tone, and business rules.

How it works 

The agent uses first-party information, such as product catalogs, policies, and FAQs. 

Guardrails define what it can say or do, preventing inferences that could lead to hallucinations. 

Responses are generated by retrieving and reasoning over approved context within the prompt.

Implications for consumers and professionals 

Customers get faster and more consistent responses. Brands retain voice, accountability, and ownership of the experience. 

Strategically, this allows companies to participate in the agentic web without ceding their identity to a platform or intermediary. It also enables participation in global commerce without relying on native speakers to verify language.

3. Autonomous task completion 

Users delegate outcomes rather than steps, such as “Prepare a weekly performance summary” or “Reorder inventory when stock is low.” 

How it works 

The agent breaks the goal into subtasks, determines which systems or tools are needed, and executes actions sequentially. It pauses when permissions or human approvals are required. 

These can be provided in bulk upfront or step by step. How this works ultimately depends on how the agent is built. 
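A minimal sketch of that decompose-and-pause loop, with invented subtasks, might look like this:

```python
# Sketch of outcome delegation under assumed rules: the agent walks an
# ordered subtask list and pauses at any step flagged as needing approval.

SUBTASKS = [
    {"name": "pull last week's metrics", "needs_approval": False},
    {"name": "draft summary email", "needs_approval": False},
    {"name": "send email to the team", "needs_approval": True},
]

def run(subtasks, approvals):
    """Execute steps in order; stop at the first unapproved gated step."""
    log = []
    for task in subtasks:
        if task["needs_approval"] and task["name"] not in approvals:
            log.append(f"paused: awaiting approval for '{task['name']}'")
            break
        log.append(f"done: {task['name']}")
    return log

for line in run(SUBTASKS, approvals=set()):
    print(line)
```

Granting approvals upfront (passing a pre-filled set) versus step by step is exactly the build-time choice the paragraph above describes.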

Implications for consumers and marketers 

We’re used to treating AI like interns, relying on micromanaged task lists and detailed prompts. As agents become more sophisticated, it becomes possible to treat them more like senior employees, oriented around outcomes and process improvement. 

That makes it reasonable to ask an agent to identify action items in email or send templates in your voice when active engagement isn’t required. Human choice comes down to how much you delegate to agents versus how much you ask them to assist.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026

4. Agent-to-agent coordination and negotiation 

Agents communicate with other agents on behalf of people or organizations, such as a buyer agent comparing offers with multiple seller agents. 

How it works 

Agents exchange structured information, including pricing, availability, and constraints. 

They apply predefined rules, such as budgets or policies, and surface recommended outcomes for human approval. 
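Here’s a hypothetical buyer-agent sketch of that exchange. The offers, budget, and delivery policy are invented; the point is that predefined rules filter structured offers and the survivor is surfaced for human approval:

```python
# Hypothetical buyer-agent sketch: seller agents return structured offers,
# the buyer applies predefined rules, and a recommendation is surfaced for
# human approval rather than executed automatically.

OFFERS = [  # structured responses from three seller agents
    {"seller": "A", "price": 980, "delivery_days": 10},
    {"seller": "B", "price": 1040, "delivery_days": 3},
    {"seller": "C", "price": 890, "delivery_days": 14},
]

BUDGET = 1000
MAX_DELIVERY_DAYS = 12

def negotiate(offers):
    """Filter by policy, then recommend the cheapest compliant offer."""
    compliant = [
        o for o in offers
        if o["price"] <= BUDGET and o["delivery_days"] <= MAX_DELIVERY_DAYS
    ]
    if not compliant:
        return None
    best = min(compliant, key=lambda o: o["price"])
    return {"recommendation": best, "status": "awaiting human approval"}

result = negotiate(OFFERS)
print(result["recommendation"]["seller"])  # A
```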

Implications for consumers and marketers 

Consumers may see faster and more transparent comparisons without needing to manually negotiate or cross-check options. 

For professionals, this introduces new efficiencies in areas like procurement, media buying, or logistics, where structured negotiation can occur at scale while humans retain oversight.

5. Continuous optimization over time 

Agents don’t just act once. They improve as they observe outcomes.

How it works 

After each action, the agent evaluates what happened, such as engagement, conversion, or satisfaction. It updates its internal weighting and applies those learnings to future decisions.
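A toy version of that feedback loop might look like the following. The feature names, outcomes, and learning rate are assumptions for illustration:

```python
# Toy sketch of the update loop: after each action, the observed outcome
# nudges an internal preference weight, so later decisions lean toward
# what actually worked. Values here are invented.

weights = {"free_shipping": 0.5, "fast_delivery": 0.5}
LEARNING_RATE = 0.1

def update(feature: str, outcome: float) -> None:
    """Move the weight toward the observed outcome (1.0 = success, 0.0 = miss)."""
    weights[feature] += LEARNING_RATE * (outcome - weights[feature])

update("fast_delivery", 1.0)  # user converted after a fast-delivery pick
update("free_shipping", 0.0)  # user abandoned a free-shipping pick

preferred = max(weights, key=weights.get)
print(preferred)  # fast_delivery
```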

Why people should care 

Consumers experience increasingly relevant interactions over time without repeatedly restating preferences. 

Professionals gain systems that improve continuously, shifting optimization from one-off efforts to long-term, adaptive performance. 

What are the pros and cons of the agentic web? 

Life is a series of choices, and leaning into or away from the agentic web comes with clear pros and cons.

Pros of leaning into the agentic web 

The strongest argument for leaning into the agentic web is behavioral. People have already been trained to prioritize convenience over process. 

Saved payment methods, password managers, autofill, and one-click checkout normalized the idea that software can complete tasks on your behalf once trust is established.

Agentic experiences follow the same trajectory. Rather than requiring users to manually navigate systems, they interpret intent and reduce the number of steps needed to reach an outcome. 

Cons of leaning into the agentic web 

Many brands will need to rethink how their content, data, and experiences are structured so they can be interpreted by automated systems and humans. What works for visual scanning or brand storytelling doesn’t always map cleanly to machine-readable signals.

There’s also a legitimate risk of overoptimization. Designing primarily for AI ingestion can unintentionally degrade human usability or accessibility if not handled carefully. 

Dig deeper: The enterprise blueprint for winning visibility in AI search

Pros of leaning away from the agentic web 

Choosing to lean away from the agentic web can offer clarity of stance. There’s a visible segment of users skeptical of AI-mediated experiences, whether due to privacy concerns, automation fatigue, or a loss of human control. 

Aligning with that perspective can strengthen trust with audiences who value deliberate, hands-on interaction.

Cons of leaning away from the agentic web 

If agentic interfaces become a primary way people discover information, compare options, or complete tasks, opting out entirely may limit visibility or participation. 

The longer an organization waits to adapt, the more expensive and disruptive that transition can become.

What’s notable across the ecosystem is that agentic systems are increasingly designed to sit on top of existing infrastructure rather than replace it outright. 

Avoiding engagement with these patterns may not be sustainable over time. If interaction norms shift and systems aren’t prepared, the combination of technical debt and lost opportunity may be harder to overcome later.

Where the agentic web stands today

The agentic web is still taking form, shaped largely by how people choose to use it. Some organizations are already applying agentic systems to reduce friction and improve outcomes. Others are waiting for stronger trust signals and clearer consent models.

Either approach is valid. What matters is understanding how agentic systems work, where they add value, and how emerging protocols are shaping participation. That understanding is the foundation for deciding when, where, and how to engage with the agentic web.

7 digital PR secrets behind strong SEO performance

Digital PR is about to matter more than ever. Not because it’s fashionable, or because agencies have rebranded link building with a shinier label, but because the mechanics of search and discovery are changing. 

Brand mentions, earned media, and the wider PR ecosystem are now shaping how both search engines and large language models understand brands. That shift has serious implications for how SEO professionals should think about visibility, authority, and revenue.

At the same time, informational search traffic is shrinking. Fewer people are clicking through long blog posts written to target top-of-funnel keywords. 

The commercial value in search is consolidating around high-intent queries and the pages that serve them: product pages, category pages, and service pages. Digital PR sits right at the intersection of these changes.

What follows are seven practical, experience-led secrets that explain how digital PR actually works when it’s done well, and why it’s becoming one of the most important tools in SEOs’ toolkit.

Secret 1: Digital PR can be a direct sales activation channel

Digital PR is usually described as a link tactic, a brand play or, more recently, as a way to influence generative search and AI outputs.

All of that’s true. What’s often overlooked is that digital PR can also drive revenue directly.

When a brand appears in a relevant media publication, it’s effectively placing itself in front of buyers while they are already consuming related information.

This is not passive awareness. It’s targeted exposure during a moment of consideration.

Platforms like Google are exceptionally good at understanding user intent, interests and recency. Anyone who has looked at their Discover feed after researching a product category has seen this in action. 

Digital PR taps into the same behavioral reality. You are not broadcasting randomly. You are appearing where buyers already are.

Two things tend to happen when this is executed well.

  • If your site already ranks for a range of relevant queries, your brand gains additional recognition in nontransactional contexts. Readers see your name attached to a credible story or insight. That familiarity matters.
  • More importantly, that exposure drives brand search and direct clicks. Some readers click straight through from the article. Others search for your brand shortly after. In both cases, they enter your marketing funnel with a level of trust that generic search traffic rarely has.

This effect is driven by basic behavioral principles such as recency and familiarity. While it’s difficult to attribute cleanly in analytics, the commercial impact is very real. 

We see this most clearly in direct-to-consumer, finance, and health markets, where purchase cycles are active and intent is high.

Digital PR is not just about supporting sales. In the right conditions, it’s part of the sales engine.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

Secret 2: The mere exposure effect is one of digital PR’s biggest advantages

One of the most consistent patterns in successful digital PR campaigns is repetition.

When a brand appears again and again in relevant media coverage, tied to the same themes, categories, or areas of expertise, it builds familiarity. 

That familiarity turns into trust, and trust turns into preference. This is known as the mere exposure effect, and it’s fundamental to how brands grow.

In practice, this often happens through syndicated coverage. A strong story picked up by regional or vertical publications can lead to dozens of mentions across different outlets. 

Historically, many SEOs undervalued this type of coverage because the links were not always unique or powerful on their own.

That misses the point.

What this repetition creates is a dense web of co-occurrences. Your brand name repeatedly appears alongside specific topics, products, or problems. This influences how people perceive you, but it also influences how machines understand you.

For search engines and large language models alike, frequency and consistency of association matter. 

An always-on digital PR approach, rather than sporadic big hits, is one of the fastest ways to increase both human and algorithmic familiarity with a brand.

Secret 3: Big campaigns come with big risk, so diversification matters

Large, creative digital PR campaigns are attractive. They are impressive, they generate internal excitement, and they often win industry praise. The problem is that they also concentrate risk.

A single large campaign can succeed spectacularly, or it can fail quietly. From an SEO perspective, many widely celebrated campaigns underperform because they do not generate the links or mentions that actually move rankings.

This happens for a simple reason. What marketers like is not always what journalists need.

Journalists are under pressure to publish quickly, attract attention, and stay relevant to their audience. 

If a campaign is clever but difficult to translate into a story, it will struggle. If all your budget’s tied up in one idea, you have no fallback.

A diversified digital PR strategy spreads investment across multiple smaller campaigns, reactive opportunities, and steady background activity. 

This increases the likelihood of consistent coverage and reduces dependence on any single idea working perfectly.

In digital PR, reliability often beats brilliance.

Dig deeper: How to build search visibility before demand exists

Secret 4: The journalist is the customer

One of the most common mistakes in digital PR is forgetting who the gatekeeper is.

From a brand’s perspective, the goal might be links, mentions, or authority. 

From a journalist’s perspective, the goal is to write a story that interests readers and performs well. These goals overlap, but they are not the same.

The journalist decides whether your pitch lives or dies. In that sense, they are the customer.

Effective digital PR starts by understanding what makes a journalist’s job easier. 

That means providing clear angles, credible data, timely insights, and fast responses. Think about relevance before thinking about links.

When you help journalists do their job well, they reward you with exposure. 

That exposure carries weight in search engines and in the training data that informs AI systems. The exchange is simple: value for value.

Treat journalists as partners, not as distribution channels.

Secret 5: Product and category page links are where SEO value is created

Not all links are equal.

From an SEO standpoint, links to product, category, and core service pages are often far more valuable than links to blog content. Unfortunately, they are also the hardest links to acquire through traditional outreach.

This is where digital PR excels.

Because PR coverage is contextual and editorial, it allows links to be placed naturally within discussions of products, services, or markets. When done correctly, this directs authority to the pages that actually generate revenue.

As informational content becomes less central to organic traffic growth, this matters even more.

Ranking improvements on high-intent pages can have a disproportionate commercial impact.

A relatively small number of high-quality, relevant links can outperform a much larger volume of generic links pointed at top-of-funnel content.

Digital PR should be planned with these target pages in mind from the outset.

Dig deeper: How to make ecommerce product pages work in an AI-first world

Secret 6: Entity lifting is now a core outcome of digital PR

Search engines have long made it clear that context matters. The text surrounding a link, and the way a brand is described, help define what that brand represents.

This has become even more important with the rise of large language models. These systems process information in chunks, extracting meaning from surrounding text rather than relying solely on links.

When your brand is mentioned repeatedly in connection with specific topics, products, or expertise, it strengthens your position as an entity in that space. This is what’s often referred to as entity lifting.

The effect goes beyond individual pages. Brands see ranking improvements for terms and categories that were not directly targeted, simply because their overall authority has increased. 

At the same time, AI systems are more likely to reference and summarize brands that are consistently described as relevant sources.

Digital PR is one of the most scalable ways to build this kind of contextual understanding around a brand.
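One crude way to picture the signal digital PR builds is a simple co-occurrence count: how often a brand appears alongside its target topic across coverage. The brand and snippets below are invented:

```python
# Rough sketch of why repetition matters for entity association: counting
# how often a (hypothetical) brand co-occurs with a topic across coverage.

SNIPPETS = [
    "Acme Shoes tops our list of sustainable running footwear.",
    "For sustainable running gear, experts often cite Acme Shoes.",
    "Acme Shoes expands its trail range.",
]

def cooccurrence(brand: str, topic_terms: set, snippets) -> int:
    """Count snippets mentioning the brand alongside any topic term."""
    count = 0
    for s in snippets:
        words = set(s.lower().replace(",", "").replace(".", "").split())
        if brand.lower() in s.lower() and words & topic_terms:
            count += 1
    return count

print(cooccurrence("Acme Shoes", {"sustainable", "running"}, SNIPPETS))  # 2
```

Real systems are far more sophisticated, but the intuition holds: dense, consistent association is what both rankers and language models can latch onto.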

Secret 7: Authority comes from relevant sources and relevant sections

Former Google engineer Jun Wu discusses this in his book “The Beauty of Mathematics in Computer Science,” explaining that authority emerges from being recognized as a source within specific informational hubs. 

In practical terms, this means that where you are mentioned matters as much as how big the site is.

A link or mention from a highly relevant section of a large publication can be more valuable than a generic mention on the homepage. For example, a targeted subfolder on a major media site can carry strong authority, even if the domain as a whole covers many subjects.

Effective digital PR focuses on two things: 

  • Publications that are closely aligned with your industry and sections.
  • Subfolders that are tightly connected to the topic you want to be known for.

This is how authority is built in a way that search engines and AI systems both recognize.

Dig deeper: The new SEO imperative: Building your brand

Where digital PR now fits in SEO

Digital PR is no longer a supporting act to SEO. It’s becoming central to how brands are discovered, understood, and trusted.

As informational traffic declines and high-intent competition intensifies, the brands that win will be those that combine relevance, repetition, and authority across earned media. 

Digital PR, done properly, delivers all three.

Google: 75% of crawling issues come from two common URL mistakes

Google discussed its 2025 year-end report on crawling and indexing challenges for Google Search. The biggest issues were faceted navigation and action parameters, which accounted for about 75% of the problems, according to Google’s Gary Illyes. He shared this on the latest Search Off the Record podcast, published this morning.

What is the issue. Crawling issues can slow your site to a crawl, overload your server, and make your website unusable or inaccessible. If a bot gets stuck in an infinite crawling loop, recovery can take time.

  • “Once it discovers a set of URLs, it cannot make a decision about whether that URL space is good or not unless it crawled a large chunk of that URL space,” Illyes said. By then, it’s too late and your site has ground to a halt.

The biggest crawling challenges. Based on the report, these are the main issues Google sees:

  • 50% come from faceted navigation. This is common on ecommerce sites, where endless filters for size, color, price, and similar options create near-infinite URL combinations.
  • 25% come from action parameters. These are URL parameters that trigger actions instead of meaningfully changing page content.
  • 10% come from irrelevant parameters. This includes session IDs, UTM tags, and other tracking parameters added to URLs.
  • 5% come from plugins or widgets. Some plugins and widgets generate problematic URLs that confuse crawlers.
  • 2% come from other “weird stuff.” This catch-all category includes issues such as double-encoded URLs and related edge cases.
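To see why faceted navigation dominates the list, consider how quickly filter options multiply into distinct URLs. The facet counts below are invented, but the math is the point:

```python
# Illustration of the faceted-navigation trap: a handful of filters
# multiplies into a huge crawlable URL space. Facet counts are invented.

FACETS = {
    "size": 12,        # number of size options
    "color": 10,
    "price_band": 8,
    "brand": 25,
    "sort": 4,
}

combinations = 1
for options in FACETS.values():
    combinations *= options + 1  # +1 for the "filter not applied" state

print(f"{combinations:,} crawlable URL combinations from five filters")
# 167,310 crawlable URL combinations from five filters
```

A bot that has to sample a large chunk of that space before judging it is exactly the failure mode Illyes describes.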

Why we care. A clean URL structure without bot traps is essential to keep your server healthy, ensure fast page loads, and prevent search engines from getting confused about your canonical URLs.

The episode. Crawling Challenges: What the 2025 Year-End Report Tells Us.

Microsoft rolls out multi-turn search in Bing

Microsoft today rolled out multi-turn search globally in Bing. As you scroll down the search results page, a Copilot search box now dynamically appears at the bottom.

About multi-turn search. This type of search experience lets a user continue the conversation from the Bing search results page. Instead of starting over, the searcher types a follow-up question into the Copilot search box at the bottom of the results, allowing the search to build on the previous query.

What Microsoft said. Jordi Ribas, CVP, Head of Search at Microsoft, posted this news on X:

  • “After shipping in the US last year, multi-turn search in Bing is now available worldwide.
  • “Bing users don’t need to scroll up to do the next query, and the next turn will keep context when appropriate. We have seen gains in engagement and sessions per user in our online metrics, which reflect the positive user value of this approach.”

Why we care. Search engines like Google and Bing are pushing harder to move users into their AI experiences. Google is blending AI Overviews more deeply into AI Mode, even as many publishers object to how it handles their content. Bing has now followed suit, fully rolling out the Copilot search box at the bottom of search results after several months of testing.

Why most SEO failures are organizational, not technical

I’ve spent over 20 years in companies where SEO sat in different corners of the organization – sometimes as a full-time role, other times as a consultant called in to “find what’s wrong.” Across those roles, the same pattern kept showing up.

The technical fix was rarely what unlocked performance. It revealed symptoms, but it almost never explained why progress stalled.

No governance

The real constraints showed up earlier, long before anyone read my weekly SEO reports. They lived in reporting lines, decision rights, hiring choices, and in what teams were allowed to change without asking permission. 

When SEO struggled, it was usually because nobody clearly owned the CMS templates, priorities conflicted across departments, or changes were made without anyone considering how they affected discoverability.

I did not have a word for the core problem at the time, but now I do – it’s governance, usually manifested by its absence.

Two workplaces in my career had the conditions that allowed SEO to work as intended. Ownership was clear.

Release pathways were predictable. Leaders understood that visibility was something you managed deliberately, not something you reacted to when traffic dipped.

Everywhere else, metadata and schema were not the limiting factor. Organizational behavior was.

Dig deeper: How to build an SEO-forward culture in enterprise organizations

Beware of drift

Once sales pressures dominate each quarter, even technically strong sites undergo small, reasonable changes:

  • Navigation renamed by a new UX hire.
  • Wording adjusted by a new hire on the content team.
  • Templates adjusted for a marketing campaign.
  • Titles “cleaned up” by someone outside the SEO loop.

None of these changes looks dangerous in isolation – provided you know about them before they occur.

Over time, they add up. Performance slides, and nobody can point to a single release or decision where things went wrong.

This is the part of SEO most industry commentary skips. Technical fixes are tangible and teachable. Organizational friction is not. Yet that friction is where SEO outcomes are decided, usually months before any visible decline.

SEO loses power when it lives in the wrong place

I’ve seen this drift hurt rankings, with SEO taking the blame. In one workplace, leadership brought in an agency to “fix” the problem, only for it to confirm what I’d already found: a lack of governance caused the decline.

Where SEO sits on the org chart determines whether you see decisions early or discover them after launch. It dictates whether changes ship in weeks or sit in the backlog for quarters.

I have worked with SEO embedded under marketing, product, IT, and broader omnichannel teams. Each placement created a different set of constraints.

When SEO sits too low, decisions that reshape visibility ship first and get reviewed later — if they are reviewed at all.

  • Engineering adjusted components to support a new security feature. In one workplace, a new firewall meant to stop scraping also blocked our own SEO crawling tools.
  • Product reorganized navigation to “simplify” the user journey. No one asked SEO how it would affect internal PageRank.
  • Marketing “refreshed” content to match a campaign. Each change shifted page purpose, internal linking, and consistency — the exact signals search engines and AI systems use to understand what a site is about.

Dig deeper: SEO stakeholders: Align teams and prove ROI like a pro

Positioning the SEO function

Without a seat at the right table, SEO becomes a cleanup function.

When one operational unit owns SEO, the work starts to reflect that unit’s incentives.

  • Under marketing, it becomes campaign-driven and short-term.
  • Under IT, it competes with infrastructure work and release stability.
  • Under product, it gets squeezed into roadmaps that prioritize features over discoverability.

The healthiest performance I’ve seen came from environments where SEO sat close enough to leadership to see decisions early, yet broad enough to coordinate with content, engineering, analytics, UX, and legal.

In one case, I was a high-priced consultant, and every recommendation was implemented. I haven’t repeated that experience since, but it made one thing clear: VP-level endorsement was critical. That client doubled organic traffic in eight months and tripled it over three years.

Unfortunately, the in-house SEO team is just another team that might not get the chance to excel. Placement is not everything, but it is the difference between influencing the decision and fixing the outcome.

Hiring mistakes

The second pattern that keeps showing up is hiring – and it surfaces long before any technical review.

Many SEO programs fail because organizations staff strategically important roles for execution, when what they really need is judgment and influence. This isn’t a talent shortage. It’s a screening problem.

The SEO manager often wears multiple hats, with SEO as a minor one. When they don’t understand SEO requirements, they become a liability, and the C-suite rarely sees it.

Across many engagements, I watched seasoned professionals passed over for younger candidates who interviewed well, knew the tool names, and sounded confident.

HR teams defaulted to “team fit” because it was easier to assess than a candidate’s ability to handle ambiguity, challenge bad decisions, or influence work across departments.

SEO excellence depends on lived experience. Not years on a résumé, but having seen the failure modes up close:

  • Migrations that wiped out templates.
  • Restructures that deleted category pages.
  • “Small” navigation changes that collapsed internal linking.

Those experiences build judgment. Judgment is what prevents repeat mistakes. Often, that expertise is hard to put in a résumé.

Without SEO domain literacy, hiring becomes theater. But we can’t blame HR, which has to hire people for all parts of the business. Its only expertise is HR.

Governance needs to step in.

One of the most reliable ways to improve recruitment outcomes is simple: let the SEO leader control the shortlist.

Fit still matters. Competence matters first. When the person accountable for results shapes the hiring funnel, the best candidates are chosen.

SEO roles require the ability to change decisions, not just diagnose problems. That skill does not show up in a résumé keyword scan.

Dig deeper: The top 5 strategic SEO mistakes enterprises make (and how to avoid them)

When priorities pull in different directions

Every department in a large organization has legitimate goals.

  • Product wants momentum.
  • Engineering wants predictable releases.
  • Marketing wants campaign impact.
  • Legal wants risk reduction.

Each team can justify its decisions – and SEO still absorbs the cost.

I have seen simple structural improvements delayed because engineering was focused on a different initiative.

At one workplace, I was asked how much sales would increase if my changes were implemented.

I have seen content refreshed for branding reasons that weakened high-converting pages. Each decision made sense locally. Collectively, they reshaped the site in ways nobody fully anticipated.

Today, we face an added risk: AI systems now evaluate content for synthesis. When content changes materially, an LLM may stop citing us as an authority on that topic.

Strong visibility governance can prevent that.

The organizations that struggled most weren’t the ones with conflict. They were the ones that failed to make trade-offs explicit.

What are we giving up in visibility to gain speed, consistency, or safety? When that question is never asked, SEO degrades quietly.

What improved outcomes was not a tool. It was governance: shared expectations and decision rights.

When teams understood how their work affected discoverability, alignment followed naturally. SEO stopped being the team that said “no” and became the function that clarified consequences.

International SEO improves when teams stop shipping locally good changes that are globally damaging. Local SEO improves when there is a single source of location truth.

Ownership gaps

Many SEO problems trace back to ownership gaps that only become visible once performance declines.

  • Who owns the CMS templates?
  • Who defines metadata standards?
  • Who maintains structured data? Who approves content changes?

When these questions have no clear answer, decisions stall or happen inconsistently. The site evolves through convenience rather than intent.

In contrast, the healthiest organizations I worked with shared one trait: clarity.

People knew which decisions they owned and which ones required coordination. They did not rely on committees or heavy documentation because escalation paths were already understood.

When ownership is clear, decisions move. When ownership is fragmented, even straightforward SEO work becomes difficult.

Dig deeper: How to win SEO allies and influence the brand guardians

Healthy environments for SEO to succeed

Across my career, the strongest results came from environments where SEO had:

  • Early involvement in upcoming changes.
  • Predictable collaboration with engineering.
  • Visibility into product goals.
  • Clear authority over content standards.
  • Stable templates and definitions.
  • A reliable escalation path when priorities conflicted.
  • Leaders who understood visibility as a long-term asset.

These organizations were not perfect. They were coherent.

People understood why consistency mattered. SEO was not a reactive service. It was part of the infrastructure.

What leaders can do now

If you lead SEO inside a complex organization, the most effective improvements come from small, deliberate shifts in how decisions get made:

  • Place SEO where it can see and influence decisions early.
  • Let SEO leaders – not HR – shape candidate shortlists.
  • Hire for judgment and influence, not presentation.
  • Create predictable access to product, engineering, content, analytics, and legal.
  • Stabilize page purpose and structural definitions.
  • Make the impact of changes visible before they ship.

These shifts do not require new software. They require decision clarity, discipline, and follow-through.

Visibility is an organizational outcome

SEO succeeds when an organization can make and enforce consistent decisions about how it presents itself. Technical work matters, but it can’t offset structures pulling in different directions.

The strongest SEO results I’ve seen came from teams that focused less on isolated optimizations and more on creating conditions where good decisions could survive change. That’s visibility governance.

When SEO performance falters, the most durable fixes usually start inside the organization.

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress

Google Ads API update cracks open Performance Max by channel


As part of the v23 Ads API launch, Performance Max campaigns can now be reported by channel, including Search, YouTube, Display, Discover, Gmail, Maps, and Search Partners. Previously, performance data was largely grouped into a single mixed category.

The change under the hood. Earlier API versions typically returned a MIXED value for the ad_network_type segment in Performance Max campaigns. With v23, those responses now break out into specific channel enums — a meaningful shift for reporting and optimization.

Why we care. Google Ads API v23 doesn’t just add features — it changes how advertisers understand Performance Max. The update introduces channel-level reporting, giving marketers long-requested visibility into where PMax ads actually run.

How advertisers can use it. Channel-level data is available at the campaign, asset group, and asset level, allowing teams to see how individual creatives perform across Google properties. When combined with v22 segments like ad_using_video and ad_using_product_data, advertisers can isolate results such as video performance on YouTube or Shopping ads on Search.

For developers. Upgrading to v23 will surface more detailed reporting than before. Reporting systems that relied on the legacy MIXED value will need to be updated to handle the new channel enums.
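For teams pulling this data programmatically, the migration mostly comes down to one aggregation step: handling the new channel values while still tolerating the legacy MIXED rows. The GAQL query and enum names below are illustrative assumptions (check the exact v23 field and enum names against the API reference); the MIXED-fallback pattern is the part legacy reporting code will need.

```python
# Sketch: aggregating Performance Max metrics by channel once v23-style
# ad_network_type values replace the legacy MIXED value.
# Query text and enum names are illustrative, not verified against v23.

from collections import defaultdict

PMAX_CHANNEL_QUERY = """
    SELECT
      campaign.name,
      segments.ad_network_type,
      metrics.impressions,
      metrics.conversions
    FROM campaign
    WHERE campaign.advertising_channel_type = 'PERFORMANCE_MAX'
      AND segments.date >= '2025-06-01'
"""

def aggregate_by_channel(rows):
    """Sum impressions per ad_network_type, folding the legacy MIXED
    value (pre-v23 responses) into an 'UNSEGMENTED' bucket."""
    totals = defaultdict(int)
    for row in rows:
        channel = row["ad_network_type"]
        if channel == "MIXED":  # pre-v23 responses lump channels together
            channel = "UNSEGMENTED"
        totals[channel] += row["impressions"]
    return dict(totals)

# Example rows, shaped like flattened API responses:
rows = [
    {"ad_network_type": "SEARCH", "impressions": 1200},
    {"ad_network_type": "YOUTUBE", "impressions": 800},
    {"ad_network_type": "MIXED", "impressions": 50},  # older date range
]
print(aggregate_by_channel(rows))
```

Because channel data only starts June 1, 2025, any report spanning earlier dates will mix both shapes, which is why the fallback bucket is worth keeping.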

What to watch:

  • Channel data is only available for dates starting June 1, 2025.
  • Asset group–level channel reporting remains API-only and won’t appear in the Google Ads UI.

Bottom line. The latest Google Ads API release quietly delivers one of the biggest Performance Max updates yet — turning a black-box campaign type into something advertisers can finally analyze by channel.

How to build a modern Google Ads targeting strategy like a pro

Search marketing is still as powerful as ever. Google recently surpassed $100 billion in ad revenue in a single quarter, with more than half coming from search. But search alone can no longer deliver the same results most businesses expect.

As Google Ads Coach Jyll Saskin Gales showed at SMX Next, real performance now comes from going beyond traditional search and using it to strengthen a broader PPC strategy.

The challenge with traditional search marketing

As search marketers, we’re great at reaching people who are actively searching for what we sell. But we often miss people who fit our ideal audience and aren’t searching yet.

The real opportunity sits at the intersection of intent and audience fit.

Take the search [vacation packages]. That query could come from a family with young kids, a honeymooning couple, or a group of retirees. The keyword is the same, but each audience needs a different message and a different offer.

Understanding targeting capabilities in Google Ads

There are two main types of targeting:

  • Content targeting shows ads in specific places.
  • Audience targeting shows ads to specific types of people.

For example, targeting [flights to Paris] is content targeting. Targeting people who are “in-market for trips to Paris” is audience targeting. Google builds in-market audiences by analyzing behavior across multiple signals, including searches, browsing activity, and location.

The three types of content targeting

  • Keyword targeting: Reach people when they search on Google, including through dynamic ad groups and Performance Max.
  • Topic targeting: Show ads alongside content related to specific topics in display and video campaigns.
  • Placement targeting: Put ads on specific websites, apps, YouTube channels, or videos where your ideal customers already spend time.

The four types of audience targeting

  • Google’s data: Prebuilt segments include detailed demographics (such as parents of toddlers vs. teens), affinity segments (interests like vegetarianism), in-market segments (people actively researching purchases), and life events (graduating or retiring). Any advertiser can use these across most campaign types.
  • Your data: Target website visitors, app users, people who engaged with your Google content (YouTube viewers or search clickers), and customer lists through Customer Match. Note that remarketing is restricted for sensitive interest categories.
  • Custom segments: Turn content targeting into audience targeting by building segments based on what people search for, their interests, and the websites or apps they use. These go by different names depending on campaign type—“custom segments” in most campaigns and “custom search terms” in video campaigns.
  • Automated targeting: This includes optimized targeting (finding people similar to your converters), audience expansion in video campaigns, audience signals and search themes in Performance Max, and lookalike segments that model new users from your seed lists.

Building your targeting strategy

To build a modern targeting strategy, you need to answer two questions:

  • How can I sell my offer with Google Ads?
  • How can I reach a specific kind of person with Google Ads?

For example, to reach Google Ads practitioners for lead gen software, you could build custom segments that target people who use the Google Ads app, visit industry sites like searchengineland.com, or search for Google Ads–specific terms such as “Performance Max” or “Smart Bidding.”

You can also layer in content targeting, like YouTube placements on industry educator channels and topic targeting around search marketing.

Strategies for sensitive interest categories

If you work in a restricted category such as legal or healthcare and can’t use custom segments or remarketing, use non-linear targeting. Ignore the offer and focus on the audience. Choose any Google data audience with potential overlap, even if it’s imperfect, and let your creative do the filtering.

Use industry-specific jargon, abbreviations, and imagery that only your target audience will recognize and value. Everyone else will scroll past.

Remember: High CPCs aren’t the enemy

Low-quality traffic is the real problem. You’re better off paying $10 per click with a 10% conversion rate than $1 per click with a 0.02% conversion rate.

When evaluating targeting strategies, focus on conversion rate and cost per acquisition, not just cost per click.
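The arithmetic behind that comparison is worth making explicit: effective cost per acquisition is cost per click divided by conversion rate. A quick sketch:

```python
# Effective cost per acquisition (CPA) = cost per click / conversion rate.
def cpa(cost_per_click, conversion_rate):
    return cost_per_click / conversion_rate

# $10 clicks converting at 10% -> $100 per acquisition
print(round(cpa(10.00, 0.10), 2))
# $1 clicks converting at 0.02% -> $5,000 per acquisition
print(round(cpa(1.00, 0.0002), 2))
```

The "cheap" $1 clicks cost 50 times more per customer, which is the whole argument for judging targeting on CPA rather than CPC.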

Search alone can’t deliver the results you’re used to

By expanding beyond traditional search keywords and using content and audience targeting, you can reach the right people and keep driving strong results.

Watch: How to build a modern targeting strategy like a pro + Live Q&A


Learn a practical PPC framework that predicts intent, reaches beyond search, and connects the right audiences to the right content.

OpenAI quietly lays groundwork for ads in ChatGPT


People inspecting ChatGPT responses are spotting references to ads in the page source. One line reads: “InReply to user query using the following additional context of ads shown to the user.” The reference appears even when no ad is actually displayed.

Driving the news. Digital marketer Glenn Gabe first flagged the issue on X after noticing the ad-related language in ChatGPT’s source code. Others have since replicated it while testing commercial queries like auto insurance.

Why we care. Ads in ChatGPT have been talked about for weeks. The code spotted here signals that ChatGPT ads are moving from concept to near-launch, creating a new, high-intent advertising channel. The presence of ad logic in the system suggests targeting and eligibility are already being tested, favoring early advertisers.

With limited inventory and ads likely woven into conversational responses rather than shown as banners, this could become premium, high-impact real estate that directly competes with organic answers.

Between the lines. The ads aren’t visible, but the logic appears to be live. That suggests OpenAI may already be testing ad eligibility, suppression rules for paid tiers, or internal triggers ahead of a broader rollout.

Context. OpenAI confirmed in January that ads are coming to ChatGPT for some users. The company said ads would be sold on an impression basis, and early indications suggest they won’t be cheap.

Bottom line. ChatGPT may not be showing ads yet — but the infrastructure is already in place.

Dig deeper. Glenn Gabe spots code suggesting ChatGPT ads are imminent.

Human experience optimization: Why experience now shapes search visibility


SEO has historically been an exercise in reverse-engineering algorithms. Keywords, links, technical compliance, repeat.

But that model is being reimagined. 

Today, visibility is earned through trust, usefulness, and experience, not just relevance signals or crawlability.

Search engines no longer evaluate pages in isolation. They observe how people interact with brands over time.

That shift has given rise to human experience optimization (HXO): the practice of optimizing how humans experience, trust, and act on your brand across search, content, product, and conversion touchpoints.

Rather than replacing SEO, HXO expands its scope to reflect how search now evaluates performance. Experience, engagement, and credibility have become difficult to separate from visibility itself.

Below, we’ll look at how HXO shows up in modern search, why it matters now, and how it reshapes the boundaries between SEO, UX, and conversion.

Why HXO matters now

Modern search engines reward outcomes, not tactics.

Ranking signals increasingly reflect what happens after the click, aligning with Google’s emphasis on user satisfaction over isolated page signals.

In practice, that means signals tied to questions like:

  • Do users engage or bounce?
  • Do they return?
  • Do they recognize the brand later?
  • Do they trust the information enough to act on it?

Visibility today is influenced by three overlapping forces:

  • User behavior signals: Engagement, satisfaction, repeat visits, and downstream actions all indicate whether content actually delivers value.
  • Brand signals: Recognition, authority, and trust – built over time, across channels – shape how search engines interpret credibility.
  • Content authenticity and experience: Pages that feel generic, automated, or disconnected from real expertise increasingly struggle to perform.

HXO emerges as a response to two compounding pressures:

  • AI-generated content saturation, which has made “good enough” content abundant and undifferentiated.
  • Declining marginal returns from traditional SEO tactics, especially when they aren’t supported by strong experience and brand coherence.

In short, optimization that ignores human experience is no longer competitive.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

The convergence: SEO, UX, and CRO are no longer separate

For a long time, SEO, UX, and CRO operated as separate disciplines:

  • SEO focused on traffic acquisition.
  • UX focused on usability and design.
  • CRO focused on conversion efficiency.

But that separation no longer works. 

Traffic alone doesn’t mean much if users don’t engage. Engagement without a clear path to action limits impact. And conversion is difficult to scale when trust hasn’t been established.

HXO now acts as the unifying layer:

  • SEO still determines how people arrive.
  • UX shapes whether they understand what they have found.
  • CRO influences whether that understanding turns into action.

That convergence is increasingly visible in how search-driven experiences perform. 

Page experience affects both visibility and post-click behavior. Search intent informs page structure and UX decisions alongside keyword targeting. Content clarity and credibility influence whether users engage once or return through search again.

In this environment, optimization is less about securing a single click. It’s about supporting attention and trust over time.

E-E-A-T is a business system, not content guidelines

One of the most persistent misconceptions in search is that E-E-A-T – experience, expertise, authoritativeness, and trustworthiness – can just be “added” to content.

Add an author bio. Add citations. Add credentials.

Those elements do matter. They help provide context and communicate expertise. But treating E-E-A-T primarily as a set of small, on-page additions doesn’t fully capture how search systems evaluate expertise and trust.

In practice, E-E-A-T isn’t just about how one page is formatted. It’s a broader, more holistic view of how a business demonstrates credibility to users over time. That tends to be an output of: 

  • Real expertise embedded in products and services.
  • Transparent operations and clearly stated values.
  • A consistent brand voice with visible accountability.
  • Clear ownership over ideas, opinions, and outcomes.

Search engines aren’t evaluating content in isolation. They’re evaluating the context around it, too.

Per Google’s Search Quality Rater Guidelines, that includes: 

  • Who is responsible for creating the content and whether that responsibility is clearly disclosed.
  • The demonstrated experience and reputation of the creator or organization behind it.
  • Consistency in expertise and accuracy across related content on the site.
  • Evidence of ongoing trust, including transparency, content updates, and accountability for accuracy.

Viewed this way, E-E-A-T is reinforced through consistent systems and patterns, not isolated page-level changes.

First-hand experience signals are the new differentiator

Today’s search landscape is flooded with competent, well-structured content that meets a similar baseline of accuracy and readability. “Good” content is no longer a meaningful bar.

Because of that, first-hand experience is becoming an increasingly important content differentiator. That can look like:

  • Original data, testing, or research generated by the creator.
  • Lived experience paired with a clear point of view.
  • Named creators with reputational stakes in what they publish.
  • Insight that reflects direct involvement, not secondhand synthesis.

There’s a meaningful difference between:

  • Information aggregation (what anyone could compile).
  • Experience-based insight (what only operators, practitioners, and creators can provide).

For example, a guide to subscription pricing that summarizes common models may be factually sound. But a piece written by someone who’s priced, tested, and revised subscription tiers over time is more likely to surface tradeoffs, edge cases, and decision logic.

That’s something aggregation can’t replicate.

This is why we’re seeing creators and operators increasingly outperform faceless brands. Within the world of human experience optimization, the “human” part is key.

Dig deeper: 4 SEO tips to elevate the user experience

Helpful content is a brand problem, not an SEO problem

“Helpful content updates” are often discussed as if performance issues stem from technical gaps or tactical mistakes.

In practice, when content fails to be helpful, the underlying causes tend to sit elsewhere.

Common patterns include:

  • A brand that lacks clarity about what it stands for or who it serves.
  • A business that avoids taking clear positions or making decisions visible.
  • An experience that feels fragmented across pages, channels, or touchpoints.

In contrast, content that users consistently find helpful usually reflects deeper alignment. It tends to emerge from:

  • A clear understanding of audience needs and decision contexts.
  • Real-world problem solving informed by actual experience.
  • Consistent intent across messaging, products, and interactions.

SEO can improve discoverability and structure, but it can’t compensate for unclear positioning or disconnected experiences. When helpfulness is missing, the issue is rarely confined to the page itself.

That view lines up with how Google described its helpful content system, which looks at broader site-level patterns and long-term value rather than isolated pages or tactics.

Closing these gaps requires a broader view of how people experience, trust, and engage with a brand beyond any single page. HXO provides a framework for that shift.

How to start practicing human experience optimization

Human experience optimization doesn’t begin with keywords. It begins with people and the situations that lead them to search in the first place.

In practice, adopting HXO usually involves a few shifts in focus:

1. Move from keyword strategy to audience strategy

Keyword research remains useful, but it’s rarely sufficient on its own. 

Teams need a clearer understanding of motivations, anxieties, and decision contexts, not just what terms people type into a search bar.

2. Audit experience, not just pages

Page-level audits often miss the broader experience users actually encounter. A more useful lens looks at:

  • Trust signals and credibility cues.
  • Clarity of message and intent.
  • Friction in user journeys.
  • Consistency across touchpoints and channels.

3. Align teams around experience outcomes

HXO tends to surface gaps between functions that operate independently. Addressing those gaps requires coordination across:

  • Marketing.
  • Product.
  • Content.
  • Design.

The goal isn’t alignment for its own sake, but shared responsibility for how users experience the brand.

4. Measure what actually matters

Traditional metrics still have a place, but they don’t tell the full story. Teams practicing HXO often expand measurement to include:

  • Engagement quality rather than raw volume.
  • Brand recall and recognition.
  • Return users over time.
  • Conversions driven by confidence and trust rather than pressure.

Optimize for humans, earn the algorithms

HXO isn’t a tactic to deploy or a framework to layer on. It reflects a longer-term advantage rooted in how consistently a brand shows up for users.

In modern search, the brands that perform most reliably tend to share a few traits:

  • They’re grounded in real experience.
  • They’re consistently useful.
  • They demonstrate expertise through action, not just explanation.

As a result, search visibility can’t be engineered through isolated optimizations. It’s shaped by the cumulative experiences people have with a brand before, during, and after a search interaction.

Ads in ChatGPT: Why behavior matters more than targeting


Ads are now being tested in ChatGPT in the U.S., appearing for some users across different account types. For the first time, advertising is entering an AI answer environment – and that changes the rules for marketers.

We’ve used AI as part of ad creation or planning for years across Google, LinkedIn, and paid social. But placing ads inside an AI system that people trust to help them think, decide, and act is fundamentally different. This is not just another channel to plug into an existing media plan.

The biggest question is not targeting. It’s psychology. If advertisers simply replicate what works in search or social, performance will disappoint, and trust may suffer.

To succeed, brands need to understand how and why people use ChatGPT in the first place and what that means for attention, relevance, and the customer journey.

ChatGPT is a task environment, not a feed

People open ChatGPT to do something. That might be:

  • Solving a specific problem.
  • Refining a shortlist.
  • Planning a trip.
  • Writing something.
  • Making sense of a complex decision. 

This is very different from feed-based platforms, where people expect to scroll, be interrupted, and discover content passively.

In task-based environments like ChatGPT, behavior changes:

  • Goal shielding: Attention narrows to completing the task, filtering out anything that does not help progress.
  • Interruption aversion: Unexpected distractions feel more irritating when someone is focused.
  • Tunnel focus: Users prioritize clarity, speed, and momentum over exploration.

This is why clicks are likely to be harder to earn than many advertisers expect. If an ad does not help the user move forward with what they are trying to achieve, it will feel irrelevant, even if it is topically related.

Add to this the fact that trust in AI environments is still forming, and the tolerance for poor or interruptive advertising becomes even lower.

Dig deeper: OpenAI moves on ChatGPT ads with impression-based launch

When there are no search volumes, behavior becomes the strategy

For years, search volume has shaped how we plan.

Keywords told us what people wanted, how often they wanted it, and how competitive demand was. That logic underpinned both SEO and paid media strategy.

ChatGPT changes that.

People are not searching for keywords. They are outsourcing thinking. They describe situations, ask layered questions, and seek outcomes rather than information alone.

There is no query data to optimize against. Instead, success depends on understanding:

  • What job the user is trying to get done.
  • Which part of the journey they are choosing to outsource to AI.
  • What kind of help they need in that moment.

This is where behavioral insight replaces keyword demand as the strategic foundation.

From keyword intent to behavior mode targeting

Rather than planning around queries, advertisers need to plan around behavior modes: the mindset a user is in when they turn to ChatGPT.

A useful way to think about this is:

  • Explore mode: The user is shaping a perspective or seeking inspiration. Ads that work here help people start, offering ideas, options, or reframing the problem.
  • Reduce mode: The user is simplifying and narrowing choices. Effective ads reduce effort by clarifying differences and highlighting relevant trade-offs.
  • Confirm mode: The user is looking for reassurance. This is where trust matters most: proof, reviews, guarantees, and credible signals.
  • Act mode: The user wants to complete the task. Ads that remove friction perform best: clear pricing, availability, delivery, and next steps.

These modes closely mirror the human drivers we already recognize in search behavior: shaping perspective, informing, reassuring, and simplifying.

The difference is that ChatGPT compresses these moments into a single interface.

Dig deeper: What AI means for paid media, user behavior, and brand visibility

In ChatGPT, relevance is functional, not topical

A key shift advertisers need to internalize is that relevance in ChatGPT is not about being related. It is about being useful.

An ad can be perfectly aligned to a category and still fail if it does not help the user complete their task.

In a task environment, anything that creates extra work or pulls attention away from the goal feels like friction. This means the creative rules change.

High-performing ads are likely to behave less like traditional advertising and more like:

  • Tools.
  • Templates.
  • Guides.
  • Checklists.
  • Shortcuts.
  • Decision aids.

They fit into the flow of what the user is doing.

Generic brand ads, pure awareness messaging, and content that feels like a detour are likely to underperform.

Dig deeper: Your ads are dying: How to spot and stop creative fatigue before it tanks performance

Helpful content becomes the bridge across channels

The same assets that make a strong ChatGPT ad – practical guides, frameworks, calculators, explainers, and reassurance-led content – also do much more than support paid performance. 

They build authority for SEO and generative optimization, earn coverage and credibility through digital PR, and reinforce brand trust across social and owned channels.

This is where silos start to break performance.

Paid media teams cannot create “helpful ads” in isolation if SEO teams are working on authority, PR teams are building trust signals, and brand teams are shaping voice independently. In AI-led discovery, these signals converge.

The most effective ads may borrow from:

  • Brand voice for clarity and consistency.
  • Trusted voice through reviews, experts, or third-party validation.
  • Amplified voice via media coverage and recognizable authority.

The line between advertising, content, and credibility becomes increasingly blurred.

Measurement needs a reset

Judging ChatGPT ads purely on click-through rate risks missing their real impact.

In many cases, these ads may influence decisions without triggering an immediate click. They may help a brand enter a shortlist, feel safer, or be remembered when the user returns later through another channel.

More meaningful indicators may include:

  • Shortlist inclusion.
  • Brand recall.
  • Assisted conversions.
  • Branded search uplift.
  • Direct traffic uplift.
  • Downstream conversion lift.

This reinforces the need for teams to work more closely together. If performance is distributed across the journey, measurement and accountability must be too.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

The brands that win will understand behavior best

This is not simply a new ad format. We are looking at a behavioral shift.

The brands most likely to succeed will not be the ones that move fastest or spend the most. They will be the ones who understand:

  • What people actually use ChatGPT for.
  • Which moments of the journey are being outsourced to AI.
  • How to support those moments without breaking trust.

A practical starting point is returning to jobs-to-be-done thinking. Map the actions that happen before someone buys, inquires, or commits and identify where AI reduces effort, uncertainty, or complexity.

From there, the question shifts from “how do we advertise here?” to something more powerful:

How can we be genuinely helpful at the moment it matters?

That mindset will not only shape performance in ChatGPT, but across the wider future of AI-led discovery. And in that world, behavioral intent will matter far more than keywords ever did.

Advanced ways to use competitive research in SEO and AEO


Competitive research is a gold mine in the world of organic discovery. Clients always love seeing how they stack up against their rivals, and those findings translate easily into a multi-dimensional roadmap for getting traction on essential topics.

If you haven’t already done this, 2026 needs to be the year you add competitive research from answer engine optimization (AEO, an acronym I’ll use interchangeably with AI search) into your organic strategy – and not just because your executives or clients are clamoring for it (although I’m guessing they are).

This article breaks down the distinct roles of SEO and AEO competitive research, the tools used for each, and how to turn those insights into clear, actionable next steps.

SEO competitive research benefits vs. AEO competitive research benefits

Traditional SEO research is great for content planning and development that helps you address specific keywords, but that’s far from the whole organic picture in 2026.

Combined, SEO and AI competitive research can give you a clear strategy for positioning and messaging, content development, content reformatting, and even product marketing roadmapping. 

Let’s start with the tried-and-true tools of traditional SEO research. They excel at: 

  • Demand capture.
  • Keyword-driven intent mapping.
  • Late-funnel and transactional discovery.

A few years ago, pre-ChatGPT and the competitors that followed, SEO research was the foundation of your organic strategy. Today, those tools are a vital piece of organic strategy, but the emergence of AI search has shifted much of the focus away from traditional SEO. 

Now, SEO research should be used to:

  • Support AI visibility strategies.
  • Validate demand, not define strategy.
  • Identify content gaps that feed AI systems, not just SERPs.

AEO tools cover very different parts of the customer journey. These include:

  • Demand shaping.
  • Brand framing and recommendation bias.
  • Early- and mid-funnel decision influence.

AEO tools operate before the click, often replacing multiple SERP visits with a single synthesized answer. They offer a new type of research that’s a blend of voice-of-customer, competitive positioning, and market perception. That helps them deliver tremendous competitive insights into: 

  • Category leadership. 
  • Challenger brand visibility. 
  • Competitive positioning at the moment opinions are formed.

Let’s break this down a little further. Organic search experts can use insights from AI search tools to:

  • Identify feature expectations users assume are table stakes.
  • Spot emerging alternatives before they show up in keyword tools.
  • Understand where top products are or are not visible for relevant queries in key large language models (LLMs).
  • Understand why users are advised not to choose certain products.
  • Validate whether your product roadmap aligns with how the market is being explained to users.

Dig deeper: How to use competitive audits for AI SERP optimization

SEO vs. AEO research tools

Aside from adding AEO functionality (Semrush and Ahrefs lead here), SEO research tools essentially function much as they did a few years ago. Those tools, and their uses, include:

Ahrefs

Ahrefs is a great source of info for, among other things: 

  • Search traffic.
  • Paid traffic.
  • Trends over time.
  • Search engine ranking for keywords.
  • Topics and categories your competitors are writing content for.
  • Top pages.

I also like to use Ahrefs for a couple of more advanced initiatives: 

  • High-level batch analysis provides a fast overview of backlinks for any list of URLs you enter. This can give you ideas about outreach – or content written strategically to appeal to these outlets – for your backlinks strategy. 
  • Reverse-engineering a competitor’s FAQs allows you to see potentially important topics to address with your brand’s differentiators in mind.
    • To do this, go to Ahrefs’ Site Explorer, drop in a competitor domain, and then click on the Organic Keywords report. 
    • From here, you’ll want to filter out non-question keywords. The result is a good list of questions from actual users in your industry. You can then use these to tailor your content to meet potential customer needs.
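If you export the Organic Keywords report to CSV, the same question filter can be reproduced locally. A minimal sketch, assuming a `Keyword` column header (actual export headers may differ):

```python
import csv

# Common question stems used to isolate question-style keywords.
QUESTION_STEMS = ("how", "what", "why", "when", "where", "which", "who",
                  "can", "should", "is", "are", "does", "do")

def question_keywords(path, keyword_col="Keyword"):
    """Return keywords from an exported report that read like questions."""
    results = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            kw = row.get(keyword_col, "").strip().lower()
            # Keep only keywords whose first word is a question stem.
            if kw.split(" ")[0] in QUESTION_STEMS:
                results.append(kw)
    return results
```

The resulting list is the same raw material the in-tool filter produces: real user questions you can map against your brand’s differentiators.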

Dig deeper: Link intent: How to combine great content with strategic outreach

BuzzSumo

BuzzSumo sends you alerts about where your competitors receive links from their public relations and outreach efforts. 

This is the same idea as the batch analysis, but it’s more real-time and gives you good insights into your competitors’ current priorities.

Semrush

Semrush is a super-useful tool for competitive research. 

You can use the domain versus domain tool to see what keywords competitors rank for with associated metrics. You can get insights on competitor keywords, ad copy, organic and paid listings, etc. 

Armed with all of this research, a fun content maneuver I like to suggest to clients is “[Client] vs. [Competitor]” pieces of content, particularly once they have some differentiators fleshed out to play up in their content. 

With this angle, I’ve gotten some great first-page rankings and reached users with buying intent.

Using their brand name might not always get you to rank above your competitor. Still, if you’re a challenger taking on bigger brands, it’s a good way to borrow their brand equity.

On the AEO side, I love tools with a heavy measurement component, but I also make a point of digging into the actual LLMs themselves, like ChatGPT and Google AI Mode, to combine reporting tools with source data.

This is similar to how my team has always approached traditional SEO research, which balances qualitative tools with extensive manual analysis of the actual SERPs.

The tools I recommend for heavy use are:

Profound

Profound is the most purpose-built AEO platform I’m using today. It focuses on how brands and competitors appear inside AI-generated answers, not just whether they rank in classic SERPs. Its insights help users: 

  • See which brands are cited or referenced in LLM answers for category-level and comparison queries.
  • Identify patterns in how competitors’ content is framed (e.g., default recommendation, alternative, warning, etc.). 
  • Understand which sources LLMs trust (e.g., documentation, reviews, forums, owned content).
  • Track share of voice within AI answers, not just blue links.

All of these insights help move competitive research from the simple question of “who ranks” to the more important question of “who is recommended and why.”

Ahrefs

Ahrefs remains a foundational tool for traditional SEO research, but its insights primarily reflect what ranks, not what gets synthesized or cited by AI systems.

They have, however, built in some new AI brand tracking tools worth exploring.

ChatGPT

ChatGPT is invaluable as a qualitative competitive research layer. I use it to: 

  • Simulate how users phrase early-stage and exploratory questions.
  • Compare how different competitors are summarized when asked things like: “What’s the best alternative to X?” or “Who should use X vs. Y?” 
  • Identify language, positioning, and feature emphases that consistently show up across responses. 
  • Test messaging.
  • Compare narratives with competitors.
  • Identify where your brand’s positioning is unclear or has gaps.

Google AI Mode

This tool is the clearest signal we have today of how AI Overviews will impact demand capture. It provides insight into: 

  • Which competitors are surfaced before any traditional ranking is visible. 
  • What sources Google synthesizes to build its answers.
  • How informational, commercial, and navigational queries blend. (This is especially important for mid-funnel queries where users previously clicked multiple results but now receive a single synthesized answer.)

Reddit Pro

This resource combines traditional community research with AI-era discovery. 

Because Reddit content is disproportionately represented in AI answers, this has become a first-class competitive intelligence source, not just a qualitative one. It helps to surface: 

  • High-signal conversations frequently referenced by LLMs. 
  • Common objections, alternatives, and feature gaps discussed by real users.
  • Language that actually resonates with people – insight that often differs from keyword-driven copy.

Dig deeper: How to use advanced SEO competitor analysis to accelerate rankings & boost visibility

How to take action on your organic competitive research insights

Presenting competitive insights to clients or management teams in a digestible package is a good start (and may make its way up to the executive team for strategic planning). 

But where the rubber really meets the road is when you can make strong recommendations for how to use the insights you’ve gathered. 

Aim for takeaways like:

  • “[Competitor] is great at [X], so I suggest we target [Y].”
  • “[Competitor] is less popular with [audience], which would likely engage with content on [topic].”
  • “[Competitor] is dominating AI search on topics I should own, so I recommend developing or refining our positioning and building a specific content strategy.”
  • “I’ve built a matrix showing the competitor product pages that draw more visibility in LLMs than our top-selling products. I recommend we focus on making those product pages more digestible for AI search and tracking progress. If we get traction, I recommend we identify the next tranche of product pages to optimize and proceed.” 

Ultimately, your clients or teammates should be able to use your insights to understand the market and align with you on priorities for initiatives to expand their footprint in both traditional and AI search. 

The in-house vs. agency debate misses the real paid media problem by Focus Pocus Media

For years, conversations about paid media have revolved around one question: should companies build in-house teams or outsource to agencies?

That debate makes sense, but it misses the real issue. The problem isn’t where paid media sits in the org chart. It’s how performance leadership is structured.

Many companies run Google Ads and other paid channels with capable teams, solid budgets, and documented best practices. Campaigns are live. Dashboards are full. Optimizations happen on schedule. Yet:

  • Results stall. 
  • Pipelines flatten. 
  • Budgets get questioned. 
  • Confidence in paid advertising erodes.

This is rarely a talent issue. It’s usually a structural one.

The plateau most in-house teams eventually hit

Across dozens of B2B paid media accounts, from SaaS to service businesses spending five figures a month, we see the same pattern.

Performance does not collapse overnight. It slows gradually.

Campaigns keep running. Costs look stable. Leads still come in. But growth stalls. Leadership sees motion without insight. Decisions turn reactive. Paid media shifts from a growth engine to a cost center that has to defend its existence.

The gap isn’t effort or execution. Over time, strategy narrows when teams work in isolation.

Why ‘more headcount’ rarely fixes the problem

When performance stalls, the default response is to hire. A new specialist. A channel owner. A more senior role.

Extra resources can ease the workload, but headcount alone rarely fixes the real problem. 

In in-house teams, three challenges are consistent:

1. Tracking and leadership visibility

Leadership teams often lack a clear, shared view of how paid media drives pipeline and revenue. The data exists, but it’s scattered across disconnected platforms, tools, and dashboards. 

Without strong integrations, even well-run campaigns operate with weak feedback loops, limiting how much they can improve.

2. Structure and skill ceiling

Many teams try to follow proven best practices. The issue isn’t intent. It’s context. What works for one company or growth stage can be ineffective, or even harmful, for another. 

Without external benchmarks or fresh perspectives, teams struggle to see what actually applies to their business.

3. Lack of systematic testing

Day-to-day execution eats up available capacity. Teams focus on keeping things stable instead of pushing performance forward. Testing starts to feel risky, even though real gains usually come from the few experiments that work.

Over time, this creates the illusion of optimization: steady activity without meaningful progress.

The same mistake happens before ads ever launch

These structural issues don’t just affect companies already running paid media. They often show up earlier, before the first campaigns even launch.

In many B2B organizations, paid advertising enters the picture when growth from outbound sales, partnerships, or organic channels starts to slow. 

Budgets roll out cautiously. Execution gets delegated. Results are expected to emerge from platform defaults.

What’s usually missing is strategic ownership:

  • Clear definitions of success that go beyond surface-level metrics
  • Tracking that ties spend to pipeline, not just lead volume
  • A testing roadmap aligned with revenue goals

Without this foundation, early results disappoint. Budgets get cut. Confidence fades. Paid media gets labeled ineffective before it has a real chance to work.

Ironically, this early phase is where external perspective can deliver the greatest long-term impact. It’s also when companies are least likely to seek it.

The structural advantage of outsourced performance leadership

Outsourcing is often framed as a way to cut costs or add execution power. In reality, its biggest advantage is perspective.

External performance teams work across many accounts, industries, and growth stages. They:

  • Spot patterns earlier. 
  • Know when platform recommendations favor spend growth over business outcomes. 
  • Question assumptions internal teams may have stopped challenging.

That outside view matters most in areas like tracking architecture, platform integrations, and account structure, where partial best-practice adoption can quietly erode performance.

A common scenario looks like this: 

  • Teams follow platform guidance but leave underlying martech gaps unresolved. 
  • Systems don’t talk to each other. 
  • Optimization signals weaken. 
  • Budget efficiency drops, even though campaigns appear fully compliant.

When outsourcing actually works — and when it doesn’t

Outsourcing isn’t a cure-all. It breaks down when companies expect external partners to fix performance in isolation, or when strategy and execution live in separate worlds.

It works best as a hybrid model:

  • Internal teams own execution and business context
  • External experts bring strategic direction, structural resets, and ongoing challenge

In this setup, partners don’t replace teams. They raise the bar.

That’s why a specialized Google Ads agency creates the most value when the goal isn’t just running campaigns, but turning paid media back into a predictable, scalable growth lever.

A smarter model: External strategy, internal execution

High-performing organizations are increasingly separating strategy from execution volume.

They bring in outside expertise not because something is broken, but because they want:

  • Objective assessments of performance and structure.
  • Stronger attribution and tracking foundations.
  • Disciplined experimentation frameworks.
  • Clear accountability at the leadership level.

This approach builds momentum before budgets get cut, not after results decline. It also helps leadership understand why paid media performs the way it does, restoring confidence in the channel.

What high-performing companies do differently

Organizations that avoid long plateaus tend to:

  • Treat paid media as a system, not a standalone channel.
  • Invest early in clear tracking and strong integrations.
  • Invite external challenge before performance slips.
  • Accept that most tests will fail, knowing the few wins will compound.

In this context, outsourcing isn’t about cost efficiency. It’s about preserving strategic sharpness as platforms and markets evolve.

Final thought

The in-house versus outsourced debate obscures a deeper issue: who owns performance direction, and how often does it get challenged?

As paid media platforms automate and evolve, the companies that sustain growth aren’t the ones with the biggest teams. They’re the ones with the clearest perspective.

Kirk Williams on why client fit is critical

On episode 339 of PPC Live The Podcast, I speak to Kirk Williams, a long-time PPC professional who’s been in the industry since 2009. Kirk is the founder of Zato, a specialist PPC micro-agency, and the author of Ponderings of a PPC Professional and Stop the Scale. He’s also a familiar face on the global conference circuit, speaking at events like BrightonSEO, SMX, HeroConf, and more.

The big f-up: Taking on the wrong clients

Kirk’s biggest mistake wasn’t a platform error or a bad bid — it was taking on clients who weren’t a good fit.

He explains that these decisions often came during moments of pressure: wanting to grow quickly, dealing with client churn, or navigating tougher economic periods. In those moments, warning signs were present, but ignored.

The result? Short-lived client relationships that drained time, energy, and morale.

Why “bad fit” clients are so costly

Kirk is careful to define “bad” not as morally wrong, but simply misaligned. A poor fit client creates several hidden costs:

  • Emotional tax: Team members become drained by friction, conflict, and constant tension.
  • Time tax: More calls, more explanations, more conflict resolution.
  • Financial tax: Reduced profitability and, in some cases, refunded fees just to exit cleanly.

Over time, these costs compound and take focus away from clients where the agency can truly deliver value.

Red flags Kirk wishes he’d acted on sooner

Looking back at one particular client, Kirk shares several early warning signs he now takes far more seriously:

  • Emotionally immature communication during discovery
  • Aggressive or defensive reactions to pricing discussions
  • Lack of respect for the agency as a separate business with its own boundaries
  • A mindset that the agency exists solely to “serve” the client

These behaviors often signal deeper issues that surface later as unrealistic expectations and ongoing conflict.

Fit is about personality and expectations

Kirk emphasizes that fit isn’t only about whether someone is “nice.” You can have a pleasant contact who still isn’t a good match.

A major issue arises when clients expect PPC to outperform what the channel is realistically capable of delivering. If a business believes Google Ads alone should drive all growth — without brand, CRO, or other marketing channels — the relationship is set up to fail.

When expectations and reality don’t align, no amount of optimization will fix it.

The industry fit reality check

Some industries and client types simply aren’t a fit for every agency. Kirk openly shares that he avoids legal clients, not because they’re “bad,” but because the typical communication style and expectations don’t align with how he and his team work.

Fit is personal. Knowing who you don’t want to work with is just as important as knowing who you do.

The discovery process as a detective exercise

To solve the client-fit problem, Kirk overhauled his discovery process. Instead of selling first, he focuses on understanding.

Key areas he probes:

  • Why the prospect is looking for an agency now
  • How they believe PPC fits into their overall marketing strategy
  • Whether they understand trade-offs between scale and efficiency
  • What they disliked — and liked — about their previous agency

One standout question: “What’s something you liked about your last agency?”
If a prospect can’t answer it, that’s often a signal of unrealistic expectations rather than poor past performance.

Asking better questions improves sales, too

Counterintuitively, Kirk says deeper discovery doesn’t hurt sales — it improves them. Prospects can sense genuine curiosity and alignment. By the time pricing is discussed, both sides already understand whether the relationship makes sense.

The result is fewer rushed decisions, fewer failed engagements, and far stronger long-term partnerships.

PPC isn’t a standalone growth strategy

Both Anu and Kirk reinforce a critical point: PPC cannot — and should not — carry an entire business on its own.

Paid search works best as part of a broader marketing ecosystem that includes brand, product, customer experience, and other channels. When clients expect PPC to do “all the heavy lifting,” it’s a structural problem, not a performance one.

Final thoughts: protect your team and yourself

The biggest takeaway from this episode is simple but powerful: vetting clients is a mental health strategy as much as a business one.

Strong discovery processes protect agencies, consultants, and in-house teams from burnout, resentment, and constant uphill battles. Saying “no” early can be far healthier — and more profitable — than saying “yes” to the wrong opportunity.

Taking on the wrong clients can quietly drain time and results, and this episode explains how to spot red flags early.

The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Summary We are seeking an experienced Marketing and Communications Manager to join our team in Austin, Texas. The ideal candidate will possess extensive integrated marketing experience, a passion for detail, and a proven ability to drive growth in a scale-up or high-growth environment. This role is best suited for a candidate who has experience […]
  • Who We Are Are you passionate about hard problems, driven people, and adding your creative touch to everything you do? At Sock Club, we deliver memorable experiences and quality custom products in innovative ways. We’ve worked with brands like Google and LinkedIn, and artists like Bad Bunny, Billie Eilish, and Kacey Musgraves. We believe no […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • Brazy.gg, a global iGaming holding company, is searching for a Middle SEO / Linkbuilding Specialist (iGaming). About the Project We are building a startup media network of content-driven websites focused on online casinos, slots, bonuses, and regulated iGaming markets. The project is already launched and will shortly expand across 4+ Tier 1 GEOs (Europe) and […]
  • Digital Marketing Associate (this is not a virtual position) Description JLB – An Inc. 5000, award winning marketing agency that is one of the fastest growing private businesses in the Nashville marketplace (www.jlbworks.com) based in Franklin TN. Become a part of an innovative team helping revolutionize the way Internet marketing services are developed and delivered […]
  • Job Description Job Type: Full-Time, Hourly Starting Pay Range: $18.50 – $21.00/hour Working Hours: Monday – Friday, 8 AM  – 5 PM Location: Greenville, SC 29615​​ Minimum Experience: At least one year of experience using eCommerce platforms, performing data entry, or working in digital marketing Work Environment:  Office Setting, on-site Moderate to High Paced Work […]
  • Job Description Reports To: Vice President of Demand Generation and Business Development Work Location & Flexibility: We are headquartered in Washington, D.C., and this role is eligible for remote work from the following states: AR, AZ, CA, CO, CT, DC, DE, FL, GA, IL, IN, KS, KY, LA, MA, MD, MI, MN, NC, NH, NJ, […]
  • Job Description Integrated Water Services (IWS) is revolutionizing water! For over 20 years, we have been at the forefront of transforming water challenges into sustainable solutions. Our team of passionate innovators designs and builds cutting-edge water and wastewater treatment systems that protect communities and the environment. From bustling cities to remote regions, we are making […]
  • Are you ready to trade your job for a journey? Become a FlyMate! Passion, excitement & global collaboration are all core to what it means to be a FlyMate. At Flywire, we’re on a mission to deliver the world’s most important and complex payments. We use our Flywire Advantage – the combination of our next-gen […]
  • Job Description Join AdPro 360 as our Full-Time Digital Marketing Specialist and embrace an exciting opportunity that puts your creativity and strategic thinking to the test. This role offers the flexibility to work from home, allowing you to balance your professional ambitions with personal priorities in a dynamic and supportive environment. With a competitive salary […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

Other roles you may be interested in

Paid Search Director, Grey Matter Recruitment (Remote)

  • Salary: $130,000 – $150,000
  • Own the activation and execution of Paid Search & Shopping activity across the Google Suite
  • Support wider eCommerce, Search and Digital team on strategy and plans

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Social Media Editor, Inside Hook (Hybrid, New York, NY)

  • Salary: $66,000 – $70,000
  • Brainstorm, manage and develop the content schedule, briefs, and assets for social channels
  • Support in timely content delivery, scheduling and postings

Performance Marketing Manager, E.S. Kluft & Company (Rancho Cucamonga, CA)

  • Salary: $100,000
  • Define a comprehensive paid and earned strategy, in line with the overall company and marketing strategy.
  • Effectively manage and communicate with multiple agencies and partners (media, PR).

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience Managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

SEM (Search Engine Marketing) Manager, Tribute Technology (Remote)

  • Salary: $85,000 – $90,000
  • PPC Campaign Management: Execute and optimize multiple Google Ad campaigns and accounts simultaneously.
  • SEO Strategy Management: Develop and manage on-page SEO strategies for client websites using tools like Ahrefs.

Search Engine Optimization Manager, Robert Half (Hybrid, Boston MA)

  • Salary: $150,000 – $160,000
  • Strategic Leadership: Define and lead the strategy for SEO, AEO, and LLMs, ensuring alignment with overall business and product goals.
  • Roadmap Execution: Develop and implement the SEO/AEO/LLM roadmap, prioritizing performance-based initiatives and driving authoritative content at scale.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Google tests third-party endorsements in search ads

Google is experimenting with showing third-party endorsement content directly within Search ads.

The test places short endorsements from external publishers under the ad description, including the third party’s name, logo, and favicon.

What’s showing up. The test was first spotted by Sarah Blocksidge, Marketing Director at Sixth City Marketing, who shared a screenshot on Mastodon. In the example, a Search ad included the line “Best for Frequent Travelers,” attributed to PCMag, complete with the publication’s favicon.

The endorsement appears directly beneath the ad copy, visually separating it from standard advertiser-written text.

Why we care. If rolled out more broadly, the change could make Search ads feel more like product reviews — and potentially give advertisers with strong third-party validation a new advantage in crowded auctions.

What Google says. A Google Ads spokesperson confirmed the test, calling it “a small experiment” –

  • “This is a small experiment we are currently running that explores placing third-party endorsement content on Search ads.”

Google did not provide details on eligibility, sourcing, advertiser controls, or how endorsements are selected.

What we don’t know yet. It’s unclear whether advertisers can opt into the feature, request specific endorsements, or influence which third-party sources appear. Google also hasn’t said whether the test is tied to existing review extensions, publisher partnerships, or broader trust and safety initiatives.

What to watch. If Google expands the experiment, third-party credibility could become a more visible factor in ad performance — shifting emphasis from advertiser claims to external validation at the point of search.

For now, the test appears limited, but it offers a glimpse at how Google may continue blending ads, trust signals, and editorial-style context in search results.

Dig deeper: Screenshot shared on Mastodon.

What 2 million LLM sessions reveal about AI discovery

Fragmented Discovery

We analyzed nearly two million LLM sessions across nine industries from January through December 2025. We started with a simple assumption: ChatGPT dominates, usage patterns are uniform, and the volume is small and inconsequential.

The data proved us wrong.

ChatGPT commands 84.1% of trackable AI discovery traffic, but it functions primarily as the default tool for broad-market discovery. That reality changes the strategy.

Brands can no longer rely on a single, discovery-first approach. You need a multi-platform strategy that aligns with how users expect to be productive at different moments.

Success now depends on knowing which platforms actively enable user productivity and which simply support early discovery.

Different LLMs are winning in different industries, often by wide margins. The takeaway for 2026 is more nuanced than “focus on ChatGPT.”

Here’s what the data reveals.

The growth rate divergence: ChatGPT vs. everyone else

From January to December 2025, the major LLM platforms grew at very different rates:

  • ChatGPT: 3x growth
  • Copilot: 25x growth
  • Claude: 13x growth
  • Perplexity: 1x growth
  • Gemini: 1x growth

ChatGPT grew, but Copilot grew roughly eight times faster and Claude more than four times faster. Perplexity and Gemini effectively flatlined – or, more accurately, reinforced usage within tightly defined knowledge workflows. 
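The relative pace is simple division against ChatGPT’s multiple, using the growth figures listed above:

```python
# Growth multiples from January to December 2025, as reported above.
growth = {"ChatGPT": 3, "Copilot": 25, "Claude": 13, "Perplexity": 1, "Gemini": 1}

# Each platform's growth expressed as a multiple of ChatGPT's.
relative_to_chatgpt = {
    name: round(mult / growth["ChatGPT"], 1) for name, mult in growth.items()
}
# Copilot comes out at roughly 8x ChatGPT's pace, Claude at roughly 4x.
```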

These aggregate numbers reflect deeper strategic priorities.

  • Satya Nadella publicly highlighted Copilot reaching 100 million monthly users.
  • Dario Amodei announced that Anthropic’s revenue grew from $100 million to $8–10 billion in under two years.
  • Aravind Srinivas posted that he’s “really encouraged by the interest in Perplexity Finance,” even positioning it as an alternative to Bloomberg Terminal.

These CEOs are focused on growth because growth signals real user value:

  • Copilot wins by serving Microsoft ecosystem users.
  • Claude wins with developers.
  • Perplexity wins with finance professionals.

Different LLMs are winning different industries at dramatically different rates.

Pattern 1: Copilot dominates where work happens

Copilot’s 25x aggregate growth is striking, but the industry breakdown makes the pattern obvious. Copilot wins in B2B verticals where work already happens inside the Microsoft ecosystem.

SaaS

  • ChatGPT: 2x growth
  • Copilot: 21x growth

Copilot adoption mirrors how modern SaaS teams operate. Companies embed LLMs directly into workflows to extract insights from proprietary and third-party data, driving efficiency, personalization, and product innovation inside Microsoft tools.

Education

  • ChatGPT: 6x growth
  • Copilot: 27x growth

Copilot benefits from a culture of knowledge sharing and research synthesis. Institutions and publishers cite, expand, and contextualize existing material, making LLM-assisted discovery a natural extension of how educational content is created and consumed.

Finance

  • ChatGPT: 4.2x growth
  • Copilot: 23x growth

Finance aligns strongly with Copilot because many tasks are automated and context-dependent. Analysts need models that can source, reconcile, and reason across authoritative reports, filings, and datasets inside trusted environments.

The key insight isn’t just Copilot’s growth. It’s where that growth occurs. Copilot accelerates fastest in industries where professionals already depend on Microsoft tools to analyze data, synthesize knowledge, and complete tasks.

A finance analyst doesn’t leave Excel to “search.” They ask Copilot to interpret, compare, and contextualize data in place. A content or product strategist doesn’t open a new tab to research competitors. They prompt Copilot inside their working environment.

What it means

If your audience lives within enterprise workflows — SaaS teams, financial professionals, educators, and B2B decision-makers — AI discovery is moving into LLMs as work happens. Visibility is no longer won during early research. It’s won during execution, when intent is highest and decisions are already forming.

Pattern 2: Perplexity only survives in finance

Perplexity’s overall growth sits at 1.15x, effectively flat. But when you isolate finance, a different picture emerges.

In finance, Perplexity holds a 24% market share.

This is the only industry where Perplexity maintains meaningful, sustained traffic. Everywhere else, its share has collapsed:

  • SaaS: down from 14.9% to 7.3%
  • E-commerce: down from 13.9% to 3.4%
  • Education: down from 28.5% to 5.2%
  • Publishers: down from 41.5% to 3.6%

Finance behaves differently because financial decisions demand verification.

When users compare investment platforms, evaluate loan terms, or research compliance requirements, a single synthesized answer isn’t enough. They need citations they can trace directly back to source documents.

Perplexity is built for this use case. Through partnerships with Benzinga, FactSet, Morningstar, and Quartr, it provides direct access to earnings transcripts, SEC filings, analyst ratings, and real-time market data.

Its Enterprise Finance product adds scheduled market updates, custom answer engines, and live data visualizations. These features serve professionals who require auditable, institutional-grade information, not just fast answers.

Every answer includes visible sources that users can click to verify each claim.

In most categories, convenience wins. In finance, trust and verifiability are non-negotiable.

What it means

Success in AI discovery means choosing the right platform for your users and being present in the sources and citations the models themselves trust.

Financial responses rely on networks of licensed data, institutional partners, and authoritative third-party references. If your brand isn’t visible, cited, and validated inside those ecosystems, you won’t surface, no matter how strong your content is.

Optimization now means earning relevance across the full web of sources each model draws from, not just ranking in a single interface.

Pattern 3: Claude dominates standalone analysis

Claude represents just 0.6% of total AI discovery traffic, which makes it easy to dismiss. But where that 0.6% concentrates is revealing. Claude wins with professionals who research, write, and analyze, not consumers who shop.

  • Publishers: 49x growth
  • Education: 25x growth
  • Finance: 38x growth
  • SaaS: 10.3x growth

Why does Claude win in these verticals when Copilot already dominates knowledge work?

The difference is the type of work. Copilot lives inside operational tools like Excel, Word, and PowerPoint, helping professionals execute tasks within existing workflows. Claude is where professionals go for standalone strategic thinking.

  • A publisher uploads an 80,000-word manuscript and asks, “Is this argument coherent across chapters three through seven?”
  • A finance analyst uploads three years of earnings transcripts and asks, “How has management’s language around capital allocation changed?”
  • A developer pastes an entire legacy codebase and asks, “Map the data flow and identify architectural bottlenecks.”

Claude’s 200,000-token context window enables this. The value isn’t efficiency inside a workflow. It’s having a reasoning partner for work that requires synthesis, critique, and strategic judgment.

What it means

If you target technical audiences or strategic decision-makers, Claude optimization demands analysis-grade content. Publish deep case studies with clear methodology and detailed implementation paths, not 500-word summaries.

Structure content for reasoning. Use explicit frameworks and comparative analysis. The audience is smaller, but the influence is higher. A developer who uses Claude to deeply analyze your API documentation becomes an internal champion.

Pattern 4: The Gemini measurement crisis

Gemini’s tracked traffic tells a confusing story:

  • Education: −67% tracked traffic
  • SaaS: +1.4x growth
  • Finance: +1.3x growth
  • E-commerce: +2.7x growth

This likely isn’t a user decline. It’s an attribution collapse.

Over the past 13 months, Gemini has increasingly kept users inside its interface. It delivers AI-generated answers without prominent, clickable source links. Users research, absorb the answer, and either convert directly or search brand names later. That journey never shows up as AI discovery.

Google still controls the largest search distribution network in the world, and Gemini is deeply embedded in it. It’s unlikely Gemini users are abandoning AI discovery while ChatGPT grows 3x and Copilot grows 25x.

What’s more plausible is that Gemini-driven discovery still exists, but it’s becoming invisible.

Unlike Perplexity, which surfaces sources, or Copilot, which operates inside traceable workflows, Gemini synthesizes answers and retains users in Google’s ecosystem.

A user asks Gemini about project management software, gets a complete answer, then searches “[your brand]” days later. Analytics record branded search, not AI influence.

This creates a real strategic risk.

The commonly cited “0.13% AI penetration” metric is almost certainly understated. If even 30% to 40% of Gemini-assisted discovery goes untracked, true AI-driven research volume is meaningfully higher than what we can measure; if similar gaps exist across other platforms, it could be two to three times higher.
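
As a back-of-the-envelope sketch (all figures hypothetical), the inflation implied by an untracked share is a simple ratio:

```python
def adjusted_volume(tracked: float, untracked_share: float) -> float:
    """Scale a tracked metric up by the share of activity that never gets attributed."""
    if not 0 <= untracked_share < 1:
        raise ValueError("untracked_share must be in [0, 1)")
    return tracked / (1 - untracked_share)

# Hypothetical inputs: 0.13% tracked AI penetration, 35% of discovery untracked.
true_penetration = adjusted_volume(0.13, 0.35)  # ≈ 0.20%
```

Note the arithmetic: a 30% to 40% untracked share by itself implies roughly a 1.4x to 1.7x multiplier; a 2x to 3x figure requires untracked shares above 50%, which is plausible only if similar gaps extend beyond Gemini.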

What it means

  • Monitor branded search lift alongside AI optimization efforts.
  • Build measurement models that account for multi-session, cross-platform journeys.
  • Invest in brand strength and recall, not just clicks.
  • Track time-lagged conversions as research and conversion drift further apart.

Last-click attribution is breaking. AI-assisted conversions — where users research in one system, synthesize in another, and convert through branded or direct search — are becoming the default. Flat or declining Gemini traffic likely signals measurement failure, not user absence.

How to choose your LLM strategy based on your audience

AI discovery isn’t consolidating around a single platform. It’s fragmenting by industry, use case, and user intent.

  • If your audience works in enterprise environments: Copilot is where discovery happens. SaaS buyers, financial analysts, educators, and B2B decision-makers research inside Microsoft tools like Excel, Outlook, and Teams. Discovery occurs at the moment decisions form, not during separate “research” sessions.
  • If your audience makes high-stakes decisions: Perplexity matters. Finance is the only industry where a secondary platform holds a 24% share alongside ChatGPT. These users need citations, not synthesis. Optimization means earning visibility inside institutional data networks such as FactSet, Morningstar, and financial news, not just ranking in the interface.
  • If your audience includes technical evaluators: Claude’s 0.6% share understates its influence. Developers, strategists, and researchers use it for deep analysis by uploading full documents and datasets. They are fewer, but they shape buying committees. Content must go deep: detailed case studies, clear methodology, and analysis-grade research.
  • If you’re in an emerging category: Legal, events, and insurance show 15x to 90x growth because AI discovery just arrived. Start with ChatGPT’s broad reach, then watch for platform migration as your audience matures.
  • If measurement is breaking: Gemini’s declining tracked traffic likely reflects attribution collapse, not user loss. Monitor branded search lift. Track time-lagged conversions. Build models that account for multi-session, cross-platform journeys.
  • Across all categories: Expect attribution gaps. Traditional last-click attribution is breaking as AI-assisted conversions become the norm.

The future of AI discovery isn’t about ranking on ChatGPT alone. It’s about understanding where your audience discovers and which platforms actually serve their needs.

The full study. 2025 State of AI Discovery Report: What 1.96 Million LLM Sessions Tell Us About the Future of Search

7 custom GPT ideas to automate SEO workflows

Custom GPTs can help SEO teams move faster by turning repeatable tasks into structured workflows.

If you don’t have access to paid ChatGPT, you can still use these prompts as standalone references by copying them into your notes for future reuse. You will need to tweak them for your team’s specific use cases, because they are intended as a starting point.

Working with AI is largely trial and error. To get better at writing prompts, practice with small tasks first, iterate on prompts, and take notes on what gets you good outputs. 

AI also tends to ramble, so it helps to give strict guidelines for formatting and to specify what not to do. You can upload resources and articles to follow and provide clear context, such as defining the role and audience upfront.

The seven prompts below are designed to help you start building custom GPTs for planning, analysis, and ongoing SEO work.

1. Project plan GPT

Using past examples of project plans, create a GPT that will help you make a draft for this year’s focus areas.

How to set it up

  • Input project plans from previous years.
  • Give it a specific format to follow.
  • Consider how many items or sections to include.
  • Add details specific to you or your team.
  • (Optional) Copy notes and feedback from your team or retrospective.

Example prompt

Based on last year’s project plan, make my project plan for this year. Here are the focus areas and problem areas to include.

Give me a bulleted list with the three most important items for me (or my team) to focus on for each quarter of this year. At least one item should cover link building.

Include a one-sentence summary of why you recommend each item and at least two KPIs to measure success.

[Insert last year’s plan.]

Now poke holes in your plan. Give me three reasons I should not focus on these items based on the risks. Include sources for your notes.

Dig deeper: How to use ChatGPT Tasks for SEO

2. Site performance GPT

Hook up your performance dashboards or custom GA reports to ChatGPT and let it do the initial legwork in identifying issues. Then make a list of items to investigate yourself.

How to set it up

  • Connect your reporting tools or upload reports directly.
  • Give specific direction for what to look for.
  • Include the cadence you want to look at, like a daily or weekly report.
  • Give examples of types of pages or categories to compare.

Example prompt

Here is the weekly site report. Give me your analysis of how the site performed compared to last week. Include a three-sentence summary of the sessions, conversions, and engagement.

List three wins and three misses in bullet format. Color-code each item based on how good or bad it is.

[Insert report doc.]

3. Competitor analysis GPT

Check out what’s working and what’s not on competitor sites and get insights for yours. It’s most helpful to connect to a tool like Semrush or Ahrefs. 

How to set it up

  • Connect tools like Ahrefs or Semrush, or upload a report.
  • Identify competitors to analyze and top pages and folders.
  • List key metrics to compare.
  • Set up unique prompts for page, keyword/topic, folder, and domain-level comparison.
  • (Optional) Create documentation on which metrics warrant a deeper look.

Example prompt

You are an SEO analyst performing competitor analysis to identify areas to improve your website. Check out these URLs and compare them. Give me a table with each URL in the rows and these columns: backlinks, average rank, top keyword, sessions, and estimated value.

Below that, give a two-sentence summary of who wins in each category and why. Use the criteria in this link to make your judgments, citing sources for each.

URL 1: 
URL 2: 
URL 3: 
Article reference:

Dig deeper: How to use advanced SEO competitor analysis to accelerate rankings & boost visibility


4. SERP analyzer GPT

AI has gotten much better over the last few months at analyzing images. Plug in SERP screenshots from your own searches and compare them to a web search run by the GPT. Build this into a competitive SERP landscape analysis to see things like who appears in both searches vs. only one.

How to set it up

  • Identify search results and keywords to compare.
  • Take screenshots in incognito mode for comparison.

Example prompt

Do a web search for [your keyword here]. Show me what you are seeing in the search results.

Compare it with this screenshot and list the differences. Then include a bulleted list of what the results seen most often have in common.

Dig deeper: How to build a custom GPT to redefine keyword research

5. UX GPT

Turn your design or UX team’s resources into an easy-to-use helper. This is especially helpful for editorial teams that do not want to search through endless documentation for quick advice.

How to set it up

  • Upload your team’s documentation or your favorite UX articles.
  • Find pages with poor bounce or engagement stats.
  • Integrate the tool into standard page updates.

Example prompt

You are an SEO writer working on improving user engagement. Open this page. Check to make sure it follows all of our design rules.

List each violation, along with a source, explaining what is wrong and what to do instead. Then check to see whether there are any relevant page template patterns from the brand book that could apply to this type of page.

6. Tech SEO check GPT

Set up a daily or weekly tech SEO check to do the bulk of the analysis for you. 

How to set it up

  • Connect any tools like Google Search Console, or upload reports.
  • List the top metrics to check, like Core Web Vitals, page speed, and console errors.
  • Identify top pages to run a more comprehensive check.
  • Set up reminders to run it daily or weekly, or connect it to Slack to export results directly.

Example prompt

Based on the latest CWV report, identify problem pages that need a speed improvement audit. Create the list in a table, with the URLs in rows and columns for speed, issues identified, and suggested fixes. Make a separate list of pages that have improved, along with the actual scores.

Dig deeper: A technical SEO blueprint for GEO: Optimize for AI-powered search

7. Presentation GPT

While ChatGPT cannot directly create slides yet without an add-on or third-party connector, it can create the content for you to paste into your slides. Combine it with your performance, testing, tech SEO, and competitor GPTs for a well-rounded summary of overall site status with relevant context.

How to set it up

  • Gather data from your other GPTs.
  • Choose the ones to present.
  • (Optional) Upload past presentations for reference.

Example prompt

Pretend you are setting up a slide deck. The audience is other members of the SEO team. Format this summary from my Performance GPT into a slide.

Give me a header, subheader, and key bullets and takeaways. The tone should be straightforward but professional. Limit bullets to one line. Round all numbers to zero decimals. Suggest three examples of imagery and graphics to use.

[Insert summary.]

Dig deeper: How to balance speed and credibility in AI-assisted content creation

Where custom GPTs fit into day-to-day SEO work

Custom GPTs are most useful when they sit alongside the tools and processes SEO teams already use. Rather than replacing dashboards, audits, or documentation, they can handle first passes, surface patterns, and standardize how work gets reviewed before a human steps in.

Used this way, the prompts in this article are less about automation for its own sake and more about reducing friction in common SEO tasks, from planning and reporting to SERP analysis and technical checks.

Is SEO a brand channel or a performance channel? Now it’s both

For a long time, SEO had the simplest math in marketing:

  • Rank higher → Get more traffic → Fill the sales pipeline

To the dissatisfaction of marketing executives, that linear world is breaking fast.

Between AI Overviews, zero-click SERPs, and users getting answers directly from LLMs, the old “rank to get traffic and leads” equation is failing. 

Today, holding a top keyword position often yields significantly fewer clicks than it did just two years ago.

This has forced many uncomfortable conversations in boardrooms. CMOs and CEOs are looking at traffic dashboards and asking tough questions, especially:

  • “If traffic is down… how do we know SEO is actually working?”

The answer forces us to confront a hard truth: The traffic model has collapsed, but executives still want measurable ROI. 

We have to stop treating SEO like a traffic faucet and start treating it like what it actually is: a brand-dependent performance channel.

Why traffic and pipeline are no longer in lockstep

Linear attribution has never fully captured the reality of organic search. 

ChatGPT is not replacing Google; it is expanding overall search use.

That’s because users are skeptical of both search and LLM results, so they validate what they find on both platforms.

In the past, the research loop happened inside Google’s ecosystem (clicking back and forth between results).

Today, organic search behaves like a pinball machine. Buyers bounce across channels and interfaces in ways that traditional attribution software cannot track. 

A user might find an answer in an AI Overview, verify it on Reddit, check a competitor comparison on G2, and finally convert days later via a direct visit.

This complexity has broken the correlation marketing executives are hungry for. 

In the past, if you overlaid traffic and pipeline charts, the lines moved together. Now, they often diverge.

Across B2B SaaS portfolios, I am seeing a consistent pattern:

  • Organic sessions are flat or declining year over year.
  • Rankings for high-intent terms remain stable.
  • Pipeline and inbound demos from organic search are going up.

[Chart: Traffic flat, revenue up]

Dig deeper: How to explain flat traffic when SEO is actually working

This divergence doesn’t mean SEO is failing. It means that traffic is no longer a reliable proxy for business impact.
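
To make the divergence concrete, here is a minimal illustration with invented numbers (not data from any cited study): sessions fall while pipeline per visitor rises.

```python
# Hypothetical year-over-year figures for an organic channel.
last_year = {"sessions": 100_000, "pipeline_usd": 500_000}
this_year = {"sessions": 80_000, "pipeline_usd": 600_000}

def pipeline_per_visitor(year: dict) -> float:
    """Pipeline dollars generated per organic session."""
    return year["pipeline_usd"] / year["sessions"]

# Sessions dropped 20%, yet value per session rose from $5.00 to $7.50.
```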

The traffic being lost to zero-click searches is often informational and low-intent. The remaining traffic is higher-intent and closer to conversion. 

We are witnessing the “atomization” of search demand. 

As Kevin Indig notes in his analysis of The Great Decoupling, demand for short-head, broad keywords is in permanent decline. 

Users are either bypassing search entirely for AI interfaces, or they are refining their queries into specific, long-tail questions that have lower volume but significantly higher intent.

The “fat head” of search – the generic terms that used to drive massive vanity traffic – is being eaten by AI. The long tail is where the pipeline lives.

The mistake many leaders make is seeing the sessions drop and instinctively pushing to “get the numbers back up.” 

But chasing lost clicks usually leads to publishing broad, top-of-funnel content that inflates session counts (and other vanity metrics) without actually driving qualified leads.

Dig deeper: How to align your SEO strategy with the stages of buyer intent


SEO ROI is now the downstream outcome of brand traction

This is where the debate between “brand” and “performance” breaks down.

For a decade, SEO masqueraded as a pure performance channel. 

We convinced ourselves that if we just optimized the H1s and built enough backlinks, we could rank for anything. 

We treated brand awareness as a nice bonus, but not a prerequisite.

In reality, SEO has always been downstream of brand. AI interfaces are simply exposing that truth.

The rise of LLM-based search has flipped the script. These engines don’t just match keywords to pages; they synthesize reputation.

When an LLM constructs an answer, it is looking for verification across the entire web:

  • What do actual customers say on G2 and Reddit?
  • Is the brand cited in expert, non-affiliate content?
  • Is the product mentioned alongside category leaders?

You cannot brute-force these outcomes via SEO techniques.

If your brand lacks digital authority, no amount of technical optimization will save you. That is why I call this brand-conditioned performance.

It means that your brand strength sets the ceiling for your organic performance. You can no longer out-optimize a weak reputation. 

The search engines are looking for consensus across the web, and if the market doesn’t already associate your brand with the solution, the algorithm won’t recommend you.

So, what does brand strength actually mean to an LLM? In this new environment, brand strength is composed of four specific signals:

  • Topical authority: Do you own the complete conceptual map of your industry, or just a few disconnected keywords?
  • Ideal customer profile (ICP) alignment: Are you answering the specific, messy questions your actual buyers ask, or just publishing generic definitions?
  • Validation: Are you cited by the category-defining sources that LLMs use as training data?
  • Positioning clarity: Can an AI clearly summarize exactly what you do? As Indig points out, “Vague positioning gets skipped; sharp positioning gets cited.”

Bottom line: SEO doesn’t create demand out of thin air. It captures the demand your brand has already validated. 

Dig deeper: The new SEO imperative: Building your brand

The new defensibility metrics for SEO

When traffic stops being the headline KPI, leadership still needs proof that SEO is working. 

The strongest teams are pivoting to defensible signals that track revenue and reputation rather than just volume.

We need to anchor on metrics that prove business impact, even if top-of-funnel sessions are leaking:

  • Top-10 rankings for commercial and BOFU keywords remain stable. (You hold the ground where money changes hands.)
  • Ahrefs traffic value increases, even if sessions decline. (You are trading high-volume informational traffic for high-value commercial traffic.)
  • Product, solution, and comparison page traffic stabilizes. (Buyers are still finding your money pages.)
  • Homepage traffic grows YoY. (The strongest proxy for brand demand.)
  • LLM referral traffic emerges and accelerates. (The newest frontier. Tracking referral sources from ChatGPT, Gemini, or Perplexity indicates that you are part of the new conversation, even if the volume is currently low.)
  • Inbound demos and pipeline from organic search grow relative to traffic.

That last point is the one that changes executive thinking.

When you show that pipeline per organic visitor is rising – even as sessions fall – the conversation shifts from “SEO is broken” to “SEO is evolving.”
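
The LLM-referral signal above can be operationalized with a simple referrer classifier. The hostnames below are commonly observed LLM referrers at the time of writing; treat them as assumptions to verify against your own analytics, since platforms change them.

```python
from urllib.parse import urlparse

# Commonly observed LLM referrer hostnames (verify in your own analytics).
LLM_HOSTS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
    "claude.ai": "Claude",
}

def classify_referrer(referrer_url: str) -> str:
    """Label a session's referrer as an LLM source, or 'other'."""
    host = urlparse(referrer_url).netloc.lower()
    return LLM_HOSTS.get(host, "other")
```

Segmenting sessions this way lets you report pipeline from LLM-referred visitors separately from the rest of organic.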

Dig deeper: Why AI availability is the new battleground for brands

Modern SEO is moving from acquisition to influence

The most successful SEO teams are no longer asking, “How do we get the traffic back?”

They understand that the game has changed from acquisition to influence. 

They are asking:

  • How does our brand show up for buying questions?
  • How do we dominate consideration-stage queries?
  • How do we turn organic visibility into real buying influence?

They recognize that in an AI-first world, zero-click does not mean zero-value.

If a user sees your brand ranked first in an AI Overview, reads a snippet that positions you as the expert, and remembers you when they are ready to buy – SEO did its job.

SEO is no longer a hack for cheap traffic; it is the primary way brands condition the market to buy.

How to optimize for AI search: 12 proven LLM visibility tactics

One of the biggest SEO challenges right now isn’t AI. It’s the irresponsible misinformation surrounding it.

SEO isn’t dying — it’s evolving. That means it’s on us to understand how the industry is changing, and to be careful about who we listen to.

I’m not easily shocked, but some of the AEO (or GEO) talks I’ve seen over the past year have been genuinely eyebrow-raising — even for someone with Botox.

I still remember one speaker telling a room full of marketers they were “sorry for anyone still working in SEO,” then immediately recommending outdated tactics as the “secret sauce” for LLM visibility. It’s been… painful.

Thankfully, the adults have entered the room. This week, four of the industry’s most trusted voices — Lily Ray, Kevin Indig, Steve Toth, and Ross Hudgens — came together for a roundtable on the future of search. It was easily the most useful AEO session I’ve attended. Each shared specific tactics they’ve personally used to achieve LLM visibility.

Here’s what they had to say.

1. Advertorials work

LLMs don’t currently distinguish between paid and organic editorial. That means well-placed advertorials on reputable publishers can help brands show up in AI search, much like earned coverage. As with traditional PR, the publication’s credibility still matters most.

2. Syndication can scale visibility

Paid syndication can increase reach, but quality matters more than quantity. Focus on reputable, relevant publications and use this tactic carefully.

3. Map pages to every audience and use case you serve

Brands that create clearly defined pages for each audience, industry, and use case are better positioned as AI search becomes more personalized. This structure helps LLMs understand relevance and remains a strong SEO practice, with or without AI.

4. Homepage clarity

Your homepage should clearly communicate who you serve and what you do. LLMs parse homepage content far more easily than navigation menus, so relying on your nav to explain your offering is a missed opportunity.

5. Optimize your footer

Don’t overlook your footer. Brand and service signals placed here are being picked up by LLMs. Wil Reynolds shared a great case study showing how footer content can directly influence AI visibility.

6. Don’t prioritize llms.txt

Despite the speculation, no major LLM provider has confirmed using llms.txt files, and Google has explicitly said it does not use them. Your time and effort are better spent elsewhere.

7. Go multimodal

Repurpose your core content across text, video, audio, and imagery. The goal is to build brand recognition across the full range of sources an LLM may pull from.

8. Actively shape your brand narrative

It’s estimated that 250 documents are needed to meaningfully influence how an LLM perceives a brand. Brands that don’t publish and promote content consistently risk letting others define that narrative for them.

9. Freshness carries disproportionate weight

Recent content tends to perform especially well in AI search, reflecting LLMs’ preference for up-to-date information. That said, artificial “refreshing” without meaningful updates is a bad idea.

10. Social works fast

Posts on platforms like LinkedIn—including Pulse articles—can appear in AI search within hours, sometimes minutes, especially for accounts with strong followings. Reddit, YouTube, and other high-trust platforms show similar behavior.

11. Authority accelerates inclusion

Publishing on respected, niche industry sites can lead to rapid inclusion in LLM responses — sometimes within hours.

12. Don’t hide FAQs

FAQs should be visible and substantial, not hidden behind accordions. Don’t hold back on content, either: eight to 10 well-answered questions can clearly signal expertise, intent, and relevance to both users and LLMs.

    Is AEO the same as SEO? 

    This much-debated question was addressed directly by John Mueller at Google Search Live in December. Putting the AEO cowboys in their place, he made it clear that good AEO still relies on good SEO:

    • “AI systems rely on search. and there is no such thing as GEO or AEO without doing SEO fundamentals. Tricks will come out and they will work for a short time, companies that want to be around for the long term should focus on something that is proven with long term stability and not tricks.” 

    The overlap makes sense when you look at how modern LLMs like GPT-5 actually work. They use Retrieval-Augmented Generation (RAG). Rather than relying only on frozen training data, RAG lets an LLM query search engines and trusted sources in real time before answering.

    Put simply: if you want LLM visibility, you need to show up in search first.

    So yes, good AEO is good SEO — but there’s nuance. The tactics above work right now, but they will inevitably evolve as LLMs continue to advance.

    The best AI search strategy for 2026

    Forget the magic button. Keep testing. Stay skeptical of the hype. And be selective about who you let into your ear — or your LinkedIn feed.

    Thanks to Bernard Huang and Clearscope for hosting this excellent panel.

    1/3rd of publishers say they will block Google Search AI-generative features like AI Overviews

    Google announced yesterday that it is exploring ways for sites to opt out of Google using their content for its AI-generative search features, such as AI Mode and AI Overviews. I asked the SEO community on X if they would opt out of these Google Search AI-generative features or not.

    The results. Of the more than 350 people who took the poll yesterday, most said they would not opt out. However, about a third said they would block or opt out of these features. Here is the breakdown:

    Question: Would you block Google from using your content for AI Overviews and AI Mode?

    • 33.2% – Yes, I’d block Google
    • 41.9% – No, I wouldn’t block
    • 24.9% – I am not sure yet

    Here is the actual poll:

    Would you block Google from using your content for AI Overviews and AI Mode – Google may be giving us more controls – take my poll below. https://t.co/60M3Vt0YlN

    — Barry Schwartz (@rustybrick) January 28, 2026

    How to opt out. We don’t know. Google has only said it is “exploring” ways to handle this and has not yet provided a mechanism, so we don’t know how hard or easy opting out will be. The easier it is, the more likely sites are to opt out; the harder, the less likely.

    Why we care. The true number of sites that might opt out of AI Mode or AI Overviews won’t be known until the mechanism is out to handle this. And trust me, there will be many reports on how many sites are opting out.

    Recently, The Press Gazette reported: “Some 79% of almost 100 top news websites in the UK and US are blocking at least one crawler used for AI training out of OpenAI’s GPTBot, ClaudeBot, Anthropic-ai, CCBot, Applebot-Extended and Google-Extended.”
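
    For reference, blocking the training crawlers named in that report is done today via robots.txt. A minimal sketch follows; note that these tokens govern AI training and grounding crawlers, and are not the AI Overviews opt-out Google is still exploring:

```text
# robots.txt: block the AI training crawlers named in the report.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: Google-Extended
Disallow: /
```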

    My recommendation: once the mechanism is out, test it and measure the results of opting out versus opting in.

    Google Ads API v23 brings PMax data, richer invoicing, scheduling

    Google released v23 of the Google Ads API, the first update of 2026. It marks the start of a faster release cadence.

    What’s new. The update adds deeper Performance Max reporting, more granular invoicing, AI-powered audience tools, expanded campaign controls, and more:

    • Performance Max transparency: Ad network type breakdowns are now available for PMax campaigns.
    • More detailed invoices: Campaign-level costs, regulatory fees, and adjustments can be retrieved via InvoiceService.
    • More precise scheduling: Campaigns can now use start and end date-times instead of date-only fields.
    • Local data access: Store location details are available through PerStoreView, matching the Stores report.
    • New audience dimension: LIFE_EVENT_USER_INTEREST enables life-event-based audience building in Insights tools.
    • Smarter Demand Gen planning: Conversion rate forecasts now vary by surface (e.g., Gmail, Shorts).
    • Generative AI audiences: Free-text audience descriptions can be translated into structured audience attributes.
    • Expanded Shopping metrics: New competitive and conversion metrics are available by conversion date.
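    To illustrate the first item, the new network breakdowns should be retrievable with a GAQL report query. Google's announcement doesn't include the exact query shape, so this is a hypothetical sketch assuming the existing `segments.ad_network_type` segment now populates for Performance Max campaigns in v23:

    ```sql
    -- Hypothetical GAQL sketch: cost and conversions per ad network
    -- for Performance Max campaigns. Assumes segments.ad_network_type
    -- now returns data for this channel type under API v23.
    SELECT
      campaign.name,
      segments.ad_network_type,
      metrics.impressions,
      metrics.cost_micros,
      metrics.conversions
    FROM campaign
    WHERE campaign.advertising_channel_type = 'PERFORMANCE_MAX'
      AND segments.date DURING LAST_30_DAYS
    ```

    A query like this would run through GoogleAdsService in any of the official client libraries; check the v23 release notes for the exact segment behavior before relying on it.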

    Why we care. A faster update cycle lets advertisers and developers access new capabilities sooner, especially as Google pushes deeper into automation, AI-driven planning, and cross-campaign visibility.

    Plan for upgrades. Some updates require upgrading client libraries and code, so teams may need to plan development time to fully benefit from v23.

    Google’s announcement. Announcing v23 of the Google Ads API

    When search performance improves but pipeline doesn’t


    Many search teams are seeing better rankings, more visibility, increased traffic, and more leads.

    Yet feedback on pipeline, revenue, and sales outcomes isn’t showing the same positive results.

    When SEO KPIs are green and graphs are up and to the right, business outcomes don’t always reflect the same success.

    Why strong search performance doesn’t translate to business outcomes

    Search performance can look healthy on the surface while breaking down in places search teams don’t own or fully see.

    It’s tempting to turn immediately to attribution models, data quality, or KPI definitions. 

    Ultimately, the issue is often how performance breaks down after the click – in areas search teams don’t own.

    While search work has become easier to scale with automation, software, established workflows, and frameworks, execution doesn’t equal understanding or deeper control. 

    This challenge has existed for more than 20 years and can be magnified by scale.

    Stopping analysis too early, or keeping it too shallow, limits understanding of performance in the broader context of the business or brand.

    In larger organizations, silos widen the gap. When CRM and sales aren’t tightly integrated with search, teams operate independently, with no one owning the full journey.

    Pressure from leadership can intensify the problem. 

    When results look good but fail to deliver at the bottom line, the lack of clarity becomes uncomfortable for everyone. This dynamic isn’t new, but it’s becoming more pronounced.

    To help address these disconnects, here are five breakpoints to focus on.

    1. Intent misalignment

    Intent is what search teams focus on when shaping the content and topics used to attract target audiences through search. That’s a given. 

    But it doesn’t always match or map to deeper factors such as buying stage, urgency, or alignment with internal sales expectations at a given moment or season.

    If traffic is qualified by topic, keyword, or other search criteria, even when intent is aligned with the best available research and data, a prospect’s sales readiness and stage can still be missing or difficult to quantify.

    Analyzing what problem the searcher believed they were solving, and how closely that aligns with how sales positions the offering, can help close the gap between search and sales.

    That, in turn, allows teams to question whether they are optimizing for demand, curiosity, or another aspect of how someone enters the customer journey.

    Dig deeper: How to explain flat traffic when SEO is actually working

    2. Conversion friction

    When search-driven leads convert on the website but never become clients or customers, things can get uncomfortable, especially when sales has strong opinions about those conversions.

    There are many reasons for this friction. Technically, the leads pass the criteria outlined and agreed on within the organization or with an agency. 

    Problems often exist silently in another gap, sometimes categorized as conversion rate optimization or tied to brand, product development, or related areas. But that categorization is often a distraction.

    When teams drill into lead specifics and qualification, the issues often come down to generic forms, CTAs that are not tightly aligned, or unclear next steps between form submission and an actual conversation.

    Conversions do not equal customers, or even a commitment to the sales process.

    Key questions center on the promise made in the search results, the website content the visitor consumed, and whether the landing page and site journey fulfilled the visitor’s intended goal.

    Most importantly, when evaluating performance, teams need to ask what signal a conversion actually sends to the organization, versus what the prospect intended.

    Dig deeper: 6 SEO tests to help improve traffic, engagement, and conversions

    3. Lead qualification gaps

    Whether or not your organization – agency or in-house – uses lead scoring and qualification, ensuring that marketing-qualified leads are sales-ready is critical in a lead-focused business.

    This article is not intended to delve deeply into the differences between marketing-qualified and sales-qualified leads or into all the nuances involved. 

    However, the challenge cannot be overstated when teams lack shared understanding and definitions.

    That includes scoring models, definitions of what qualifies as “qualified,” who agreed to those definitions, and what happens when sales rejects leads.

    This may not be comfortable territory to navigate. 

    But reaching standard definitions and qualification criteria can be some of the most helpful and meaningful work teams do, because it helps prove the value of search.

    Dig deeper: How to monitor your website’s performance and SEO metrics

    4. Sales handoff and follow-up

    Of the five breakpoints, this is the one that tends to hit the hardest and may be the most challenging. 

    That’s because you may be a C-level executive, manager, agency partner, or otherwise oversee or be directly involved in the marketing-to-sales handoff.

    Marketing and sales are adversaries, friends, and colleagues all at once. I’m not here to revisit the fundamentals of marketing versus sales. But I am here to challenge you.

    Speed, messaging, and context matter. This is not just about getting a form in front of someone as quickly as possible and whether they fill it out. 

    Substance and detail matter. Getting the right prospect with the right context, carried through from how they searched and found you, is critical.

    Yes, this is harder when analyzing customer journeys that involve LLMs and other sources, but that doesn’t mean teams can’t or shouldn’t try to understand that behavior.

    When a disconnect appears in this category, teams should push to understand whether sales knows why the lead came in, how quickly follow-up happened, and whether the messaging aligns with the original intent. These are key areas that help teams tune or adjust their strategies.

    Dig deeper: 9 things to do when SEO is great but sales and leads are terrible

    5. Measurement blind spots

    Sometimes everything appears to be in place. 

    Analytics shows conversions and search leads qualify, but there is no movement when reviewing CRM results. 

    Whether attribution becomes messy, impatience sets in, gray areas emerge, or other factors are at play, blind spots can form.

    This often leads teams to default to their own metrics. 

    No one wins when KPIs are not shared or when there is no single source of truth and trust.

    When visibility stops and ownership of “connecting the dots” is unclear, challenges emerge regardless of function, team, or leadership role. 

    Decisions then get made without full context.

    Dig deeper: Measuring what matters in a post-SEO world

    The cost of not knowing what’s working

    I’m not writing this article to be hard on search marketing leaders or practitioners. This is not a failure of search.

    If any of the challenges described here feel familiar, you are not alone, and they are likely cross-functional to solve.

    Marketing leaders do not need perfection when it comes to attribution or search efforts. That is not realistic. What is needed instead are better questions, shared definitions, and clear ownership.

    The biggest danger is not when performance drops, but when performance is strong and no one knows with confidence why.

    Scaling always involves risk, and teams should not scale efforts without conviction or a clear understanding of that risk. 

    Ultimately, the goal is for search work to build credibility, confidence, and influence that extend beyond deep expertise in search engines and the large language models now tied to visibility.
