Inspiring examples of responsible and realistic vibe coding for SEO

3 February 2026 at 21:21

Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. You describe what you want in plain language, and the tool returns code. You can then paste that code into an environment (such as Google Colab), run it, and test the results, all without writing a single line of code yourself.

Collins Dictionary named “vibe coding” word of the year in 2025, defining it as “the use of artificial intelligence prompted by natural language to write computer code.”

In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.

Vibe coding variations

While “vibe coding” is used as an umbrella term, there are several subsets of AI-supported coding, including the following:

  • AI-assisted coding: AI helps write, refactor, explain, or debug code; used by actual developers or engineers to support their complex work. Tools: GitHub Copilot, Cursor, Claude, Google AI Studio.
  • Vibe coding: Platforms that handle everything except the prompt or idea; AI does most of the work. Tools: ChatGPT, Replit, Gemini, Google AI Studio.
  • No-code platforms: Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. Tools: Notion, Zapier, Wix.

We’ll focus exclusively on vibe coding in this guide. 

With vibe coding, while there’s a bit of manual work involved, the barrier is still low — you basically need a ChatGPT account (free or paid) and access to a Google account (free). Depending on your use case, you might also need API access or SEO tool subscriptions such as Semrush or Screaming Frog.

To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you expect to build a SaaS product or software to sell, AI-assisted coding is the more reasonable route, though it involves costs and deeper coding knowledge.

Vibe coding use cases

Vibe coding is great when you’re processing specific buckets of data, such as finding related links or adding pre-selected tags to articles, or when you’re doing something fun where the outcome doesn’t need to be exact.

For example, I’ve built an app that creates a daily drawing for my daughter. I type a phrase about something she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (the drawings) are used as-is, exactly as they come from the AI.

When I ask for specific changes, however, the output tends to get worse, and the app redraws things I didn’t ask for. I once asked it to remove a mustache and it recolored the image instead.

If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough. 

Building commercial applications solely on vibe coding may require a company to hire vibe coding cleaners. However, for a demo, MVP (minimum viable product), or internal applications, vibe coding can be a useful, effective shortcut. 

How to create your SEO tools with vibe coding

Using vibe coding to create your own SEO tools requires three steps:

  1. Write a prompt describing your code
  2. Paste the code into a tool such as Google Colab
  3. Run the code and analyze the results

Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.

This is exactly what I wrote on ChatGPT:

I need a Google Colab code that will use OpenAI to:

Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A). 

The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.

I’ll upload a CSV with these columns and expect a CSV in return with the answers.

Then I pasted the code ChatGPT created into Google Colab, a free Jupyter Notebook environment that lets users write and execute Python code in a web browser. It’s important to run your program by clicking “Run all” in Google Colab to test whether the output does what you expected.
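For reference, here’s a minimal sketch of the kind of script such a prompt tends to produce. The column layout, file names, and the use of pandas and scikit-learn (both preinstalled in Colab) are assumptions; the code ChatGPT generates for you will differ.

```python
# Minimal sketch (not the exact generated code): match pages across locales
# by cosine similarity of their embeddings and export suggestions to CSV.
import ast

import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# Assumed columns: A = locale, B = URL, C = embedding stored as "[0.1, 0.2, ...]"
df = pd.read_csv("crawl_with_embeddings.csv")
df.columns = ["locale", "url", "embedding"]
df["embedding"] = df["embedding"].apply(ast.literal_eval)

rows = []
for _, page in df.iterrows():
    others = df[df["locale"] != page["locale"]]  # only compare against other locales
    if others.empty:
        continue
    sims = cosine_similarity([page["embedding"]], list(others["embedding"]))[0]
    ranked = others.assign(similarity=sims).sort_values("similarity", ascending=False)
    # Keep the two closest matches per locale, as the prompt asked for
    for _, match in ranked.groupby("locale").head(2).iterrows():
        rows.append({
            "source_url": page["url"],
            "source_locale": page["locale"],
            "suggested_url": match["url"],
            "suggested_locale": match["locale"],
            "similarity": round(match["similarity"], 4),
        })

pd.DataFrame(rows).to_csv("hreflang_suggestions.csv", index=False)
```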

This is how the process works on paper. Like everything in AI, it may look perfect, but it doesn’t always function exactly how you want.

You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.

First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab. 

You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is; just show the error and use the new code. Alternatively, you can ask Gemini inside Google Colab to fix the issue and update your code directly.
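For example, in a Colab cell, a missing-package error is usually solved with a one-line install before re-running the rest of the notebook (the package name here is just an example; use whatever the error message names):

```python
# Install the missing package in Colab, then re-run the generated code.
!pip install openai  # replace "openai" with the package named in the error
```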

AI tends to be very confident about anything and could return completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output because “it looks good” can sometimes be wrong.

If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs. 

Should you want an even lower execution barrier than Google Colab, you can try using Replit. 

Simply prompt your request and the software will create the code and the design, and let you test everything on the same screen. This means a lower chance of coding errors, no copy and paste, and a URL you can share right away so anyone can see your project, complete with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)

Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.

Inspiring examples of SEO vibe-coded tools

While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.

The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire you to build on them and adapt them to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.

GBP Reviews Sentiment Analyzer (Celeste Gonzalez)

After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.

Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.
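To make the underlying idea concrete, here’s a purely illustrative Python sketch of the two calculations the extension surfaces, review velocity and a rough sentiment split for the last 30 days, plus a CSV export. The data shape and the rating-based sentiment rule are assumptions for illustration, not Celeste’s implementation:

```python
# Illustrative sketch only: 30-day review velocity, a naive sentiment split,
# and a CSV export. Field names and thresholds are assumptions.
import csv
from datetime import datetime, timedelta

now = datetime.now()
reviews = [  # stand-in data; a real tool would pull this from the profile
    {"rating": 5, "text": "Friendly staff and quick service", "date": now - timedelta(days=4)},
    {"rating": 2, "text": "Long wait at reception", "date": now - timedelta(days=12)},
    {"rating": 4, "text": "Good value overall", "date": now - timedelta(days=45)},
]

cutoff = now - timedelta(days=30)
recent = [r for r in reviews if r["date"] >= cutoff]

velocity = len(recent)  # reviews received in the last 30 days
positive = sum(1 for r in recent if r["rating"] >= 4)
negative = sum(1 for r in recent if r["rating"] <= 2)
print(f"Review velocity (30 days): {velocity} | positive: {positive} | negative: {negative}")

# Mirror the extension's CSV export feature
with open("gbp_recent_reviews.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["rating", "text", "date"])
    writer.writeheader()
    writer.writerows({**r, "date": r["date"].date().isoformat()} for r in recent)
```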

Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).

AI tools used: Claude (Sonnet 4.5 model) and Cursor

APIs used: Google Business Profile API (free)

Platform hosting: Chrome Extension

Knowledge Panel Tracker (Gus Pelogia)

I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I found out that Google has a Knowledge Graph Search API that allows you to check the confidence score for any entity.

This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.

The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > Apps Script.
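The spreadsheet version runs as an Apps Script attached to the sheet, but the underlying request is the same anywhere. Here’s a minimal Python sketch of pulling an entity’s confidence score from the Knowledge Graph Search API (substitute your own API key and entity name):

```python
# Minimal sketch: fetch an entity's confidence score from the
# Knowledge Graph Search API. Substitute your own API key and entity name.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
entity = "Semrush"        # any entity you want to track

resp = requests.get(
    "https://kgsearch.googleapis.com/v1/entities:search",
    params={"query": entity, "key": API_KEY, "limit": 1, "languages": "en"},
    timeout=30,
)
resp.raise_for_status()
items = resp.json().get("itemListElement", [])

if items:
    print(items[0]["result"].get("name"), "- score:", items[0].get("resultScore"))
else:
    print("No Knowledge Graph entry found for", entity)
```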

AI models used: ChatGPT 5.1

APIs used: Google Knowledge Graph API (free)

Platform hosting: Google Sheets

Inbox Hero Game (Vince Nero)

How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.

Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.

The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.

According to him, the longer the conversation continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.”

This issue was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., a score), and ChatGPT would “guarantee” it had found the error and update the file, but still return the same error.

In the end, Inbox Hero Game is a fun game that demonstrates it’s possible to create a simple game without coding knowledge, though perfecting it would be more feasible with a developer.

AI models used: ChatGPT

APIs used: None

Platform hosting: Webpage

Vibe coding with intent

Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift. 

The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.

When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.

Are we ready for the agentic web?

3 February 2026 at 19:00

Innovations are coming at marketers and consumers faster than before, raising the question: Are we actually ready for the agentic web?

To answer that question, it’s important to unpack a few supporting ones:

  • What’s the agentic web?
  • How can the agentic web be used?
  • What are the pros and cons of the agentic web?

It’s important to note that this article isn’t a mandate for AI skeptics to abandon the rational questions they have about the agentic web. 

Nor is it intended to place any judgment on how you, as a consumer or professional, engage with the agentic web.

LinkedIn poll on Copilot Checkout

With thoughts and feelings so divided on the agentic web, this article aims to provide clear insight into how to think about it in earnest, without the branding or marketing fluff.

Disclosure: I am a Microsoft employee and believe in the path Microsoft’s taking with the agentic web. However, this article will attempt to be as platform-agnostic as possible.

What’s the agentic web? 

The agentic web refers to sophisticated tools, or agents, trained on our preferences that act with our consent to accomplish time-consuming tasks.

In simple terms, when I use one-click checkout, I allow my saved payment information to be passed to the merchant’s accounts receivable systems. 

Neither the merchant nor I have to enter all the details or be involved beyond consenting to send and receive payment.

For fun, I put this question to four different AI models, and the responses are telling: 

  • Copilot: “The agentic web is an emerging layer of the internet where AI agents can reason and take actions on behalf of people or organizations, turning expressed intent into outcomes while preserving user choice, transparency, and control.” 
  • Gemini: “The agentic web is an evolution of the internet where autonomous AI agents – rather than just human users – interact with websites, APIs, and each other to perform complex tasks and make decisions on our behalf.”
  • Perplexity: “The agentic web is an emerging phase of the internet where autonomous AI agents act as the primary interface, navigating, interpreting, and taking actions across websites and services on behalf of users with minimal human involvement.” 
  • Claude: “The agentic web is an evolution of the internet where AI agents autonomously navigate, interact with, and complete tasks across websites and services on behalf of users.” 

I begin with how different models answer the question because it’s important to understand that each one is trained on different information, and outcomes will inevitably vary.

It’s worth noting that, given the same prompt asking to define the agentic web in one sentence, three out of four models focus on a diminished human role in navigating the web, while one makes a point of emphasizing human involvement: preserving user choice, transparency, and control.

Two out of four refer to the agentic web as a layer or phase rather than an outright evolution of the web. 

This is likely where the sentiment divide on the agentic web stems from.

Some see it as a consent-driven layer designed to make life easier, while others see it as a behemoth that consumes content, critical thinking, and choice.

It’s noteworthy that one model, Gemini, calls out APIs as a means of communication in the agentic web. APIs are essentially structured interfaces to information and functionality that can be referenced, or called, based on the task you are attempting to accomplish.

This matters because APIs will become increasingly relevant in the agentic web, as saved preferences must be organized in ways that are easily understood and acted upon.

Defining the agentic web requires spending some time digging into two important protocols – ACP and UCP.

Dig deeper: AI agents in SEO: What you need to know

Agentic Commerce Protocol: Optimized for action inside conversational AI 

The Agentic Commerce Protocol, or ACP, is designed around a specific moment: when a user has already expressed intent and wants the AI to act.

The core idea behind ACP is simple. If a user tells an AI assistant to buy something, the assistant should be able to do so safely, transparently, and without forcing the user to leave the conversation to complete the transaction.

ACP enables this by standardizing how an AI agent can:

  • Access merchant product data.
  • Confirm availability and price.
  • Initiate checkout using delegated, revocable payment authorization.

The experience is intentionally streamlined. The user stays in the conversation. The AI handles the mechanics. The merchant still fulfills the order.

This approach is tightly aligned with conversational AI platforms, particularly environments where users are already asking questions, refining preferences, and making decisions in real time. It prioritizes speed, clarity, and minimal friction.

Universal Commerce Protocol: Built for discovery, comparison, and lifecycle commerce 

The Universal Commerce Protocol, or UCP, takes a broader view of agentic commerce.

Rather than focusing solely on checkout, UCP is designed to support the entire shopping journey on the agentic web, from discovery through post-purchase interactions. It provides a common language that allows AI agents to interact with commerce systems across different platforms, surfaces, and payment providers. 

That includes: 

  • Product discovery and comparison.
  • Cart creation and updates.
  • Checkout and payment handling.
  • Order tracking and support workflows.

UCP is designed with scale and interoperability in mind. It assumes users will encounter agentic shopping experiences in many places, not just within a single assistant, and that merchants will want to participate without locking themselves into a single AI platform.

It’s tempting to frame ACP and UCP as competing solutions. In practice, they address different moments of the same user journey.

ACP is typically strongest when intent is explicit and the user wants something done now. UCP is generally strongest when intent is still forming and discovery, comparison, and context matter.

So what’s the agentic web? Is it an army of autonomous bots acting on past preferences to shape future needs? Is it the web as we know it, with fewer steps driven by consent-based signals? Or is it something else entirely?

The frustrating answer is that the agentic web is still being defined by human behavior, so there’s no clear answer yet. However, we have the power to determine what form the agentic web takes. To better understand how to participate, we now move to how the agentic web can be used, along with the pros and cons.

Dig deeper: The Great Decoupling of search and the birth of the agentic web

How can the agentic web be used? 

Working from the common theme across all the definitions (autonomous action), we can move on to applications.

Elmer Boutin has written a thoughtful technical view on how schema will impact agentic web compatibility. Benjamin Wenner has explored how PPC management might evolve in a fully agentic web. Both are worth reading.

Here, I want to focus on consumer-facing applications of the agentic web and how to think about them in relation to the tasks you already perform today.

Here are five applications of the agentic web that are live today or in active development.

1. Intent-driven commerce  

A user states a goal, such as “Find me the best running shoes under $150,” and an agent handles discovery, comparison, and checkout without requiring the user to manually browse multiple sites. 

How it works 

Rather than returning a list of links, the agent interprets user intent, including budget, category, and preferences. 

It pulls structured product information from participating merchants, applies reasoning logic to compare options, and moves toward checkout only after explicit user confirmation. 

The agent operates on approved product data and defined rules, with clear handoffs that keep the user in control. 
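As a purely hypothetical illustration (this reflects no real protocol or merchant API), the reasoning step might boil down to filtering structured product data against the stated constraints, ranking what’s left, and stopping for explicit confirmation before checkout:

```python
# Hypothetical sketch only: the data, field names, and confirmation handoff
# are invented to illustrate the flow described above.
products = [
    {"name": "Trail Runner X", "price": 140, "rating": 4.6, "category": "running shoes"},
    {"name": "Road Flyer 2", "price": 120, "rating": 4.4, "category": "running shoes"},
    {"name": "Budget Dash", "price": 90, "rating": 3.9, "category": "running shoes"},
]
intent = {"category": "running shoes", "max_price": 150}

# Apply the explicit constraints, then rank the remaining options
candidates = [
    p for p in products
    if p["category"] == intent["category"] and p["price"] <= intent["max_price"]
]
candidates.sort(key=lambda p: p["rating"], reverse=True)
best = candidates[0]
print(f"Recommended: {best['name']} (${best['price']}, rated {best['rating']})")

# Explicit user confirmation before any checkout action
if input("Proceed to checkout? (y/n) ").strip().lower() == "y":
    print("Handing off to checkout with delegated, revocable authorization...")
else:
    print("No action taken.")
```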

Implications for consumers and professionals 

Reducing decision fatigue without removing choice is a clear benefit for consumers. For brands, this turns discovery into high-intent engagement rather than anonymous clicks with unclear attribution. 

Strategically, it shifts competition away from who shouts the loudest toward who provides the clearest and most trusted product signals to agents. These agents can act as trusted guides, offering consumers third-party verification that a merchant is as reliable as it claims to be.

2. Brand-owned AI assistants 

A brand deploys its own AI agent to answer questions, recommend products, and support customers using the brand’s data, tone, and business rules.

How it works 

The agent uses first-party information, such as product catalogs, policies, and FAQs. 

Guardrails define what it can say or do, preventing inferences that could lead to hallucinations. 

Responses are generated by retrieving and reasoning over approved context within the prompt.

Implications for consumers and professionals 

Customers get faster and more consistent responses. Brands retain voice, accountability, and ownership of the experience. 

Strategically, this allows companies to participate in the agentic web without ceding their identity to a platform or intermediary. It also enables participation in global commerce without relying on native speakers to verify language.

3. Autonomous task completion 

Users delegate outcomes rather than steps, such as “Prepare a weekly performance summary” or “Reorder inventory when stock is low.” 

How it works 

The agent breaks the goal into subtasks, determines which systems or tools are needed, and executes actions sequentially. It pauses when permissions or human approvals are required. 

These can be provided in bulk upfront or step by step. How this works ultimately depends on how the agent is built. 
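A toy sketch of that pattern (entirely hypothetical; real agent frameworks handle planning, tool selection, and permissions far more robustly) might look like this:

```python
# Toy sketch: delegate an outcome, execute subtasks in order, and pause
# for human approval only where it's required. Entirely hypothetical.
subtasks = [
    {"step": "Pull last week's performance data", "needs_approval": False},
    {"step": "Draft the weekly summary", "needs_approval": False},
    {"step": "Email the summary to stakeholders", "needs_approval": True},
]

for task in subtasks:
    if task["needs_approval"]:
        answer = input(f"Approve '{task['step']}'? (y/n) ").strip().lower()
        if answer != "y":
            print(f"Skipped: {task['step']}")
            continue
    print(f"Executing: {task['step']}")
```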

Implications for consumers and marketers 

We’re used to treating AI like interns, relying on micromanaged task lists and detailed prompts. As agents become more sophisticated, it becomes possible to treat them more like senior employees, oriented around outcomes and process improvement. 

That makes it reasonable to ask an agent to identify action items in email or send templates in your voice when active engagement isn’t required. Human choice comes down to how much you delegate to agents versus how much you ask them to assist.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026

4. Agent-to-agent coordination and negotiation 

Agents communicate with other agents on behalf of people or organizations, such as a buyer agent comparing offers with multiple seller agents. 

How it works 

Agents exchange structured information, including pricing, availability, and constraints. 

They apply predefined rules, such as budgets or policies, and surface recommended outcomes for human approval. 

Implications for consumers and marketers 

Consumers may see faster and more transparent comparisons without needing to manually negotiate or cross-check options. 

For professionals, this introduces new efficiencies in areas like procurement, media buying, or logistics, where structured negotiation can occur at scale while humans retain oversight.

5. Continuous optimization over time 

Agents don’t just act once. They improve as they observe outcomes.

How it works 

After each action, the agent evaluates what happened, such as engagement, conversion, or satisfaction. It updates its internal weighting and applies those learnings to future decisions.

Why people should care 

Consumers experience increasingly relevant interactions over time without repeatedly restating preferences. 

Professionals gain systems that improve continuously, shifting optimization from one-off efforts to long-term, adaptive performance. 

What are the pros and cons of the agentic web? 

Life is a series of choices, and leaning into or away from the agentic web comes with clear pros and cons.

Pros of leaning into the agentic web 

The strongest argument for leaning into the agentic web is behavioral. People have already been trained to prioritize convenience over process. 

Saved payment methods, password managers, autofill, and one-click checkout normalized the idea that software can complete tasks on your behalf once trust is established.

Agentic experiences follow the same trajectory. Rather than requiring users to manually navigate systems, they interpret intent and reduce the number of steps needed to reach an outcome. 

Cons of leaning into the agentic web 

Many brands will need to rethink how their content, data, and experiences are structured so they can be interpreted by automated systems and humans. What works for visual scanning or brand storytelling doesn’t always map cleanly to machine-readable signals.

There’s also a legitimate risk of overoptimization. Designing primarily for AI ingestion can unintentionally degrade human usability or accessibility if not handled carefully. 

Dig deeper: The enterprise blueprint for winning visibility in AI search

Pros of leaning away from the agentic web 

Choosing to lean away from the agentic web can offer clarity of stance. There’s a visible segment of users skeptical of AI-mediated experiences, whether due to privacy concerns, automation fatigue, or a loss of human control. 

Aligning with that perspective can strengthen trust with audiences who value deliberate, hands-on interaction.

Cons of leaning away from the agentic web 

If agentic interfaces become a primary way people discover information, compare options, or complete tasks, opting out entirely may limit visibility or participation. 

The longer an organization waits to adapt, the more expensive and disruptive that transition can become.

What’s notable across the ecosystem is that agentic systems are increasingly designed to sit on top of existing infrastructure rather than replace it outright. 

Avoiding engagement with these patterns may not be sustainable over time. If interaction norms shift and systems aren’t prepared, the combination of technical debt and lost opportunity may be harder to overcome later.

Where the agentic web stands today

The agentic web is still taking form, shaped largely by how people choose to use it. Some organizations are already applying agentic systems to reduce friction and improve outcomes. Others are waiting for stronger trust signals and clearer consent models.

Either approach is valid. What matters is understanding how agentic systems work, where they add value, and how emerging protocols are shaping participation. That understanding is the foundation for deciding when, where, and how to engage with the agentic web.

7 digital PR secrets behind strong SEO performance

3 February 2026 at 18:00

Digital PR is about to matter more than ever. Not because it’s fashionable, or because agencies have rebranded link building with a shinier label, but because the mechanics of search and discovery are changing. 

Brand mentions, earned media, and the wider PR ecosystem are now shaping how both search engines and large language models understand brands. That shift has serious implications for how SEO professionals should think about visibility, authority, and revenue.

At the same time, informational search traffic is shrinking. Fewer people are clicking through long blog posts written to target top-of-funnel keywords. 

The commercial value in search is consolidating around high-intent queries and the pages that serve them: product pages, category pages, and service pages. Digital PR sits right at the intersection of these changes.

What follows are seven practical, experience-led secrets that explain how digital PR actually works when it’s done well, and why it’s becoming one of the most important tools in SEOs’ toolkit.

Secret 1: Digital PR can be a direct sales activation channel

Digital PR is usually described as a link tactic, a brand play or, more recently, as a way to influence generative search and AI outputs.

All of that’s true. What’s often overlooked is that digital PR can also drive revenue directly.

When a brand appears in a relevant media publication, it’s effectively placing itself in front of buyers while they are already consuming related information.

This is not passive awareness. It’s targeted exposure during a moment of consideration.

Platforms like Google are exceptionally good at understanding user intent, interests and recency. Anyone who has looked at their Discover feed after researching a product category has seen this in action. 

Digital PR taps into the same behavioral reality. You are not broadcasting randomly. You are appearing where buyers already are.

Two things tend to happen when this is executed well.

  • If your site already ranks for a range of relevant queries, your brand gains additional recognition in nontransactional contexts. Readers see your name attached to a credible story or insight. That familiarity matters.
  • More importantly, that exposure drives brand search and direct clicks. Some readers click straight through from the article. Others search for your brand shortly after. In both cases, they enter your marketing funnel with a level of trust that generic search traffic rarely has.

This effect is driven by basic behavioral principles such as recency and familiarity. While it’s difficult to attribute cleanly in analytics, the commercial impact is very real. 

We see this most clearly in direct-to-consumer, finance, and health markets, where purchase cycles are active and intent is high.

Digital PR is not just about supporting sales. In the right conditions, it’s part of the sales engine.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

Secret 2: The mere exposure effect is one of digital PR’s biggest advantages

One of the most consistent patterns in successful digital PR campaigns is repetition.

When a brand appears again and again in relevant media coverage, tied to the same themes, categories, or areas of expertise, it builds familiarity. 

That familiarity turns into trust, and trust turns into preference. This is known as the mere exposure effect, and it’s fundamental to how brands grow.

In practice, this often happens through syndicated coverage. A strong story picked up by regional or vertical publications can lead to dozens of mentions across different outlets. 

Historically, many SEOs undervalued this type of coverage because the links were not always unique or powerful on their own.

That misses the point.

What this repetition creates is a dense web of co-occurrences. Your brand name repeatedly appears alongside specific topics, products, or problems. This influences how people perceive you, but it also influences how machines understand you.

For search engines and large language models alike, frequency and consistency of association matter. 

An always-on digital PR approach, rather than sporadic big hits, is one of the fastest ways to increase both human and algorithmic familiarity with a brand.

Secret 3: Big campaigns come with big risk, so diversification matters

Large, creative digital PR campaigns are attractive. They are impressive, they generate internal excitement, and they often win industry praise. The problem is that they also concentrate risk.

A single large campaign can succeed spectacularly, or it can fail quietly. From an SEO perspective, many widely celebrated campaigns underperform because they do not generate the links or mentions that actually move rankings.

This happens for a simple reason. What marketers like is not always what journalists need.

Journalists are under pressure to publish quickly, attract attention, and stay relevant to their audience. 

If a campaign is clever but difficult to translate into a story, it will struggle. If all your budget’s tied up in one idea, you have no fallback.

A diversified digital PR strategy spreads investment across multiple smaller campaigns, reactive opportunities, and steady background activity. 

This increases the likelihood of consistent coverage and reduces dependence on any single idea working perfectly.

In digital PR, reliability often beats brilliance.

Dig deeper: How to build search visibility before demand exists

Secret 4: The journalist is the customer

One of the most common mistakes in digital PR is forgetting who the gatekeeper is.

From a brand’s perspective, the goal might be links, mentions, or authority. 

From a journalist’s perspective, the goal is to write a story that interests readers and performs well. These goals overlap, but they are not the same.

The journalist decides whether your pitch lives or dies. In that sense, they are the customer.

Effective digital PR starts by understanding what makes a journalist’s job easier. 

That means providing clear angles, credible data, timely insights, and fast responses. Think about relevance before thinking about links.

When you help journalists do their job well, they reward you with exposure. 

That exposure carries weight in search engines and in the training data that informs AI systems. The exchange is simple: value for value.

Treat journalists as partners, not as distribution channels.

Secret 5: Product and category page links are where SEO value is created

Not all links are equal.

From an SEO standpoint, links to product, category, and core service pages are often far more valuable than links to blog content. Unfortunately, they are also the hardest links to acquire through traditional outreach.

This is where digital PR excels.

Because PR coverage is contextual and editorial, it allows links to be placed naturally within discussions of products, services, or markets. When done correctly, this directs authority to the pages that actually generate revenue.

As informational content becomes less central to organic traffic growth, this matters even more.

Ranking improvements on high-intent pages can have a disproportionate commercial impact.

A relatively small number of high-quality, relevant links can outperform a much larger volume of generic links pointed at top-of-funnel content.

Digital PR should be planned with these target pages in mind from the outset.

Dig deeper: How to make ecommerce product pages work in an AI-first world

Secret 6: Entity lifting is now a core outcome of digital PR

Search engines have long made it clear that context matters. The text surrounding a link, and the way a brand is described, help define what that brand represents.

This has become even more important with the rise of large language models. These systems process information in chunks, extracting meaning from surrounding text rather than relying solely on links.

When your brand is mentioned repeatedly in connection with specific topics, products, or expertise, it strengthens your position as an entity in that space. This is what’s often referred to as entity lifting.

The effect goes beyond individual pages. Brands see ranking improvements for terms and categories that were not directly targeted, simply because their overall authority has increased. 

At the same time, AI systems are more likely to reference and summarize brands that are consistently described as relevant sources.

Digital PR is one of the most scalable ways to build this kind of contextual understanding around a brand.

Secret 7: Authority comes from relevant sources and relevant sections

Former Google engineer Jun Wu discusses this in his book “The Beauty of Mathematics in Computer Science,” explaining that authority emerges from being recognized as a source within specific informational hubs. 

In practical terms, this means that where you are mentioned matters as much as how big the site is.

A link or mention from a highly relevant section of a large publication can be more valuable than a generic mention on the homepage. For example, a targeted subfolder on a major media site can carry strong authority, even if the domain as a whole covers many subjects.

Effective digital PR focuses on two things: 

  • Publications that are closely aligned with your industry.
  • Sections and subfolders that are tightly connected to the topic you want to be known for.

This is how authority is built in a way that search engines and AI systems both recognize.

Dig deeper: The new SEO imperative: Building your brand

Where digital PR now fits in SEO

Digital PR is no longer a supporting act to SEO. It’s becoming central to how brands are discovered, understood, and trusted.

As informational traffic declines and high-intent competition intensifies, the brands that win will be those that combine relevance, repetition, and authority across earned media. 

Digital PR, done properly, delivers all three.

Why most SEO failures are organizational, not technical

3 February 2026 at 17:00

I’ve spent over 20 years in companies where SEO sat in different corners of the organization – sometimes as a full-time role, other times as a consultant called in to “find what’s wrong.” Across those roles, the same pattern kept showing up.

The technical fix was rarely what unlocked performance. It revealed symptoms, but it almost never explained why progress stalled.

No governance

The real constraints showed up earlier, long before anyone read my weekly SEO reports. They lived in reporting lines, decision rights, hiring choices, and in what teams were allowed to change without asking permission. 

When SEO struggled, it was usually because nobody clearly owned the CMS templates, priorities conflicted across departments, or changes were made without anyone considering how they affected discoverability.

I did not have a word for the core problem at the time, but now I do – it’s governance, usually manifested by its absence.

Two workplaces in my career had the conditions that allowed SEO to work as intended. Ownership was clear.

Release pathways were predictable. Leaders understood that visibility was something you managed deliberately, not something you reacted to when traffic dipped.

Everywhere else, metadata and schema were not the limiting factor. Organizational behavior was.

Dig deeper: How to build an SEO-forward culture in enterprise organizations

Beware of drift

Once sales pressures dominate each quarter, even technically strong sites undergo small, reasonable changes:

  • Navigation renamed by a new UX hire.
  • Wording adjusted by a new hire on the content team.
  • Templates adjusted for a marketing campaign.
  • Titles “cleaned up” by someone outside the SEO loop.

None of these changes looks dangerous in isolation, provided you know about them before they occur.

Over time, they add up. Performance slides, and nobody can point to a single release or decision where things went wrong.

This is the part of SEO most industry commentary skips. Technical fixes are tangible and teachable. Organizational friction is not. Yet that friction is where SEO outcomes are decided, usually months before any visible decline.

SEO loses power when it lives in the wrong place

I’ve seen this drift hurt rankings, with SEO taking the blame. In one workplace, leadership brought in an agency to “fix” the problem, only for it to confirm what I’d already found: a lack of governance caused the decline.

Where SEO sits on the org chart determines whether you see decisions early or discover them after launch. It dictates whether changes ship in weeks or sit in the backlog for quarters.

I have worked with SEO embedded under marketing, product, IT, and broader omnichannel teams. Each placement created a different set of constraints.

When SEO sits too low, decisions that reshape visibility ship first and get reviewed later — if they are reviewed at all.

  • Engineering adjusted components to support a new security feature. In one workplace, a new firewall meant to stop scraping also blocked our own SEO crawling tools.
  • Product reorganized navigation to “simplify” the user journey. No one asked SEO how it would affect internal PageRank.
  • Marketing “refreshed” content to match a campaign. Each change shifted page purpose, internal linking, and consistency — the exact signals search engines and AI systems use to understand what a site is about.

Dig deeper: SEO stakeholders: Align teams and prove ROI like a pro

Positioning the SEO function

Without a seat at the right table, SEO becomes a cleanup function.

When one operational unit owns SEO, the work starts to reflect that unit’s incentives.

  • Under marketing, it becomes campaign-driven and short-term.
  • Under IT, it competes with infrastructure work and release stability.
  • Under product, it gets squeezed into roadmaps that prioritize features over discoverability.

The healthiest performance I’ve seen came from environments where SEO sat close enough to leadership to see decisions early, yet broad enough to coordinate with content, engineering, analytics, UX, and legal.

In one case, I was a high-priced consultant, and every recommendation was implemented. I haven’t repeated that experience since, but it made one thing clear: VP-level endorsement was critical. That client doubled organic traffic in eight months and tripled it over three years.

Unfortunately, the in-house SEO team is just another team that might not get the chance to excel. Placement is not everything, but it is the difference between influencing the decision and fixing the outcome.

Hiring mistakes

The second pattern that keeps showing up is hiring – and it surfaces long before any technical review.

Many SEO programs fail because organizations staff strategically important roles for execution, when what they really need is judgment and influence. This isn’t a talent shortage. It’s a screening problem.

The SEO manager often wears multiple hats, with SEO as a minor one. When they don’t understand SEO requirements, they become a liability, and the C-suite rarely sees it.

Across many engagements, I watched seasoned professionals passed over for younger candidates who interviewed well, knew the tool names, and sounded confident.

HR teams defaulted to “team fit” because it was easier to assess than a candidate’s ability to handle ambiguity, challenge bad decisions, or influence work across departments.

SEO excellence depends on lived experience. Not years on a résumé, but having seen the failure modes up close:

  • Migrations that wiped out templates.
  • Restructures that deleted category pages.
  • “Small” navigation changes that collapsed internal linking.

Those experiences build judgment. Judgment is what prevents repeat mistakes. Often, that expertise is hard to put in a résumé.

Without SEO domain literacy, hiring becomes theater. But we can’t blame HR, which has to hire people for all parts of the business. Its only expertise is HR.

Governance needs to step in.

One of the most reliable ways to improve recruitment outcomes is simple: let the SEO leader control the shortlist.

Fit still matters. Competence matters first. When the person accountable for results shapes the hiring funnel, the best candidates are chosen.

SEO roles require the ability to change decisions, not just diagnose problems. That skill does not show up in a résumé keyword scan.

Dig deeper: The top 5 strategic SEO mistakes enterprises make (and how to avoid them)

When priorities pull in different directions

Every department in a large organization has legitimate goals.

  • Product wants momentum.
  • Engineering wants predictable releases.
  • Marketing wants campaign impact.
  • Legal wants risk reduction.

Each team can justify its decisions – and SEO still absorbs the cost.

I have seen simple structural improvements delayed because engineering was focused on a different initiative.

At one workplace, I was asked how much sales would increase if my changes were implemented.

I have seen content refreshed for branding reasons that weakened high-converting pages. Each decision made sense locally. Collectively, they reshaped the site in ways nobody fully anticipated.

Today, we face an added risk: AI systems now evaluate content for synthesis. When content changes materially, an LLM may stop citing us as an authority on that topic.

Strong visibility governance can prevent that.

The organizations that struggled most weren’t the ones with conflict. They were the ones that failed to make trade-offs explicit.

What are we giving up in visibility to gain speed, consistency, or safety? When that question is never asked, SEO degrades quietly.

What improved outcomes was not a tool. It was governance: shared expectations and decision rights.

When teams understood how their work affected discoverability, alignment followed naturally. SEO stopped being the team that said “no” and became the function that clarified consequences.

International SEO improves when teams stop shipping locally good changes that are globally damaging. Local SEO improves when there is a single source of location truth.

Ownership gaps

Many SEO problems trace back to ownership gaps that only become visible once performance declines.

  • Who owns the CMS templates?
  • Who defines metadata standards?
  • Who maintains structured data? Who approves content changes?

When these questions have no clear answer, decisions stall or happen inconsistently. The site evolves through convenience rather than intent.

In contrast, the healthiest organizations I worked with shared one trait: clarity.

People knew which decisions they owned and which ones required coordination. They did not rely on committees or heavy documentation because escalation paths were already understood.

When ownership is clear, decisions move. When ownership is fragmented, even straightforward SEO work becomes difficult.

Dig deeper: How to win SEO allies and influence the brand guardians

Healthy environments for SEO to succeed

Across my career, the strongest results came from environments where SEO had:

  • Early involvement in upcoming changes.
  • Predictable collaboration with engineering.
  • Visibility into product goals.
  • Clear authority over content standards.
  • Stable templates and definitions.
  • A reliable escalation path when priorities conflicted.
  • Leaders who understood visibility as a long-term asset.

These organizations were not perfect. They were coherent.

People understood why consistency mattered. SEO was not a reactive service. It was part of the infrastructure.

What leaders can do now

If you lead SEO inside a complex organization, the most effective improvements come from small, deliberate shifts in how decisions get made:

  • Place SEO where it can see and influence decisions early.
  • Let SEO leaders – not HR – shape candidate shortlists.
  • Hire for judgment and influence, not presentation.
  • Create predictable access to product, engineering, content, analytics, and legal.
  • Stabilize page purpose and structural definitions.
  • Make the impact of changes visible before they ship.

These shifts do not require new software. They require decision clarity, discipline, and follow-through.

Visibility is an organizational outcome

SEO succeeds when an organization can make and enforce consistent decisions about how it presents itself. Technical work matters, but it can’t offset structures pulling in different directions.

The strongest SEO results I’ve seen came from teams that focused less on isolated optimizations and more on creating conditions where good decisions could survive change. That’s visibility governance.

When SEO performance falters, the most durable fixes usually start inside the organization.

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress

Human experience optimization: Why experience now shapes search visibility

2 February 2026 at 19:00

SEO has historically been an exercise in reverse-engineering algorithms. Keywords, links, technical compliance, repeat.

But that model is being reimagined. 

Today, visibility is earned through trust, usefulness, and experience, not just relevance signals or crawlability.

Search engines no longer evaluate pages in isolation. They observe how people interact with brands over time.

That shift has given rise to human experience optimization (HXO): the practice of optimizing how humans experience, trust, and act on your brand across search, content, product, and conversion touchpoints.

Rather than replacing SEO, HXO expands its scope to reflect how search now evaluates performance. Experience, engagement, and credibility have become difficult to separate from visibility itself.

Below, we’ll look at how HXO shows up in modern search, why it matters now, and how it reshapes the boundaries between SEO, UX, and conversion.

Why HXO matters now

Modern search engines reward outcomes, not tactics.

Ranking signals increasingly reflect what happens after the click, aligning with Google’s emphasis on user satisfaction over isolated page signals.

In practice, that means signals tied to questions like:

  • Do users engage or bounce?
  • Do they return?
  • Do they recognize the brand later?
  • Do they trust the information enough to act on it?

Visibility today is influenced by three overlapping forces:

  • User behavior signals: Engagement, satisfaction, repeat visits, and downstream actions all indicate whether content actually delivers value.
  • Brand signals: Recognition, authority, and trust – built over time, across channels – shape how search engines interpret credibility.
  • Content authenticity and experience: Pages that feel generic, automated, or disconnected from real expertise increasingly struggle to perform.

HXO emerges as a response to two compounding pressures:

  • AI-generated content saturation, which has made “good enough” content abundant and undifferentiated.
  • Declining marginal returns from traditional SEO tactics, especially when they aren’t supported by strong experience and brand coherence.

In short, optimization that ignores human experience is no longer competitive.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

The convergence: SEO, UX, and CRO are no longer separate

For a long time, SEO, UX, and CRO operated as separate disciplines:

  • SEO focused on traffic acquisition.
  • UX focused on usability and design.
  • CRO focused on conversion efficiency.

But that separation no longer works. 

Traffic alone doesn’t mean much if users don’t engage. Engagement without a clear path to action limits impact. And conversion is difficult to scale when trust hasn’t been established.

HXO now acts as the unifying layer:

  • SEO still determines how people arrive.
  • UX shapes whether they understand what they have found.
  • CRO influences whether that understanding turns into action.

That convergence is increasingly visible in how search-driven experiences perform. 

Page experience affects both visibility and post-click behavior. Search intent informs page structure and UX decisions alongside keyword targeting. Content clarity and credibility influence whether users engage once or return through search again.

In this environment, optimization is less about securing a single click. It’s about supporting attention and trust over time.

E-E-A-T is a business system, not content guidelines

One of the most persistent misconceptions in search is that E-E-A-T – experience, expertise, authoritativeness, and trustworthiness – can just be “added” to content.

Add an author bio. Add citations. Add credentials.

Those elements do matter. They help provide context and communicate expertise. But treating E-E-A-T primarily as a set of small, on-page additions doesn’t fully capture how search systems evaluate expertise and trust.

In practice, E-E-A-T isn’t just about how one page is formatted. It’s a broader, more holistic view of how a business demonstrates credibility to users over time. That tends to be an output of: 

  • Real expertise embedded in products and services.
  • Transparent operations and clearly stated values.
  • A consistent brand voice with visible accountability.
  • Clear ownership over ideas, opinions, and outcomes.

Search engines aren’t evaluating content in isolation. They’re evaluating the context around it, too.

Per Google’s Search Quality Rater Guidelines, that includes: 

  • Who is responsible for creating the content and whether that responsibility is clearly disclosed.
  • The demonstrated experience and reputation of the creator or organization behind it.
  • Consistency in expertise and accuracy across related content on the site.
  • Evidence of ongoing trust, including transparency, content updates, and accountability for accuracy.

Viewed this way, E-E-A-T is reinforced through consistent systems and patterns, not isolated page-level changes.

First-hand experience signals are the new differentiator

Today’s search landscape is flooded with competent, well-structured content that meets a similar baseline of accuracy and readability. “Good” content is no longer a meaningful bar.

Because of that, first-hand experience is becoming an increasingly important content differentiator. That can look like:

  • Original data, testing, or research generated by the creator.
  • Lived experience paired with a clear point of view.
  • Named creators with reputational stakes in what they publish.
  • Insight that reflects direct involvement, not secondhand synthesis.

There’s a meaningful difference between:

  • Information aggregation (what anyone could compile).
  • Experience-based insight (what only operators, practitioners, and creators can provide).

For example, a guide to subscription pricing that summarizes common models may be factually sound. But a piece written by someone who’s priced, tested, and revised subscription tiers over time is more likely to surface tradeoffs, edge cases, and decision logic.

That’s something aggregation can’t replicate.

This is why we’re seeing creators and operators increasingly outperform faceless brands. Within the world of human experience optimization, the “human” part is key.

Dig deeper: 4 SEO tips to elevate the user experience

Helpful content is a brand problem, not an SEO problem

“Helpful content updates” are often discussed as if performance issues stem from technical gaps or tactical mistakes.

In practice, when content fails to be helpful, the underlying causes tend to sit elsewhere.

Common patterns include:

  • A brand that lacks clarity about what it stands for or who it serves.
  • A business that avoids taking clear positions or making decisions visible.
  • An experience that feels fragmented across pages, channels, or touchpoints.

In contrast, content that users consistently find helpful usually reflects deeper alignment. It tends to emerge from:

  • A clear understanding of audience needs and decision contexts.
  • Real-world problem solving informed by actual experience.
  • Consistent intent across messaging, products, and interactions.

SEO can improve discoverability and structure, but it can’t compensate for unclear positioning or disconnected experiences. When helpfulness is missing, the issue is rarely confined to the page itself.

That view lines up with how Google described its helpful content system, which looks at broader site-level patterns and long-term value rather than isolated pages or tactics.

Closing these gaps requires a broader view of how people experience, trust, and engage with a brand beyond any single page. HXO provides a framework for that shift.

How to start practicing human experience optimization

Human experience optimization doesn’t begin with keywords. It begins with people and the situations that lead them to search in the first place.

In practice, adopting HXO usually involves a few shifts in focus:

1. Move from keyword strategy to audience strategy

Keyword research remains useful, but it’s rarely sufficient on its own. 

Teams need a clearer understanding of motivations, anxieties, and decision contexts, not just what terms people type into a search bar.

2. Audit experience, not just pages

Page-level audits often miss the broader experience users actually encounter. A more useful lens looks at:

  • Trust signals and credibility cues.
  • Clarity of message and intent.
  • Friction in user journeys.
  • Consistency across touchpoints and channels.

3. Align teams around experience outcomes

HXO tends to surface gaps between functions that operate independently. Addressing those gaps requires coordination across:

  • Marketing.
  • Product.
  • Content.
  • Design.

The goal isn’t alignment for its own sake, but shared responsibility for how users experience the brand.

4. Measure what actually matters

Traditional metrics still have a place, but they don’t tell the full story. Teams practicing HXO often expand measurement to include:

  • Engagement quality rather than raw volume.
  • Brand recall and recognition.
  • Return users over time.
  • Conversions driven by confidence and trust rather than pressure.

Optimize for humans, earn the algorithms

HXO isn’t a tactic to deploy or a framework to layer on. It reflects a longer-term advantage rooted in how consistently a brand shows up for users.

In modern search, the brands that perform most reliably tend to share a few traits:

  • They’re grounded in real experience.
  • They’re consistently useful.
  • They demonstrate expertise through action, not just explanation.

As a result, search visibility can’t be engineered through isolated optimizations. It’s shaped by the cumulative experiences people have with a brand before, during, and after a search interaction.

Ads in ChatGPT: Why behavior matters more than targeting

2 February 2026 at 18:00

Ads are now being tested in ChatGPT in the U.S., appearing for some users across different account types. For the first time, advertising is entering an AI answer environment – and that changes the rules for marketers.

We’ve used AI as part of ad creation or planning for years across Google, LinkedIn, and paid social. But placing ads inside an AI system that people trust to help them think, decide, and act is fundamentally different. This is not just another channel to plug into an existing media plan.

The biggest question is not targeting. It’s psychology. If advertisers simply replicate what works in search or social, performance will disappoint, and trust may suffer.

To succeed, brands need to understand how and why people use ChatGPT in the first place and what that means for attention, relevance, and the customer journey.

ChatGPT is a task environment, not a feed

People open ChatGPT to do something. That might be:

  • Solving a specific problem.
  • Refining a shortlist.
  • Planning a trip.
  • Writing something.
  • Making sense of a complex decision. 

This is very different from feed-based platforms, where people expect to scroll, be interrupted, and discover content passively.

In task-based environments like ChatGPT, behavior changes:

  • Goal shielding: Attention narrows to completing the task, filtering out anything that does not help progress.
  • Interruption aversion: Unexpected distractions feel more irritating when someone is focused.
  • Tunnel focus: Users prioritize clarity, speed, and momentum over exploration.

This is why clicks are likely to be harder to earn than many advertisers expect. If an ad does not help the user move forward with what they are trying to achieve, it will feel irrelevant, even if it is topically related.

Add to this the fact that trust in AI environments is still forming, and the tolerance for poor or interruptive advertising becomes even lower.

Dig deeper: OpenAI moves on ChatGPT ads with impression-based launch

When there are no search volumes, behavior becomes the strategy

For years, search volume has shaped how we plan.

Keywords told us what people wanted, how often they wanted it, and how competitive demand was. That logic underpinned both SEO and paid media strategy.

ChatGPT changes that.

People are not searching for keywords. They are outsourcing thinking. They describe situations, ask layered questions, and seek outcomes rather than information alone.

There is no query data to optimize against. Instead, success depends on understanding:

  • What job the user is trying to get done.
  • Which part of the journey they are choosing to outsource to AI.
  • What kind of help they need in that moment.

This is where behavioral insight replaces keyword demand as the strategic foundation.

From keyword intent to behavior mode targeting

Rather than planning around queries, advertisers need to plan around behavior modes, the mindset a user is in when they turn to ChatGPT. 

A useful way to think about this is:

  • Explore mode: The user is shaping a perspective or seeking inspiration. Ads that work here help people start, offering ideas, options, or reframing the problem.
  • Reduce mode: The user is simplifying and narrowing choices. Effective ads reduce effort by clarifying differences and highlighting relevant trade-offs.
  • Confirm mode: The user is looking for reassurance. This is where trust matters most: proof, reviews, guarantees, and credible signals.
  • Act mode: The user wants to complete the task. Ads that remove friction perform best: clear pricing, availability, delivery, and next steps.

These modes closely mirror the human drivers we already recognize in search behavior: shaping perspective, informing, reassuring, and simplifying.

The difference is that ChatGPT compresses these moments into a single interface.

Dig deeper: What AI means for paid media, user behavior, and brand visibility

In ChatGPT, relevance is functional, not topical

A key shift advertisers need to internalize is that relevance in ChatGPT is not about being related. It is about being useful.

An ad can be perfectly aligned to a category and still fail if it does not help the user complete their task.

In a task environment, anything that creates extra work or pulls attention away from the goal feels like friction. This means the creative rules change.

High-performing ads are likely to behave less like traditional advertising and more like:

  • Tools.
  • Templates.
  • Guides.
  • Checklists.
  • Shortcuts.
  • Decision aids.

They fit into the flow of what the user is doing.

Generic brand ads, pure awareness messaging, and content that feels like a detour are likely to underperform.

Dig deeper: Your ads are dying: How to spot and stop creative fatigue before it tanks performance

Helpful content becomes the bridge across channels

The same assets that make a strong ChatGPT ad – practical guides, frameworks, calculators, explainers, and reassurance-led content – also do much more than support paid performance. 

They build authority for SEO and generative optimization, earn coverage and credibility through digital PR, and reinforce brand trust across social and owned channels.

This is where silos start to break performance.

Paid media teams cannot create “helpful ads” in isolation if SEO teams are working on authority, PR teams are building trust signals, and brand teams are shaping voice independently. In AI-led discovery, these signals converge.

The most effective ads may borrow from:

  • Brand voice for clarity and consistency.
  • Trusted voice through reviews, experts, or third-party validation.
  • Amplified voice via media coverage and recognizable authority.

The line between advertising, content, and credibility becomes increasingly blurred.

Measurement needs a reset

Judging ChatGPT ads purely on click-through rate risks missing their real impact.

In many cases, these ads may influence decisions without triggering an immediate click. They may help a brand enter a shortlist, feel safer, or be remembered when the user returns later through another channel.

More meaningful indicators may include:

  • Shortlist inclusion.
  • Brand recall.
  • Assisted conversions.
  • Branded search uplift.
  • Direct traffic uplift.
  • Downstream conversion lift.
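
Some of these indicators can be approximated with a small script. As one hedged illustration, the sketch below estimates branded search uplift from a Search Console queries export; the file names, the Query and Clicks column names, and the brand terms are placeholder assumptions you would swap for your own:

import pandas as pd

BRAND_TERMS = ["acme", "acme software"]  # placeholder brand variants, not real data

def branded_clicks(path: str) -> tuple[int, int]:
    # Assumption: a queries export with "Query" and "Clicks" columns.
    df = pd.read_csv(path)
    is_branded = df["Query"].str.lower().str.contains("|".join(BRAND_TERMS), na=False)
    return int(df.loc[is_branded, "Clicks"].sum()), int(df["Clicks"].sum())

before_branded, before_total = branded_clicks("queries_before.csv")
after_branded, after_total = branded_clicks("queries_after.csv")

print(f"Branded share before: {before_branded / before_total:.1%}")
print(f"Branded share after:  {after_branded / after_total:.1%}")

Because these signals sit in different tools and teams, even a small check like this usually needs input from more than one owner.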

This reinforces the need for teams to work more closely together. If performance is distributed across the journey, measurement and accountability must be too.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

The brands that win will understand behavior best

This is not simply a new ad format. We are looking at a behavioral shift.

The brands most likely to succeed will not be the ones that move fastest or spend the most. They will be the ones who understand:

  • What people actually use ChatGPT for.
  • Which moments of the journey are being outsourced to AI.
  • How to support those moments without breaking trust.

A practical starting point is returning to jobs-to-be-done thinking. Map the actions that happen before someone buys, inquires, or commits and identify where AI reduces effort, uncertainty, or complexity.

From there, the question shifts from “how do we advertise here?” to something more powerful:

How can we be genuinely helpful at the moment it matters?

That mindset will not only shape performance in ChatGPT, but across the wider future of AI-led discovery. And in that world, behavioral intent will matter far more than keywords ever did.

Advanced ways to use competitive research in SEO and AEO

2 February 2026 at 17:00

Competitive research is a gold mine of insights in the world of organic discovery. Clients always love seeing insights about how they stack up against their rivals, and the insights are very easily translated into a multi-dimensional roadmap for getting traction on essential topics.

If you haven’t already done this, 2026 needs to be the year when you add answer engine optimization (AEO) competitive research into your organic strategy – I’ll use this acronym interchangeably with AI search – and not just because your executives or clients are clamoring for it (although I’m guessing they are).

This article breaks down the distinct roles of SEO and AEO competitive research, the tools used for each, and how to turn those insights into clear, actionable next steps.

SEO competitive research benefits vs. AEO competitive research benefits

Traditional SEO research is great for content planning and development that helps you address specific keywords, but that’s far from the whole organic picture in 2026.

Combined, SEO and AI competitive research can give you a clear strategy for positioning and messaging, content development, content reformatting, and even product marketing roadmapping. 

Let’s start with the tried-and-true tools of traditional SEO research. They excel at: 

  • Demand capture.
  • Keyword-driven intent mapping.
  • Late-funnel and transactional discovery.

A few years ago, pre-ChatGPT and the competitors that followed, SEO research was the foundation of your organic strategy. Today, those tools are a vital piece of organic strategy, but the emergence of AI search has shifted much of the focus away from traditional SEO. 

Now, SEO research should be used to:

  • Support AI visibility strategies.
  • Validate demand, not define strategy.
  • Identify content gaps that feed AI systems, not just SERPs.

AEO tools cover very different parts of the customer journey. These include:

  • Demand shaping.
  • Brand framing and recommendation bias.
  • Early- and mid-funnel decision influence.

AEO tools operate before the click, often replacing multiple SERP visits with a single synthesized answer. They offer a new type of research that’s a blend of voice-of-customer, competitive positioning, and market perception. That helps them deliver tremendous competitive insights into: 

  • Category leadership. 
  • Challenger brand visibility. 
  • Competitive positioning at the moment opinions are formed.

Let’s break this down a little further. Organic search experts can use insights from AI search tools to:

  • Identify feature expectations users assume are table stakes.
  • Spot emerging alternatives before they show up in keyword tools.
  • Understand where top products are or are not visible for relevant queries in key large language models (LLMs).
  • Understand why users are advised not to choose certain products.
  • Validate whether your product roadmap aligns with how the market is being explained to users.

Dig deeper: How to use competitive audits for AI SERP optimization

SEO vs. AEO research tools

Aside from adding AEO functionality (leaders here are Semrush and Ahrefs), SEO research tools essentially function in much the way they did a few years ago. Those tools, and their uses, include:

Ahrefs

Ahrefs is a great source of info for, among other things: 

  • Search traffic.
  • Paid traffic.
  • Trends over time.
  • Search engine ranking for keywords.
  • Topics and categories your competitors are writing content for.
  • Top pages.

I also like to use Ahrefs for a couple of more advanced initiatives: 

  • High-level batch analysis provides a fast overview of backlinks for any list of URLs you enter. This can give you ideas about outreach – or content written strategically to appeal to these outlets – for your backlinks strategy. 
  • Reverse-engineering a competitor’s FAQs allows you to see potentially important topics to address with your brand’s differentiators in mind.
    • To do this, go to Ahrefs’ Site Explorer, drop in a competitor domain, and then click on the Organic Keywords report. 
    • From here, you’ll want to filter out non-question keywords. The result is a good list of questions from actual users in your industry. You can then use these to tailor your content to meet potential customer needs.
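
If you prefer to do that filtering outside the tool, a short script can pull question-style keywords out of an exported report. This is a minimal sketch assuming you have saved the Organic Keywords report as a CSV with a "Keyword" column; the file and column names are assumptions to adjust for your own export:

import pandas as pd

# Assumption: an Organic Keywords export saved as CSV with a "Keyword" column.
df = pd.read_csv("competitor_organic_keywords.csv")

QUESTION_WORDS = {"who", "what", "why", "how", "when", "where",
                  "which", "can", "does", "do", "is", "are", "should"}

def is_question(keyword: str) -> bool:
    # Keep keywords that start with a question word or end with a question mark.
    kw = str(keyword).strip().lower()
    return kw.endswith("?") or kw.split(" ")[0] in QUESTION_WORDS

questions = df[df["Keyword"].apply(is_question)]
questions.to_csv("competitor_question_keywords.csv", index=False)
print(f"Kept {len(questions)} question-style keywords out of {len(df)}.")

The resulting list can then be grouped into FAQ topics and mapped against your brand’s differentiators.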

Dig deeper: Link intent: How to combine great content with strategic outreach

BuzzSumo

BuzzSumo sends you alerts about where your competitors receive links from their public relations and outreach efforts. 

This is the same idea as the batch analysis, but it’s more real-time and gives you good insights into your competitors’ current priorities.

Semrush

Semrush is a super-useful tool for competitive research. 

You can use the domain versus domain tool to see what keywords competitors rank for with associated metrics. You can get insights on competitor keywords, ad copy, organic and paid listings, etc. 

Armed with all of this research, a fun content maneuver I like to suggest to clients is “[Client] vs. [Competitor]” pieces of content, particularly once they have some differentiators fleshed out to play up in their content. 

With this angle, I’ve gotten some great first-page rankings and reached users with buying intent.

Using their brand name might not always get you to rank above your competitor. Still, if you’re a challenger taking on bigger brands, it’s a good way to borrow their brand equity.

On the AEO side, I love tools with a heavy measurement component, but I also make a point of digging into the actual LLMs themselves, like ChatGPT and Google AI Mode, to combine reporting tools with source data.

This is similar to how my team has always approached traditional SEO research, which balances qualitative tools with extensive manual analysis of the actual SERPs.

The tools I recommend for heavy use are:

Profound

Profound is the most purpose-built AEO platform I’m using today. It focuses on how brands and competitors appear inside AI-generated answers, not just whether they rank in classic SERPs. Its insights help users: 

  • See which brands are cited or referenced in LLM answers for category-level and comparison queries.
  • Identify patterns in how competitors’ content is framed (e.g., default recommendation, alternative, warning, etc.). 
  • Understand which sources LLMs trust (e.g., documentation, reviews, forums, owned content).
  • Track share of voice within AI answers, not just blue links.

All of these insights help to move competitive research from the simple question of “who ranks” to the more important question of “who is recommended and why.”

Ahrefs

Ahrefs remains a foundational tool for traditional SEO research, but its insights primarily reflect what ranks, not what gets synthesized or cited by AI systems.

They have, however, built in some new AI brand tracking tools worth exploring.

ChatGPT

ChatGPT is invaluable as a qualitative competitive research layer. I use it to: 

  • Simulate how users phrase early-stage and exploratory questions.
  • Compare how different competitors are summarized when asked things like: “What’s the best alternative to X?” or “Who should use X vs. Y?” 
  • Identify language, positioning, and feature emphases that consistently show up across responses. 
  • Test messaging.
  • Compare narratives with competitors.
  • Identify where your brand’s positioning is unclear or has gaps.

Google AI Mode

This tool is the clearest signal we have today of how AI Overviews will impact demand capture. It provides insight into: 

  • Which competitors are surfaced before any traditional ranking is visible. 
  • What sources Google synthesizes to build its answers.
  • How informational, commercial, and navigational queries blend. (This is especially important for mid-funnel queries where users previously clicked multiple results but now receive a single synthesized answer.)

Reddit Pro

This resource combines traditional community research with AI-era discovery. 

Because Reddit content is disproportionately represented in AI answers, this has become a first-class competitive intelligence source, not just a qualitative one. It helps to surface: 

  • High-signal conversations frequently referenced by LLMs. 
  • Common objections, alternatives, and feature gaps discussed by real users.
  • Language that actually resonates with people – insight that often differs from keyword-driven copy.

Dig deeper: How to use advanced SEO competitor analysis to accelerate rankings & boost visibility

How to take action on your organic competitive research insights

Presenting competitive insights to clients or management teams in a digestible package is a good start (and may make its way up to the executive team for strategic planning). 

But where the rubber really meets the road is when you can make strong recommendations for how to use the insights you’ve gathered. 

Aim for takeaways like:

  • “[Competitor] is great at [X], so I suggest we target [Y].”
  • “[Competitor] is less popular with [audience], which would likely engage with content on [topic].”
  • “[Competitor] is dominating AI search on topics I should own, so I recommend developing or refining our positioning and building a specific content strategy.”
  • “I’ve built a matrix showing the competitor product pages that draw more visibility in LLMs than our top-selling products. I recommend we focus on making those product pages more digestible for AI search and tracking progress. If we get traction, I recommend we identify the next tranche of product pages to optimize and proceed.” 

Ultimately, your clients or teammates should be able to use your insights to understand the market and align with you on priorities for initiatives to expand their footprint in both traditional and AI search. 

The in-house vs. agency debate misses the real paid media problem by Focus Pocus Media

2 February 2026 at 16:00

For years, conversations about paid media have revolved around one question: should companies build in-house teams or outsource to agencies?

That debate makes sense, but it misses the real issue. The problem isn’t where paid media sits in the org chart. It’s how performance leadership is structured.

Many companies run Google Ads and other paid channels with capable teams, solid budgets, and documented best practices. Campaigns are live. Dashboards are full. Optimizations happen on schedule. Yet:

  • Results stall. 
  • Pipelines flatten. 
  • Budgets get questioned. 
  • Confidence in paid advertising erodes.

This is rarely a talent issue. It’s usually a structural one.

The plateau most in-house teams eventually hit

Across dozens of B2B paid media accounts, from SaaS to service businesses spending five figures a month, we see the same pattern.

Performance does not collapse overnight. It slows gradually.

Campaigns keep running. Costs look stable. Leads still come in. But growth stalls. Leadership sees motion without insight. Decisions turn reactive. Paid media shifts from a growth engine to a cost center that has to defend its existence.

The gap isn’t effort or execution. Over time, strategy narrows when teams work in isolation.

Why ‘more headcount’ rarely fixes the problem

When performance stalls, the default response is to hire. A new specialist. A channel owner. A more senior role.

Extra resources can ease the workload, but headcount alone rarely fixes the real problem. 

In in-house teams, three challenges are consistent:

1. Tracking and leadership visibility

Leadership teams often lack a clear, shared view of how paid media drives pipeline and revenue. The data exists, but it’s scattered across disconnected platforms, tools, and dashboards. 

Without strong integrations, even well-run campaigns operate with weak feedback loops, limiting how much they can improve.

2. Structure and skill ceiling

Many teams try to follow proven best practices. The issue isn’t intent. It’s context. What works for one company or growth stage can be ineffective, or even harmful, for another. 

Without external benchmarks or fresh perspectives, teams struggle to see what actually applies to their business.

3. Lack of systematic testing

Day-to-day execution eats up available capacity. Teams focus on keeping things stable instead of pushing performance forward. Testing starts to feel risky, even though real gains usually come from the few experiments that work.

Over time, this creates the illusion of optimization: steady activity without meaningful progress.

The same mistake happens before ads ever launch

These structural issues don’t just affect companies already running paid media. They often show up earlier, before the first campaigns even launch.

In many B2B organizations, paid advertising enters the picture when growth from outbound sales, partnerships, or organic channels starts to slow. 

Budgets roll out cautiously. Execution gets delegated. Results are expected to emerge from platform defaults.

What’s usually missing is strategic ownership:

  • Clear definitions of success that go beyond surface-level metrics.
  • Tracking that ties spend to pipeline, not just lead volume.
  • A testing roadmap aligned with revenue goals.

Without this foundation, early results disappoint. Budgets get cut. Confidence fades. Paid media gets labeled ineffective before it has a real chance to work.

Ironically, this early phase is where external perspective can deliver the greatest long-term impact. It’s also when companies are least likely to seek it.

The structural advantage of outsourced performance leadership

Outsourcing is often framed as a way to cut costs or add execution power. In reality, its biggest advantage is perspective.

External performance teams work across many accounts, industries, and growth stages. They:

  • Spot patterns earlier. 
  • Know when platform recommendations favor spend growth over business outcomes. 
  • Question assumptions internal teams may have stopped challenging.

That outside view matters most in areas like tracking architecture, platform integrations, and account structure, where partial best-practice adoption can quietly erode performance.

A common scenario looks like this: 

  • Teams follow platform guidance but leave underlying martech gaps unresolved. 
  • Systems don’t talk to each other. 
  • Optimization signals weaken. 
  • Budget efficiency drops, even though campaigns appear fully compliant.

When outsourcing actually works — and when it doesn’t

Outsourcing isn’t a cure-all. It breaks down when companies expect external partners to fix performance in isolation, or when strategy and execution live in separate worlds.

It works best as a hybrid model:

  • Internal teams own execution and business context.
  • External experts bring strategic direction, structural resets, and ongoing challenge.

In this setup, partners don’t replace teams. They raise the bar.

That’s why a specialized Google Ads agency creates the most value when the goal isn’t just running campaigns, but turning paid media back into a predictable, scalable growth lever.

A smarter model: External strategy, internal execution

High-performing organizations are increasingly separating strategy from execution volume.

They bring in outside expertise not because something is broken, but because they want:

  • Objective assessments of performance and structure.
  • Stronger attribution and tracking foundations.
  • Disciplined experimentation frameworks.
  • Clear accountability at the leadership level.

This approach builds momentum before budgets get cut, not after results decline. It also helps leadership understand why paid media performs the way it does, restoring confidence in the channel.

What high-performing companies do differently

Organizations that avoid long plateaus tend to:

  • Treat paid media as a system, not a standalone channel.
  • Invest early in clear tracking and strong integrations.
  • Invite external challenge before performance slips.
  • Accept that most tests will fail, knowing the few wins will compound.

In this context, outsourcing isn’t about cost efficiency. It’s about preserving strategic sharpness as platforms and markets evolve.

Final thought

The in-house versus outsourced debate obscures a deeper issue: who owns performance direction, and how often is it challenged?

As paid media platforms automate and evolve, the companies that sustain growth aren’t the ones with the biggest teams. They’re the ones with the clearest perspective.

7 custom GPT ideas to automate SEO workflows

30 January 2026 at 18:00

Custom GPTs can help SEO teams move faster by turning repeatable tasks into structured workflows.

If you don’t have access to paid ChatGPT, you can still use these prompts as standalone references by copying them into your notes for future reuse. You will need to tweak them for your team’s specific use cases, because they are intended as a starting point.

Working with AI is largely trial and error. To get better at writing prompts, practice with small tasks first, iterate on prompts, and take notes on what gets you good outputs. 

AI also tends to ramble, so it helps to give strict guidelines for formatting and to specify what not to do. You can upload resources and articles to follow and provide clear context, such as defining the role and audience upfront.

The seven prompts below are designed to help you start building custom GPTs for planning, analysis, and ongoing SEO work.

1. Project plan GPT

Using past examples of project plans, create a GPT that will help you make a draft for this year’s focus areas.

How to set it up

  • Input project plans from previous years.
  • Give it a specific format to follow.
  • Consider how many items or sections to include.
  • Add specific details based on you or your team.
  • (Optional) Copy notes and feedback from your team or retrospective.

Example prompt

Based on last year’s project plan, make my project plan for this year. Here are the focus areas and problem areas to include.

Give me a bulleted list with the three most important items for me (or my team) to focus on for each quarter of this year. At least one item should cover link building.

Include a one-sentence summary of why you recommend each item and at least two KPIs to measure success.

[Insert last year’s plan.]

Now poke holes in your plan. Give me three reasons I should not focus on these items based on the risks. Include sources for your notes.

Dig deeper: How to use ChatGPT Tasks for SEO

2. Site performance GPT

Hook up your performance dashboards or custom GA reports to ChatGPT and let it do the initial legwork in identifying issues. Then make a list of items to investigate yourself.

How to set it up

  • Connect your reporting tools or upload reports directly.
  • Give specific direction for what to look for.
  • Include the cadence you want to look at, like a daily or weekly report.
  • Give examples of types of pages or categories to compare.

Example prompt

Here is the weekly site report. Give me your analysis of how the site performed compared to last week. Include a three-sentence summary of the sessions, conversions, and engagement.

List three wins and three misses in bullet format. Color-code each item based on how good or bad it is.

[Insert report doc.]
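
If you want to pre-digest the numbers before uploading them, a short script can compute the week-over-week deltas the prompt asks the model to summarize. This is a hedged sketch assuming two exported CSVs with hypothetical sessions, conversions, and engagement_rate columns; rename them to match your own report:

import pandas as pd

# Assumption: two weekly exports with these hypothetical column names.
AGGREGATIONS = {"sessions": "sum", "conversions": "sum", "engagement_rate": "mean"}

this_week = pd.read_csv("site_report_this_week.csv")
last_week = pd.read_csv("site_report_last_week.csv")

rows = []
for metric, agg in AGGREGATIONS.items():
    current = getattr(this_week[metric], agg)()
    previous = getattr(last_week[metric], agg)()
    change = (current - previous) / previous * 100 if previous else float("nan")
    rows.append({"metric": metric, "this_week": round(current, 2),
                 "last_week": round(previous, 2), "change_pct": round(change, 1)})

print(pd.DataFrame(rows).to_string(index=False))

Pasting this table alongside the full report gives the GPT real deltas to describe instead of asking it to re-derive the math.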

3. Competitor analysis GPT

Check out what’s working and what’s not on competitor sites and get insights for yours. It’s most helpful to connect to a tool like Semrush or Ahrefs. 

How to set it up

  • Connect tools like Ahrefs or Semrush, or upload a report.
  • Identify competitors to analyze and top pages and folders.
  • List key metrics to compare.
  • Set up unique prompts for page, keyword/topic, folder, and domain-level comparison.
  • (Optional) Create documentation on identifying which metrics to dig deeper into.

Example prompt

You are an SEO analyst performing competitor analysis to identify areas to improve your website. Check out these URLs and compare them. Give me a table with each URL in the rows and these columns: backlinks, average rank, top keyword, sessions, and estimated value.

Below that, give a two-sentence summary of who wins in each category and why. Use the criteria in this link to make your judgments, citing sources for each.

URL 1: 
URL 2: 
URL 3: 
Article reference:

Dig deeper: How to use advanced SEO competitor analysis to accelerate rankings & boost visibility

4. SERP analyzer GPT

AI has gotten much better over the last few months at analyzing images. Plug in SERP screenshots from your own searches and compare them to a web search from the GPT. Build this into a competitive SERP landscape analysis to see things like who appears in both searches vs. only one.

How to set it up

  • Identify search results and keywords to compare.
  • Take screenshots in incognito mode for comparison.

Example prompt

Do a web search for [your keyword here]. Show me what you are seeing in the search results.

Compare it with this screenshot and list the differences. Then include a bulleted list of what the results seen most often have in common.
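
To quantify the overlap once you have both result sets, even a few lines of code are enough. The domains below are made-up examples; replace them with what you actually saw in your screenshot and in the GPT’s web search:

# Hypothetical domains noted from your own SERP screenshot vs. the GPT's web search.
your_serp = {"example.com", "competitor-a.com", "competitor-b.com", "reddit.com"}
gpt_search = {"example.com", "competitor-b.com", "wikipedia.org"}

print("In both:", sorted(your_serp & gpt_search))
print("Only in your SERP:", sorted(your_serp - gpt_search))
print("Only in the GPT search:", sorted(gpt_search - your_serp))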

Dig deeper: How to build a custom GPT to redefine keyword research

5. UX GPT

Turn your design or UX team’s resources into an easy-to-use helper. This is especially helpful for editorial teams that do not want to search through endless documentation for quick advice.

How to set it up

  • Upload your team’s documentation or your favorite UX articles.
  • Find pages with poor bounce or engagement stats.
  • Integrate the tool into standard page updates.

Example prompt

You are an SEO writer working on improving user engagement. Open this page. Check to make sure it follows all of our design rules.

List each violation, along with a source, explaining what is wrong and what to do instead. Then check to see whether there are any relevant page template patterns from the brand book that could apply to this type of page.

6. Tech SEO check GPT

Set up a daily or weekly tech SEO check to do the bulk of the analysis for you. 

How to set it up

  • Connect any tools like Google Search Console, or upload reports.
  • List the top metrics to check, like Core Web Vitals, page speed, and console errors.
  • Identify top pages to run a more comprehensive check.
  • Set up reminders to run it daily or weekly, or connect it to Slack to export results directly.

Example prompt

Based on the latest CWV report, identify problem pages that need a speed improvement audit. Create the list in a table, with the URLs in rows and columns for speed, issues identified, and suggested fixes. Make a separate list of pages that have improved, along with the actual scores.
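
If the GPT cannot connect to your tools directly, you can generate the CWV input yourself and upload it. The sketch below queries the public Chrome UX Report (CrUX) API for a handful of URLs and collects p75 field values; it assumes you have created a CrUX API key, that the URLs have enough field data to return a record, and that the key, URL list, and output file name are placeholders to replace:

import requests
import pandas as pd

API_KEY = "YOUR_CRUX_API_KEY"  # assumption: a CrUX API key from Google Cloud
ENDPOINT = f"https://chromeuxreport.googleapis.com/v1/records:queryRecord?key={API_KEY}"
URLS = ["https://www.example.com/", "https://www.example.com/pricing"]

rows = []
for url in URLS:
    resp = requests.post(ENDPOINT, json={"url": url, "formFactor": "PHONE"})
    if resp.status_code != 200:
        rows.append({"url": url, "note": f"no CrUX data (HTTP {resp.status_code})"})
        continue
    metrics = resp.json().get("record", {}).get("metrics", {})
    rows.append({
        "url": url,
        "lcp_p75_ms": metrics.get("largest_contentful_paint", {}).get("percentiles", {}).get("p75"),
        "inp_p75_ms": metrics.get("interaction_to_next_paint", {}).get("percentiles", {}).get("p75"),
        "cls_p75": metrics.get("cumulative_layout_shift", {}).get("percentiles", {}).get("p75"),
    })

report = pd.DataFrame(rows)
report.to_csv("cwv_p75_report.csv", index=False)
print(report.to_string(index=False))

The saved CSV can then be uploaded with the example prompt above so the GPT focuses on flagging problem pages rather than collecting the data.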

Dig deeper: A technical SEO blueprint for GEO: Optimize for AI-powered search

7. Presentation GPT

While ChatGPT cannot directly create slides yet without an add-on or third-party connector, it can create the content for you to paste into your slides. Combine it with your performance, testing, tech SEO, and competitor GPTs for a well-rounded summary of overall site status with relevant context.

How to set it up

  • Gather data from your other GPTs.
  • Choose the ones to present.
  • (Optional) Upload past presentations for reference.

Example prompt

Pretend you are setting up a slide deck. The audience is other members of the SEO team. Format this summary from my Performance GPT into a slide.

Give me a header, subheader, and key bullets and takeaways. The tone should be straightforward but professional. Limit bullets to one line. Round all numbers to zero decimals. Suggest three examples of imagery and graphics to use.

[Insert summary.]

Dig deeper: How to balance speed and credibility in AI-assisted content creation

Where custom GPTs fit into day-to-day SEO work

Custom GPTs are most useful when they sit alongside the tools and processes SEO teams already use. Rather than replacing dashboards, audits, or documentation, they can handle first passes, surface patterns, and standardize how work gets reviewed before a human steps in.

Used this way, the prompts in this article are less about automation for its own sake and more about reducing friction in common SEO tasks, from planning and reporting to SERP analysis and technical checks.

Is SEO a brand channel or a performance channel? Now it’s both

30 January 2026 at 17:00

For a long time, SEO had the simplest math in marketing:

  • Rank higher → Get more traffic → Fill the sales pipeline

To the dissatisfaction of marketing executives, that linear world is breaking fast.

Between AI Overviews, zero-click SERPs, and users getting answers directly from LLMs, the old “rank to get traffic and leads” equation is failing. 

Today, holding a top keyword position often yields significantly fewer clicks than it did just two years ago.

This has forced many uncomfortable conversations in boardrooms. CMOs and CEOs are looking at traffic dashboards and asking tough questions, especially:

  • “If traffic is down… how do we know SEO is actually working?”

The answer forces us to confront a hard truth: The traffic model has collapsed, but executives still want measurable ROI. 

We have to stop treating SEO like a traffic faucet and start treating it like what it actually is: a brand-dependent performance channel.

Why traffic and pipeline are no longer in lockstep

Linear attribution has never fully captured the reality of organic search. 

ChatGPT is not replacing Google; rather, it is expanding its use. 

And that’s because users are skeptical of search and LLM results, so they need to validate the information they find on both platforms. 

In the past, the research loop happened inside Google’s ecosystem (clicking back and forth between results).

Today, organic search behaves like a pinball machine. Buyers bounce across channels and interfaces in ways that traditional attribution software cannot track. 

A user might find an answer in an AI Overview, verify it on Reddit, check a competitor comparison on G2, and finally convert days later via a direct visit.

This complexity has broken the correlation marketing executives are hungry for. 

In the past, if you overlaid traffic and pipeline charts, the lines moved together. Now, they often diverge.

Across B2B SaaS portfolios, I am seeing a consistent pattern:

  • Organic sessions are flat or declining year over year.
  • Rankings for high-intent terms remain stable.
  • Pipeline and inbound demos from organic search are going up.

(Chart: Traffic flat, revenue up.)

Dig deeper: How to explain flat traffic when SEO is actually working

This divergence doesn’t mean SEO is failing. It means that traffic is no longer a reliable proxy for business impact.

The traffic being lost to zero-click searches is often informational and low-intent. The remaining traffic is higher-intent and closer to conversion. 

We are witnessing the “atomization” of search demand. 

As Kevin Indig notes in his analysis of The Great Decoupling, demand for short-head, broad keywords is in permanent decline. 

Users are either bypassing search entirely for AI interfaces, or they are refining their queries into specific, long-tail questions that have lower volume but significantly higher intent.

The “fat head” of search – the generic terms that used to drive massive vanity traffic – is being eaten by AI. The long tail is where the pipeline lives.

The mistake many leaders make is seeing the sessions drop and instinctively pushing to “get the numbers back up.” 

But chasing lost clicks usually leads to publishing broad, top-of-funnel content that inflates session counts (and other vanity metrics) without actually driving qualified leads.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

SEO ROI is now the downstream outcome of brand traction

This is where the debate between “brand” and “performance” breaks down.

For a decade, SEO masqueraded as a pure performance channel. 

We convinced ourselves that if we just optimized the H1s and built enough backlinks, we could rank for anything. 

We treated brand awareness as a nice bonus, but not a prerequisite.

In reality, SEO has always been downstream of brand. AI interfaces are simply exposing that truth.

The rise of LLM-based search has flipped the script. These engines don’t just match keywords to pages; they synthesize reputation.

When an LLM constructs an answer, it is looking for verification across the entire web:

  • What do actual customers say on G2 and Reddit?
  • Is the brand cited in expert, non-affiliate content?
  • Is the product mentioned alongside category leaders?

You cannot brute-force these outcomes via SEO techniques.

If your brand lacks digital authority, no amount of technical optimization will save you. That is why I call this brand-conditioned performance.

It means that your brand strength sets the ceiling for your organic performance. You can no longer out-optimize a weak reputation. 

The search engines are looking for consensus across the web, and if the market doesn’t already associate your brand with the solution, the algorithm won’t recommend you.

So, what does brand strength actually mean to an LLM? In this new environment, brand strength is composed of four specific signals:

  • Topical authority: Do you own the complete conceptual map of your industry, or just a few disconnected keywords?
  • Ideal customer profile (ICP) alignment: Are you answering the specific, messy questions your actual buyers ask, or just publishing generic definitions?
  • Validation: Are you cited by the category-defining sources that LLMs use as training data?
  • Positioning clarity: Can an AI clearly summarize exactly what you do? As Indig points out, “Vague positioning gets skipped; sharp positioning gets cited.”

Bottom line: SEO doesn’t create demand out of thin air. It captures the demand your brand has already validated. 

Dig deeper: The new SEO imperative: Building your brand

The new defensibility metrics for SEO

When traffic stops being the headline KPI, leadership still needs proof that SEO is working. 

The strongest teams are pivoting to defensible signals that track revenue and reputation rather than just volume.

We need to anchor on metrics that prove business impact, even if top-of-funnel sessions are leaking:

  • Top-10 rankings for commercial and BOFU keywords remain stable. (You hold the ground where money changes hands).
  • Ahrefs traffic value increases, even if sessions decline. (You are trading high-volume informational traffic for high-value commercial traffic).
  • Product, solution, and comparison page traffic stabilizes. (Buyers are still finding your money pages).
  • Homepage traffic grows YoY. (The strongest proxy for brand demand).
  • LLM referral traffic emerges and accelerates. (The newest frontier. Tracking referral sources from ChatGPT, Gemini, or Perplexity indicates that you are part of the new conversation, even if the volume is currently low.)
  • Inbound demos and pipeline from organic search grow relative to traffic.

That last point is the one that changes executive thinking.

When you show that pipeline per organic visitor is rising – even as sessions fall – the conversation shifts from “SEO is broken” to “SEO is evolving.”
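
That ratio is easy to put in front of leadership. A minimal sketch, assuming a monthly CSV with hypothetical month, organic_sessions, and pipeline_value columns pulled from analytics and the CRM:

import pandas as pd

# Assumption: a monthly export with hypothetical month, organic_sessions, pipeline_value columns.
df = pd.read_csv("organic_monthly.csv")

df["pipeline_per_visitor"] = df["pipeline_value"] / df["organic_sessions"]
df["sessions_mom_pct"] = df["organic_sessions"].pct_change() * 100
df["ppv_mom_pct"] = df["pipeline_per_visitor"].pct_change() * 100

print(df[["month", "organic_sessions", "pipeline_per_visitor",
          "sessions_mom_pct", "ppv_mom_pct"]].round(2).to_string(index=False))

A table where sessions trend down while pipeline per visitor trends up makes the “evolving, not broken” argument concrete.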

Dig deeper: Why AI availability is the new battleground for brands

Modern SEO is moving from acquisition to influence

The most successful SEO teams are no longer asking, “How do we get the traffic back?”

They understand that the game has changed from acquisition to influence. 

They are asking:

  • How does our brand show up for buying questions?
  • How do we dominate consideration-stage queries?
  • How do we turn organic visibility into real buying influence?

They recognize that in an AI-first world, zero-click does not mean zero-value.

If a user sees your brand ranked first in an AI Overview, reads a snippet that positions you as the expert, and remembers you when they are ready to buy – SEO did its job.

SEO is no longer a hack for cheap traffic; it is the primary way brands condition the market to buy.

When search performance improves but pipeline doesn’t

29 January 2026 at 19:00

Many search teams are seeing better rankings, more visibility, increased traffic, and more leads.

Yet feedback on pipeline, revenue, and sales outcomes isn’t showing the same positive results.

When SEO KPIs are green and graphs are up and to the right, business outcomes don’t always reflect the same success.

Why strong search performance doesn’t translate to business outcomes

Search performance can look healthy on the surface while breaking down in places search teams don’t own or fully see.

It’s tempting to turn immediately to attribution models, data quality, or KPI definitions. 

Ultimately, the issue is often how performance breaks down after the click – in areas search teams don’t own.

While search work has become easier to scale with automation, software, established workflows, and frameworks, execution doesn’t equal understanding or deeper control. 

This challenge has existed for more than 20 years and can be magnified by scale.

Stopping analysis too early, or keeping it too shallow, limits understanding of performance in the broader context of the business or brand.

In larger organizations, silos widen the gap. When CRM and sales aren’t tightly integrated with search, teams operate independently, with no one owning the full journey.

Pressure from leadership can intensify the problem. 

When results look good but fail to deliver at the bottom line, the lack of clarity becomes uncomfortable for everyone. This dynamic isn’t new, but it’s becoming more pronounced.

To help address these disconnects, here are five breakpoints to focus on.

1. Intent misalignment

Intent is what search teams focus on when shaping the content, topics, and focus used to attract target audiences through search. That’s a given. 

It doesn’t always match or map to deeper factors such as buying stage, urgency, or alignment with internal sales expectations at a given moment or season.

If traffic is qualified by topic, keyword, or other search criteria, even when intent is aligned with the best available research and data, a prospect’s sales readiness and stage can still be missing or difficult to quantify.

Analyzing what problem the searcher believed they were solving, and how closely that aligns with how sales positions the offering, can help close the gap between search and sales.

That, in turn, allows teams to question whether they are optimizing for demand, curiosity, or another aspect of how someone enters the customer journey.

Dig deeper: How to explain flat traffic when SEO is actually working

2. Conversion friction

When leads driven by search convert on the website but don’t ultimately become clients or customers, it can create an uncomfortable situation, especially when sales has strong opinions about those conversions.

There are many reasons for this friction. Technically, the leads pass the criteria outlined and agreed on within the organization or with an agency. 

Problems often exist silently in another gap, sometimes categorized as conversion rate optimization or tied to brand, product development, or related areas. But that is often a distraction.

When teams drill into lead specifics and qualification, the issues often come down to generic forms, CTAs that are not tightly aligned, or unclear next steps between form submission and an actual conversation.

Conversions do not equal customers, or even a commitment to the sales process.

Key questions center on the promise made in the search results, the website content the visitor consumed, and whether the landing page and site journey fulfilled the visitor’s intended goal.

Most importantly, when evaluating performance, teams need to ask what signal a conversion actually sends to the organization, versus what the prospect intended.

Dig deeper: 6 SEO tests to help improve traffic, engagement, and conversions

3. Lead qualification gaps

Whether or not your organization (agency or in-house) uses lead scoring and qualification, ensuring that marketing-qualified leads are sales-ready is critical in a lead-focused business.

This article is not intended to delve deeply into the differences between marketing-qualified and sales-qualified leads or into all the nuances involved. 

However, the challenge cannot be overstated when teams lack shared understanding and definitions.

That includes scoring models, definitions of what qualifies as “qualified,” who agreed to those definitions, and what happens when sales rejects leads.

This may not be comfortable territory to navigate. 

But reaching standard definitions and qualification criteria can be some of the most helpful and meaningful work teams do, because it helps prove the value of search.

Dig deeper: How to monitor your website’s performance and SEO metrics

4. Sales handoff and follow-up

Of the points I’m sharing, this is the one that tends to hit the hardest and may be the most challenging. 

That’s because you may be a C-level executive, manager, agency partner, or otherwise oversee or be directly involved in the marketing-to-sales handoff.

We are adversaries, friends, and colleagues. I’m not here to revisit the fundamentals of marketing versus sales. But I’m here to challenge you.

Speed, messaging, and context matter. This is not just about getting a form in front of someone as quickly as possible and whether they fill it out. 

Substance and detail matter. Getting the right prospect with the right context, carried through from how they searched and found you, is critical.

Yes, this is harder when analyzing customer journeys that involve LLMs and other sources, but that doesn’t mean teams can’t or shouldn’t try to understand that behavior.

When a disconnect appears in this category, teams should push to understand whether sales knows why the lead came in, how quickly follow-up happened, and whether the messaging aligns with the original intent. These are key areas that help teams tune or adjust their strategies.

Dig deeper: 9 things to do when SEO is great but sales and leads are terrible

5. Measurement blind spots

Sometimes everything appears to be in place. 

Analytics shows conversions and search leads qualify, but there is no movement when reviewing CRM results. 

Whether attribution becomes messy, impatience sets in, gray areas emerge, or other factors are at play, blind spots can form.

This often leads teams to default to their own metrics. 

No one wins when KPIs are not shared or when there is no single source of truth and trust.

When visibility stops and ownership of “connecting the dots” is unclear, challenges emerge regardless of function, team, or leadership role. 

Decisions then get made without full context.

Dig deeper: Measuring what matters in a post-SEO world

The cost of not knowing what’s working

I’m not writing this article to be hard on search marketing leaders or practitioners. This is not a failure of search.

If any of the challenges described here feel familiar, you are not alone, and solving them will likely require cross-functional work.

Marketing leaders do not need perfection when it comes to attribution or search efforts. That is not realistic. What is needed instead are better questions, shared definitions, and clear ownership.

The biggest danger is not when performance drops, but when performance is strong and no one knows with confidence why.

Scaling always involves risk, and teams should not scale efforts without conviction or a clear understanding of that risk. 

Ultimately, the goal is for search work to build credibility, confidence, and influence beyond deep expertise in search engines and large language models tied to visibility.
