Yesterday — 11 February 2026 · Search Engine Land

Why video is the canonical source of truth for AI and your brand’s best defense

11 February 2026 at 22:00

The Wild West of web scraping is changing, due in large part to OpenAI’s deal with Disney. The deal allows OpenAI to train on high-fidelity, human-verified cinematic content – intended to combat AI slop fatigue. 

This is how most of us feel when dealing with AI slop. Video production by Impolite. 

This deal opens up new opportunities to reinforce your brand’s visibility and recall. AI models are hungry for high-quality data, and this shift turns video into an essential asset for your brand.

Here’s a breakdown of why video is the new source of truth for AI and how you can use it to protect your brand’s identity.

How AI brand drift happens

When a large language model’s training set lacks data on a specific brand, the LLM doesn’t admit that it doesn’t know. Instead, it interpolates, filling the gaps in your brand’s story. It makes guesses about your brand identity based on patterns from similar brands or general industry information. 

This interpolation can lead to brand drift. Here’s what it looks like when an AI model narrates an inaccurate version of your business.

Say you represent a SaaS company. A user asks ChatGPT about one of your product’s features. But the model doesn’t have information about that specific feature.

So, the model constructs elaborate setup instructions, pricing tiers, and integration requirements for the phantom feature.

This has surfaced for companies like Streamer.bot, where users regularly arrive with confidently wrong instructions generated by ChatGPT – forcing teams to correct misinformation that the product never published. 

A Streamer.bot team member describing how AI-generated setup instructions regularly misrepresent product behavior, creating confusion and additional support burden.

AI brand drift happens to local businesses, too. As one restaurant owner told Futurism, Google AI Overviews repeatedly shared false information about both specials and menu items.

To correct brand drift and prevent AI from distorting your brand message, your company must provide a canonical source of truth.

Video as a source of truth

By producing authoritative videos (e.g., a demo that explicitly clarifies pricing), you provide strong semantic information through the transcript and visual proof. The video becomes the canonical source of truth that makes things clear, overriding opinions from Reddit and other sources.

In contrast, plain text is low-entropy. A statement like “50% off” reads identically whether it was written in 2015 or 2025. Text often lacks a timestamp anchoring it to reality, making it easy for AI systems to misdate it or strip away its real-world context.

To fix this, you need a medium with more data packed into every second. A five-minute video at 60 frames per second contains 18,000 frames of visual evidence, a nuanced audio track, and a text transcript.

Video enables LLMs to capture non-verbal, high-fidelity cues, creating a validation layer that preserves the visual evidence often flattened or lost in written content.

Creative studios like Berlin-based Impolite specialize in high-production-value video that carries the chaotic, non-repetitive entropy AI systems use to verify authenticity. The studio’s work for global brands serves as the kind of high-density data source that prevents brand drift.

For example, Karman’s “The Space That Makes Us Human” project is a masterclass in creating a canonical source of truth, using high-fidelity, expert-led video to anchor brand identity.

Dig deeper: How to optimize video for AI-powered search

Authenticity as a signal

As deepfakes proliferate, authenticity is shifting from a vague moral concept to a hard technical signal. Search engines and AI agents need a way to verify provenance.

Is this video real? Is it from the brand it claims to be?

For AI models, real-world human footage is the ultimate high-trust data source. It provides physical evidence, such as a person speaking, a product in motion, or a specific location. In contrast, AI-generated video often lacks the chaotic, non-repetitive entropy of real-world light and physics. 

The Coalition for Content Provenance and Authenticity (C2PA) is developing a new provenance standard to verify authenticity. The organization, which includes members such as Google, Adobe, Microsoft, and OpenAI, provides the technical specifications that enable this data to be cryptographically verifiable.

At the same time, the Content Authenticity Initiative (CAI), spearheaded by Adobe, drives the adoption of open-source tools for digital transparency.

Together, the two organizations go beyond simple watermarking. They allow brands to sign videos the moment they begin recording, providing a signal that AI models can prioritize over unverified noise.

How media verification works: From lens to screen

Ever notice that tiny “CR” mark in the corner of certain media on LinkedIn? This label stands for content credentials. It appears on images and videos to indicate their origin and whether the creator used AI to produce or edit them. 

When you click or hover over the “CR” icon on a LinkedIn post, a sidebar or pop-up appears that shows:

  • The creator: The name of the person or organization that produced the media
  • The tools used: Which software (e.g., Adobe Photoshop) the creator used to edit or generate the media
  • AI disclosure: A specific note if the content was generated with AI
  • The process: A history of edits made to the file to ensure the image hasn’t been deceptively altered
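
To make that concrete, here is a minimal sketch of the kind of record a content credential carries. The field names below are simplified, illustrative stand-ins, not the actual C2PA manifest format:

```typescript
// Illustrative sketch only: these fields are simplified stand-ins for
// what a C2PA manifest records, not the actual specification format.
interface ContentCredential {
  creator: string;       // person or organization that produced the media
  toolsUsed: string[];   // e.g., ["Adobe Photoshop"]
  aiGenerated: boolean;  // the AI disclosure flag
  editHistory: string[]; // ordered log of edits applied to the file
  signature: string;     // cryptographic signature over the fields above
}

const credential: ContentCredential = {
  creator: "Example Brand GmbH",
  toolsUsed: ["Adobe Premiere Pro"],
  aiGenerated: false,
  editHistory: ["trim", "color grade"],
  signature: "<base64-encoded signature>",
};

console.log(credential);
```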

Some creators are already looking to circumvent the icon and have shared tips for hiding the tag.

While some deride this as LinkedIn shaming, the label’s presence signals authority. It’s also gaining traction.

Google has begun integrating C2PA signals into search and ads to help enforce policies regarding misrepresentation and AI disclosure. The search giant has also updated its documentation to explain how C2PA metadata is handled in Google Images.

Dig deeper: The SEO shift you can’t ignore: Video is becoming source material


How verified media maintains its integrity

For content marketers, adopting C2PA is a defensive moat against misinformation and a proactive signal of quality.

If a bad actor deepfakes your CEO, the absence of your corporate cryptographic signature acts as a silent alarm. Platforms and AI agents will immediately detect that the content lacks a verified origin seal and de-prioritize it in favor of authenticated assets.

Here’s how it works in practice.

1. Capture: The hardware root of trust

Select Sony cameras use the brand’s camera authenticity solution to embed digital signatures in real time, using keys held in a secure hardware chipset. Alongside the C2PA manifest, Sony captures 3D depth data to verify that a real three-dimensional subject was filmed, not a 2D screen or projection.

Similarly, select Qualcomm products support a cryptographic seal that proves a photo’s authenticity. Apps like Truepic and ProofMode can also sign footage on standard devices.

2. Edit: The editorial ledger

C2PA-aware software, such as Adobe Premiere Pro, integrates content credentials. This allows brands to embed a manifest listing the creator, edits, and software.

Think of it as a content ledger. Content credentials act as a digital paper trail, logging every hand that touches the file:

  • When an editor exports a video, the software preserves the original camera signature and appends a manifest of every cut and color grade.
  • If generative AI tools are used, relevant frames are tagged as AI-generated, preserving the integrity of the remaining human-verified footage.

3. Verify: Tamper-proof evidence in action

If the content is altered outside of a C2PA-compliant tool, the cryptographic link is severed.

When an AI model performs an evidence-weighting calculation to decide which information to show a user, it will see this broken signature.
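
Here is a toy sketch of why the severed link is detectable. Real C2PA manifests use certificate-based signatures rather than bare hashes, but the chain-of-custody logic is similar: each step must reference the one before it, so any out-of-band edit breaks the chain:

```typescript
import { createHash } from "node:crypto";

// Toy hash chain standing in for C2PA's signed manifest chain.
// Real C2PA uses certificate-based cryptographic signatures; this
// sketch only shows why an out-of-band edit severs the link.
interface ManifestEntry {
  action: string;   // e.g., "capture", "color grade"
  prevHash: string; // hash of the previous entry in the chain
}

const hash = (entry: ManifestEntry): string =>
  createHash("sha256").update(JSON.stringify(entry)).digest("hex");

function verifyChain(entries: ManifestEntry[]): boolean {
  for (let i = 1; i < entries.length; i++) {
    // Each entry must reference the hash of the one before it.
    if (entries[i].prevHash !== hash(entries[i - 1])) return false;
  }
  return true;
}

const capture: ManifestEntry = { action: "capture", prevHash: "" };
const edit: ManifestEntry = { action: "color grade", prevHash: hash(capture) };
console.log(verifyChain([capture, edit])); // true: chain intact

// An edit made outside a C2PA-aware tool never updates the chain:
const tampered: ManifestEntry = { action: "face swap", prevHash: "???" };
console.log(verifyChain([capture, tampered])); // false: link severed
```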

Dig deeper: How to dominate video-driven SERPs

The expert content workflow

Information overload is now constant. Traditional gatekeepers are struggling because AI generates content faster than humans can verify it. As authenticity becomes scarce online, audiences increasingly seek it out, striving to distinguish signal from noise.

From LLMs to search engines like Google, AI systems struggle with the same challenge. Verified subject matter experts (SMEs) are emerging as critical differentiators and as guarantors of credibility and relevance.

An SME is a human anchor point of credibility for both humans and machines. When brands pair expertise with verifiable video documentation, they create something AI can’t replicate: authentic authority that audiences can see, hear, and trust.

Why expert video should be the source material 

Content repurposing engine

A video transcript of an expert explaining a complex topic often captures colloquial, nuanced details that polished, static blog posts miss. Here’s how to use expert-led videos as the starting point of your content flywheel: 

  • Text stream: Extract the transcript to create authoritative, long-form blogs, FAQs, and social captions. This provides the semantic foundation for text-based retrieval.
  • Visual stream: Pull high-quality frames for infographics and thumbnails. This provides visual proof that anchors the text.
  • Audio stream: Repurpose the audio for podcast distribution, capturing your expert’s tonal authority.
  • Discovery stream: Cut vertical TikTok and YouTube clips. These act as entry points that lead AI agents back to your canonical source.

By repurposing a single high-density video asset across these formats, you create a self-reinforcing loop of authority.

This increases the probability that an AI model will encounter and index your brand’s expertise in the format that the model prefers. For example, Gemini might index the video, while Perplexity might index the transcript.

It doesn’t have to be fancy, as this clip from Search with Sean shows.

What to look out for

Before you hit record, identify where your brand is most vulnerable to AI drift. To maximize your surface area for AI retrieval, work through these steps:

  • Identify the gap: Where is AI hallucinating elements of your story? Find the topics where your brand voice is missing or being misrepresented by outdated Reddit posts or competitor noise.
  • Anchor with verified experts: Use real people with verifiable credentials. AI agents now cross-reference experts against LinkedIn data and professional knowledge graphs to weigh the authority of the content.
  • Preserve the nuance: Marketing and legal departments often strip nuance from blog posts, making them generic. Video preserves the colloquial, detailed explanations that signal true expertise.

Here’s a concrete example, mapped with Semrush’s Brand Control Quadrant framework.

Dig deeper: The future of SEO content is video – here’s why

Context still beats compliance

With infinite, low-cost AI slop cropping up, it’s going to get harder and harder to fight deepfakes. But it’s harder for an AI to hallucinate a real physical event than a sentence.

The most valuable asset a brand owns is its verifiable expertise. By anchoring your brand in expert-led, multimodal video, you ensure that your identity remains consistent, protected, and prioritized.

A clear hierarchy of data is emerging: high-fidelity, cryptographically signed video is the premium currency. For everyone else, the mandate is simple: record reality. If you don’t provide a signed, high-density video record of your business, the AI will hallucinate one for you.


Generative Engine Optimization: The Patterns Behind AI Visibility

11 February 2026 at 21:02
What is generative engine optimization (GEO)?

Generative engine optimization (GEO) is the practice of positioning your brand and content so that AI platforms like Google AI Overviews, ChatGPT, and Perplexity cite, recommend, or mention you when users search for answers.

If that sounds abstract, the results aren’t.

For Tally, a bootstrapped form builder, ChatGPT became the #1 referral source.

They’re not alone. Across industries, the shift is already measurable.

ChatGPT reaches over 800 million weekly users. Google’s Gemini app has surpassed 750 million monthly users. And AI Overviews are appearing in at least 16% of all searches (significantly higher for comparison and high-intent queries). 

The question isn’t whether AI is changing discovery. It’s whether your brand is showing up when it happens.

So GEO is real. But is it stable enough to invest in seriously?

That’s a fair question. 

When we tracked 2,500 prompts across Google AI Mode and ChatGPT through the Semrush AI Visibility Index, the first thing we noticed was volatility. 

Between 40 and 60% of cited sources change from month to month.

But underneath the variances, patterns emerged. 

The brands showing up consistently shared specific structural characteristics. Entity clarity, content extractability, and multi-platform presence made them easier for AI systems to find, trust, and reference.

In this guide, I’ll share what we’ve found about what GEO requires, how it differs from SEO, and the framework for increasing your visibility in AI-driven discovery.

What GEO Looks Like in Practice

GEO helps your brand appear in AI-generated answers.

For example, when someone asks an AI tool, “What is the best whey protein powder for a mom in her 50s?”, the response typically evaluates brands and recommends options based on ingredients, reviews, and credibility signals.

If your content or brand is included in that response, it’s an example of GEO in action.

Getting there requires coordinated effort across several areas:

  • Content strategy: Publishing information that AI systems can discover, understand, and extract for answers
  • Brand presence: Establishing your authority across platforms where AI tools pull information (not just your website)
  • Technical optimization: Ensuring AI crawlers can access and process your content
  • Reputation building: Earning mentions and associations that signal credibility to AI systems

These activities overlap with traditional SEO, but the emphasis shifts.

How GEO Differs from Traditional SEO

GEO builds on the same SEO fundamentals you already use. But it shifts the focus from rankings and clicks to how your brand is mentioned and cited inside AI-generated answers.

Here’s a snapshot of some key differences between GEO and traditional SEO:

What changes | Traditional SEO | GEO
Primary goal | Rank in top search positions | Be referenced or mentioned in AI answers
Success metrics | Rankings, clicks, traffic | Citations, mentions, share of voice
How users find you | Click through to your site | AI includes you in generated responses
Key platforms | Google, Bing | Google AI Overviews and AI Mode, ChatGPT, Perplexity
How you optimize content | Title tags, keywords, site speed, content quality | Self-contained paragraphs, clear facts, structured data
How you build credibility | Backlinks, author credentials, reviews, domain authority | Positive mentions across trusted platforms and communities

Use this table to update your mental model. 

Traditional SEO fundamentals still matter. We’re just adapting how we apply them as AI systems change how people discover information.

Now, let’s break down what this means in practice.

What Stays the Same

The core principles behind effective SEO still apply to GEO.

You still need to publish high-quality, authoritative content for real users. Your site still needs to be technically accessible. You still need credible signals of trust and expertise. And you still need to understand user intent and deliver clear value.

AI systems tend to reference content that is authoritative, well-structured, and easy to interpret. Those are the same qualities that support strong SEO performance. 

If you already have a solid SEO foundation, GEO builds on it rather than replacing it.


Further reading: SEO vs. GEO, AEO, LLMO: What Marketers Need to Know


What Changes

Where GEO diverges is in how that foundation is applied.

1. Where You Need Presence

Traditional SEO focuses primarily on your owned properties, i.e., your website and blog.

GEO benefits from strategic presence across platforms where AI tools discover information, including:

  • Reddit threads where your target audience asks questions
  • YouTube videos demonstrating your expertise
  • Industry publications that establish your authority
  • Review sites where customers discuss solutions
  • Social platforms where conversations happen

2. How You Structure Information

AI systems extract specific passages from your content to construct answers. They pull a paragraph here, a statistic there, and weave them together.

This changes how you need to structure information. 

When you’re explaining a concept, defining a term, or sharing data, that paragraph should ideally work on its own. AI systems often extract these substantive passages without the conversational setup around them. (We’ll cover the mechanics of how this works in the strategic framework later.)

You need clear headings to help AI identify which section answers which question.

Also, putting answers early in sections may make them easier for AI to find and extract.

Traditional SEO often rewards comprehensive coverage. GEO places more emphasis on content that’s easy to extract and reassemble. We’re still learning exactly how different AI systems prioritize structure, but clarity consistently helps.

3. What You Measure

Traditional SEO metrics like rankings, clicks, and bounce rate tell part of the story.

GEO adds new measurements, like:

  • AI visibility score: A benchmark of how often and where your brand appears in AI-generated answers
  • Share of voice: Your visibility compared to competitors in AI responses
  • Sentiment: Whether mentions are positive, neutral, or negative
  • Context or prompt: What questions or topics trigger mentions of your brand

Together, these metrics help you understand not just whether you’re visible, but how your brand is being positioned inside AI-generated responses.
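
As a rough sketch of how these numbers fall out of tracked prompt data, consider the toy computation below. The PromptResult shape and the sample values are hypothetical, not any tool’s actual data model:

```typescript
// Hypothetical tracked results: for each prompt, which brands an AI
// answer mentioned and with what sentiment. The shape is illustrative.
type Sentiment = "positive" | "neutral" | "negative";
interface PromptResult {
  prompt: string;
  mentions: { brand: string; sentiment: Sentiment }[];
}

// Citation frequency: share of answers that mention the brand at all.
function citationFrequency(results: PromptResult[], brand: string): number {
  const hits = results.filter(r => r.mentions.some(m => m.brand === brand));
  return hits.length / results.length;
}

// Share of voice: the brand's slice of all brand mentions observed.
function shareOfVoice(results: PromptResult[], brand: string): number {
  const all = results.flatMap(r => r.mentions);
  if (all.length === 0) return 0;
  return all.filter(m => m.brand === brand).length / all.length;
}

const sample: PromptResult[] = [
  { prompt: "best CRM for startups", mentions: [
    { brand: "Acme", sentiment: "positive" },
    { brand: "Rival", sentiment: "neutral" },
  ]},
  { prompt: "CRM with email automation", mentions: [
    { brand: "Rival", sentiment: "positive" },
  ]},
];

console.log(citationFrequency(sample, "Acme")); // 0.5: cited in 1 of 2 answers
console.log(shareOfVoice(sample, "Acme"));      // ~0.33: 1 of 3 total mentions
```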

You need both traditional SEO metrics and AI visibility metrics to understand your full organic search presence in 2026.


Note: You can track these metrics using Semrush’s Enterprise AIO, which monitors your brand’s visibility across AI platforms like ChatGPT, Google AI Mode, and Perplexity. 

It provides granular tracking of mentions, sentiment, share of voice, and competitive benchmarking to help you optimize your AI visibility strategy.


5 Principles for AI Visibility: A Strategic Framework

An effective GEO strategy rests on five connected principles that work together to maximize your AI visibility.

(As AI systems evolve, specific patterns may shift, but these underlying principles provide a stable foundation.)

Each one addresses how AI systems discover, evaluate, and reference your brand.

Let’s look at them in detail.

1. SEO Fundamentals Are the Foundation

SEO fundamentals still matter for GEO, but for a different reason than in traditional search.

In AI-driven discovery, these fundamentals still function as optimization levers, but they influence retrieval, interpretation, and attribution rather than rankings alone. 

They create the baseline conditions that allow AI systems to retrieve information, interpret it accurately, and attribute it to a source with confidence.

For instance, AI-generated answers are assembled from content that is accessible, readable, and attributable. 

When accessibility, readability, or clear attribution are weak, even strong content becomes harder for AI systems to surface or reference reliably.

This is why many sources cited by AI platforms share characteristics long associated with solid SEO foundations. 

The overlap exists because clarity and reliability still matter across discovery systems, even as the surfaces change.

Technical accessibility plays a role here. 

Content that cannot be consistently crawled, indexed, or rendered introduces uncertainty at the retrieval layer. 

Page performance has a similar effect. Slower or unstable experiences don’t block inclusion outright. But they reduce how dependable a source appears when answers are assembled.

JavaScript-heavy implementations highlight this dynamic. 

Many AI crawlers still struggle to consistently process client-side rendered content, which can make core information harder to extract or interpret. 

When that happens, AI systems have less certainty about using the content as a reference point.

But technical setup is only part of the equation.

AI systems also assess content quality and credibility. Information that reflects real experience, clear expertise, and identifiable authorship is easier to contextualize and trust. 

Signals associated with E-E-A-T (Experience, Expertise, Authoritativeness, and Trust) influence not just whether content is referenced, but how it is framed within an answer.

Taken together, these foundations explain why SEO still underpins GEO. Not as a ranking system, but as the infrastructure that makes AI visibility possible.


Further reading: A technical SEO blueprint for GEO: Optimize for AI-powered search


2. Entity Clarity Shapes AI Understanding

Entities help AI systems understand and categorize information on the web. This includes distinguishing your brand from similar names, identifying what category you belong to, and understanding which topics you’re credible for.

AI systems don’t just read words. They interpret structure.

Before schema ever comes into play, they look for clear signals about:

  • What your brand is
  • What category it belongs to
  • What it offers
  • What it’s authoritative for

The most reliable way to provide those signals is through well-structured information.

If those signals are unclear or inconsistent, AI systems have less confidence when deciding whether and how to reference you.

Take monday.com as an example. When AI systems crawl websites and process information, they see “monday” mentioned in many different contexts. 

Clear, consistent descriptions across the site and supporting sources help AI understand that monday.com refers to project management software. Not the day of the week.

The same principle applies to category clarity. If you sell organic dog food, AI needs to categorize your brand under pet nutrition, not general groceries or pet accessories.

When someone asks “what’s the best grain-free dog food,” AI is more likely to consider brands it can clearly place in the correct category.

On a product page, it should be unambiguous what each element represents — the product name, description, price, attributes, availability, and variants.

That clarity needs to exist in the visible page content first. 

Schema markup can then mirror that structure in a machine-readable format (typically JSON-LD). And that same structured understanding should also be reflected in downstream systems, like your product feed submitted to Google Merchant Center.

In other words, the page structure, the schema markup, and the commerce feed should all describe the same thing in the same way.

The goal isn’t to “add schema.” The goal is to make your information logically structured so machines can consistently understand it across systems.
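For illustration, here is a minimal JSON-LD sketch for a product page using standard schema.org vocabulary. The product values are placeholders and should mirror exactly what the visible page already states:

```typescript
// Minimal JSON-LD sketch using real schema.org vocabulary; values are
// placeholders and must match what the visible page already says.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Widget",
  description: "Compact widget for home offices.",
  offers: {
    "@type": "Offer",
    price: "49.00",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// Serialized into a <script type="application/ld+json"> tag at render time.
const tag = `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;
console.log(tag);
```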

This is important because we don’t know how structured data is used inside large language models. Or how exactly schema influences training, retrieval, or real-time answer generation.

But we do know this: AI systems cross-reference signals from multiple sources and formats.

Your brand description on LinkedIn should align with what appears on your site. Profiles on Crunchbase, review platforms, or industry directories should reinforce the same category, positioning, and value proposition.

When these signals are consistent across sources, AI systems can categorize and reference your brand with greater confidence. When they conflict, confidence drops, and your brand is less likely to be mentioned.

This is why entity clarity isn’t just about a single markup tactic. It comes from designing your content and presence so machines can reliably understand who you are, what you offer, and where you belong wherever your brand appears.


Further reading: How Ecommerce Brands Actually Get Discovered In AI Search



Tip: You can check if your site has missing structured data that makes entity relationships unclear — along with other issues that could potentially be hurting your AI search visibility — using Semrush’s Site Audit.


3. Content Must Be Easy to Extract and Reuse

If entity clarity determines whether AI systems consider your content at all, extractability determines which specific parts get pulled into AI-generated answers.

This principle operates at the retrieval layer.

AI systems don’t consume pages the way humans do. When generating answers, they retrieve specific passages from across the web and assemble them into a response.

Here’s how it works mechanically:

LLMs break content into chunks, convert those chunks into numerical representations (vectors), and retrieve the most relevant passages when assembling an answer.

Those retrieved chunks are then synthesized into a response — often without the surrounding context from your original page.
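
Here is a toy sketch of that flow, using word counts in place of a learned embedding model. Real systems embed text very differently, but the chunk-score-retrieve loop is the same:

```typescript
// Toy retrieval sketch: real systems use learned embedding models, not
// word counts, but the chunk -> vector -> nearest-passage flow is the same.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Chunk a page into paragraphs, embed each, retrieve the best match.
const page = [
  "Salting eggplant for 15 minutes before cooking removes bitterness.",
  "Our founder loves cooking and started the blog in 2015.",
];
const query = embed("how to remove bitterness from eggplant");
const best = page
  .map(chunk => ({ chunk, score: cosine(embed(chunk), query) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.chunk); // the self-contained cooking passage wins retrieval
```

Notice that the passage that wins retrieval is the one that carries its full meaning inside the chunk itself, which is exactly the property the next section describes.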

This has practical implications. 

Based on what we’ve observed, passages that retain meaning when read in isolation are more likely to be retrieved and used accurately. Passages that rely on conversational setup or references like “as mentioned above” or “this is why” tend to lose clarity when extracted.

Now this may not apply to every paragraph on a page. 

But paragraphs that contain definitions, explanations, comparisons, or key facts should ideally stand on their own. These are the passages AI systems are most likely to extract without the surrounding narrative.

So what makes content extractable?

  • Self-contained paragraphs: Each paragraph expresses one complete idea that makes sense on its own, without vague references to surrounding text
  • Specific facts and statistics: Concrete numbers and clear statements are easier for AI to extract than vague generalizations
  • Clear, descriptive headings: Headings signal what each section covers, helping AI understand content organization
  • Front-loaded information: The main point appears at the start of paragraphs rather than at the end
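
You could even lint drafts against that checklist. The heuristics below are rough, illustrative checks, not a definitive rulebook:

```typescript
// Rough heuristic lint for extractability, based on the checklist above.
// The patterns are illustrative; they flag likely problems, not certainties.
function extractabilityWarnings(paragraph: string): string[] {
  const warnings: string[] = [];
  if (/\b(as mentioned above|this is why|as we saw)\b/i.test(paragraph)) {
    warnings.push("relies on surrounding context");
  }
  if (/^(it|this|that|these|those)\b/i.test(paragraph.trim())) {
    warnings.push("opens with a vague pronoun reference");
  }
  if (!/\d/.test(paragraph)) {
    warnings.push("no concrete number or statistic");
  }
  return warnings;
}

console.log(extractabilityWarnings(
  "This is why many chefs use it."
)); // flags all three heuristics

console.log(extractabilityWarnings(
  "Salting eggplant for 15 minutes before cooking removes bitterness."
)); // [] -- self-contained and specific
```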

One important distinction: This principle mainly applies to retrieval-augmented systems — like Google AI Mode, Perplexity with grounding, and ChatGPT with browsing enabled. These systems retrieve content in real time.

For base model knowledge (what the LLM learned during training), content structure is less important. That knowledge comes from training, not from retrieving per-query. Building presence in training data takes time and requires consistent, authoritative publishing.


Below is an example of self-contained content that AI systems can easily extract and reference.

  • It answers a single, well-defined question: which sources AI platforms rely on for finance-related queries
  • The main takeaway is stated immediately, without setup
  • Supporting context (platforms, percentages, category) is included within the same frame
  • The insight makes sense on its own, even if quoted or summarized elsewhere

The same extractability principle shows up in everyday writing as well.

For example, compare these two ways of explaining the same cooking technique:

Hard to extract: “There are several reasons this method works. After trying it, most people find their eggplant tastes better. That’s why many chefs use it.”

Easy to extract: “Salting eggplant for 15 minutes before cooking removes bitterness and excess moisture. This technique improves the final texture.”

Both explain the same idea. But the second version states the technique, timing, benefit, and result clearly, which makes it easy for AI to extract as a standalone passage.


When content is structured this way, AI systems can reliably retrieve relevant passages and include them in answers. 

Over time, that increases the likelihood that your expertise is surfaced accurately when users ask questions related to your domain.

4. AI Visibility Extends Beyond Your Website

AI systems don’t just pull from your website when building answers. They gather information from YouTube, Reddit, review sites, industry publications, social platforms, and more.

This creates two opportunities for visibility: 

(I) Your Owned Presence

Owned presence is content you or your team create on platforms beyond your website.

  • Your YouTube channel showing product features gives AI video content to reference
  • Your company’s participation in relevant subreddit discussions shows expertise in action
  • Your executives’ LinkedIn newsletters establish thought leadership

Podcasts, webinars, conference presentations, and educational platforms provide additional long-form content AI systems can extract from.

These platforms often play an important role in AI discovery.

In fact, Reddit, LinkedIn, and YouTube were among the sources most cited by the top LLMs in October 2025.

When your brand creates valuable content on these platforms, you give AI systems more material to draw from.

But the key is creating substantive, helpful content that addresses real problems in your industry.

(II) Earned Mentions

Earned mentions are references to your brand that you don’t directly control.

  • Customer reviews on G2, Capterra, or Trustpilot describe real experiences with your product
  • Industry journalists mentioning your company in news articles provide third-party validation
  • Community discussions on Reddit or Quora where users recommend your solution show authentic sentiment

When multiple independent sources discuss your brand in relevant contexts, AI systems have clearer signals to interpret your credibility.


Further reading: 7 ways to grow brand mentions, a key metric for AI Overviews visibility



Side note: A tool like Semrush’s AI PR Toolkit makes this easier to evaluate at scale. Beyond counting earned mentions, it shows how your brand is framed across sources, including whether mentions skew positive, neutral, or negative.

This metric matters as you work to extend brand visibility beyond your website, because sentiment influences how AI systems frame your brand in answers, not just whether they mention you at all.


Why Both Matter

Owned presence and earned mentions work together.

Your owned content demonstrates expertise and provides detailed information AI can reference. Earned mentions from customers and industry sources validate your credibility.

When AI systems encounter both, they build a comprehensive understanding of what you offer.

This owned and earned content may also become part of LLM training data in the future, shaping how AI systems learn about and reference your brand long-term.

5. Visibility Is Measured Differently in AI Search

Traditional SEO metrics (like rankings, clicks, and traffic) only tell part of the story. But they had one major advantage: the attribution path was clear. 

A user clicked, landed on your site, and either converted or didn’t. You could tie that traffic directly to revenue.

AI search breaks that path. When an AI tool recommends your product to a user, they might never click through to your site. The conversion may still happen — they Google your brand name later, sign up the following week — but your analytics won’t connect it back to the AI mention that started it.

That’s the real measurement challenge. It’s not just that the metrics are different. It’s that the link between visibility and revenue becomes harder to trace.

The value here isn’t just the click. It’s being part of the answer.

This requires measuring your visibility differently.

Here are the key metrics to consider:

  • Citation frequency: This measures how often AI platforms mention your brand when answering questions
  • Share of voice: Your mention rate compared to competitors. If an AI answers 100 questions about “best CRM,” how many times do you appear vs. your rivals? This reveals your true competitive position.
  • Context tracking: Where do you appear? Understanding which specific prompts or topics trigger your brand mentions helps you identify the subjects you own versus where you’re invisible.
  • Sentiment: Are the mentions positive, neutral, or negative? A high share of voice means nothing if the AI is telling users your product is “overpriced” or “buggy.”

The challenge is that traditional analytics platforms (like GA4 or Google Search Console) cannot track these signals. They only see what happens after a click.

This creates a “measurement blind spot.” You might be the most mentioned brand in ChatGPT, but your standard dashboards would show zero activity.

Platforms like Semrush’s AI Visibility Toolkit are built to solve this specific problem. They help quantify these “invisible” GEO metrics, turning qualitative data (like sentiment and mention frequency) into trackable numbers.

Its Brand Performance report shows how visible your brand is in AI answers, how you compare to competitors, and whether mentions skew positive, neutral, or negative. 

The toolkit also highlights AI visibility insights, helping you understand how your brand is currently interpreted in AI answers and where adjustments may improve visibility.

Ultimately, a modern search strategy requires monitoring two distinct dashboards:

One for your website’s performance (rankings and traffic) in traditional search, and one for your brand’s mentions across AI search.

You need both to see the full picture.

What This Framework Doesn’t Guarantee

These principles increase your probability of appearing in AI answers. They don’t guarantee it.

The volatility in AI citations means even well-optimized brands experience fluctuation. 

Different AI platforms weigh signals differently. User context and conversation history affect what gets cited. And AI systems are evolving rapidly — what works today may shift as models update.

Think of GEO like brand building: you’re increasing your odds across many moments of potential visibility, not securing a fixed position. 

The brands that do this well show up more often, more accurately, and in better context. But there’s no “rank #1” equivalent to chase.

That realism isn’t a reason to ignore GEO. It’s a reason to approach it as an ongoing discipline. Showing up consistently, across surfaces, over time, is how you build trust with AI systems.

Frequently Asked Questions

What’s the biggest misconception about GEO right now?

The biggest misconception is that AI-generated answers are too volatile to optimize for.

While individual responses change, the underlying inputs do not. AI systems consistently rely on durable signals like authority, clarity, and trust. Brands with strong entity clarity and credible sources appear repeatedly, even as surface-level outputs fluctuate. The patterns are stable enough to act on.

Is GEO replacing SEO?

No, GEO builds on SEO fundamentals.

Traditional SEO optimizes for rankings and clicks. GEO optimizes for mentions, citations, and recommendations inside AI-generated answers.

They work together. Strong SEO creates the foundation (technical accessibility, quality content, credibility signals) that AI systems rely on when deciding which brands to reference.

How should we think about GEO in the bigger AI search shift?

The clearest way to frame it is as a hierarchy.

  • AI search is the environment
  • AI SEO is the practice
  • AI visibility is the outcome

GEO sits inside AI SEO as one way to improve visibility within generative systems. The goal is not optimizing for a single model or interface. The goal is being seen, trusted, and reused wherever people search for answers.


Further reading: How to Rank in AI Search (New Strategy & Framework)


What types of content are more likely to appear in generative AI responses?

Content that is easy for AI systems to retrieve, understand, and reuse is most likely to appear in generative AI responses.

In practice, this means clear, direct answers to specific questions, self-contained explanations, fact-based comparisons, and concise definitions that make sense without surrounding context. AI systems tend to pull individual passages, not entire pages, so structure and clarity matter more than length.

Does AI search favor large, well-known brands, or does GEO level the playing field?

Well-known brands often start with more authority, but they don’t automatically win. Smaller publishers can compete when they own a clearly defined topic, show up consistently across platforms, and are easy for AI systems to understand and trust. 

In practice, focused niche sites may outperform larger brands when their expertise is clearer, better structured, and tightly aligned with specific audience needs.

What’s the right way to think about GEO moving forward?

The right way to think about GEO is as a long-term visibility discipline, not a short-term optimization tactic.

Success comes from making your expertise clear, consistent, and reusable wherever AI systems look for answers. That requires strong alignment across content, SEO, brand, PR, product, and customer touchpoints. 

AI search does not change the goal of helping users. It raises the standard for coherence, accuracy, and trust across the entire web.

Google previews WebMCP, a new protocol for AI agent interactions

11 February 2026 at 20:50

Google today announced an early preview of WebMCP, a new protocol that defines how AI agents interact with websites.

  • “WebMCP aims to provide a standard way for exposing structured tools, ensuring AI agents can perform actions on your site with increased speed, reliability, and precision,” wrote André Cipriani Bandarra from Google.

WebMCP lets developers tell large language models exactly what each button or link on a website does. It allows websites to explicitly publish a clear “Tool Contract” that defines the actions available to agents.

It runs on a new browser API, navigator.modelContext. Through that API, the website shares a structured list of tools — such as buyTicket(destination, date). The AI can then call those functions directly, making interactions faster, more accurate, and far more reliable.

Structured interactions for the agentic web. WebMCP introduces two new APIs that let browser agents act on a user’s behalf:

  • Declarative API: Handles standard actions defined directly in HTML forms.
  • Imperative API: Supports complex, dynamic interactions that require JavaScript execution.

These APIs act as a bridge, making your website agent-ready. They enable faster, more reliable agent workflows than raw DOM manipulation.
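
Based on Google’s buyTicket example, a tool registration might look roughly like the sketch below. The method names and shapes here are assumptions about a preview API and will likely differ from the final spec:

```typescript
// Speculative sketch: the registerTool call and tool shape below are
// assumptions based on Google's buyTicket example, not the published
// spec. Treat every name here as provisional.
interface ModelContextTool {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the tool's arguments
  execute(args: Record<string, unknown>): Promise<unknown>;
}

// navigator.modelContext is the new entry point the preview introduces;
// cast through `any` because TypeScript's DOM types don't know it yet.
const modelContext = (navigator as any).modelContext as
  | { registerTool(tool: ModelContextTool): void }
  | undefined;

modelContext?.registerTool({
  name: "buyTicket",
  description: "Book a ticket for a given destination and date.",
  inputSchema: {
    type: "object",
    properties: {
      destination: { type: "string" },
      date: { type: "string", format: "date" },
    },
    required: ["destination", "date"],
  },
  async execute({ destination, date }) {
    // A real site would call its booking backend here.
    return { status: "booked", destination, date };
  },
});
```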

Use cases. Google shared use cases that show how an AI agent can handle complex tasks for your users with speed and confidence:

  • Travel: Users can get the exact flights they want. Agents can search, filter results, and complete bookings using structured data that delivers accurate results every time.
  • Customer support: Users can create detailed support tickets faster. Agents can automatically fill in the required technical details.
  • Ecommerce: Users can shop more efficiently. Agents can find products, configure options, and move through checkout with precision.

How to access the preview. Google is accepting applications for early access to the WebMCP preview.

Why we care. Agentic experiences are shaping the future of search—and possibly SEO. Dan Petrovic called it the biggest shift in technical SEO since structured data. Glenn Gabe called this a big deal. It’s worth exploring these new protocols now.

Google outlines AI-powered, agent-driven future for shopping and ads in 2026

11 February 2026 at 20:25

Google is redesigning shopping and advertising around AI-powered, agent-driven experiences, and said speed and certainty will converge for consumers and brands in 2026.

In her third annual letter, Vidhya Srinivasan, Google’s VP and GM of Ads and Commerce, outlined how Search, YouTube, and its shopping infrastructure are being rebuilt for the agentic era — where AI doesn’t just surface information but actively assists, recommends, and completes transactions.

Key trends. Google is redefining commercial intent across Search, YouTube, and AI interfaces. Ads are moving deeper into conversational experiences like AI Mode, creative production is becoming AI-native, and checkout is embedding directly into Search. Here are key takeaways from Srinivasan’s letter:

  • Creators to commerce: YouTube remains a discovery hub, with creators serving as trusted tastemakers. AI helps match brands with the right creators, turning influence into measurable business impact.
  • Search ads evolve: As conversational and visual queries rise, AI Mode reimagines ads as part of the discovery journey. New formats (e.g., sponsored retail listings, Direct Offers) aim to help users find products and services while giving brands meaningful ways to convert interest into sales.
  • Agentic commerce arrives: Google is standardizing AI-driven shopping through the Universal Commerce Protocol (UCP), enabling consumers to browse, pay, and complete purchases seamlessly in AI Mode. Early rollouts include Etsy and Wayfair, with Shopify, Target, and Walmart to follow.
  • AI-powered creative and performance: Gemini 3 powers ad tools that automate creative production and campaign optimization. Generative tools like Nano Banana and Veo 3 help advertisers create studio-quality assets in minutes, while AI Max expands reach and drives performance.

Why we care. Adapting to AI-mediated commerce is increasingly necessary to stay competitive. Buying decisions are shifting — more often happening inside AI-driven search, creator content, and agent-powered checkout flows that could reshape traffic and conversion paths. These changes may create new ways to reach high-intent shoppers, but they also signal growing platform control over discovery, measurement, and transactions, potentially affecting competition, costs, and brand visibility.

Google’s blog post. What to expect in digital advertising and commerce in 2026

How AI-driven shopping discovery changes product page optimization

11 February 2026 at 20:00

As consumers lean into AI search, the industry has focused on the technical “how” – tracking everything from Agentic Commerce Protocols (ACP) to ChatGPT’s latest shopping research tools. In doing so, it often misses the larger shift: conversational search is changing how visibility is earned.

There’s a common argument that big brands will always win in AI. I disagree. When you move beyond the “best running shoes” shorthand and look at the deep context users now provide, the playing field levels. AI is trying to match user needs to specific solutions, and it’s up to your brand to provide the details.

This article explains how conversational search changes product discovery and what ecommerce teams need to update on product detail pages (PDPs) to remain visible in AI-driven shopping experiences.

How conversational search builds on semantic search

While semantic search is critical for understanding the meaning and context of words, conversational search is the ability to maintain a back-and-forth dialogue with a user over time.

Semantic search is the foundation for conversational visibility. Think of it like a restaurant: If semantic search is the chef who knows exactly what you mean by “something light,” conversational search is the waiter who remembers that you’re ordering for dinner.

Feature | Semantic search | Conversational search
Goal | Understand intent and context | Handle a flow of questions
How it thinks | Knows “car” and “automobile” are the same thing | Knows that when you say “how much is it?”, “it” refers to the car you just mentioned
The interaction | Searching with a phrase instead of keywords | Having a chat where the computer remembers what you were asking about before
Example | Asking “What is a healthy meal?” and getting results for “nutritious recipes” | Asking “What is a healthy meal?” followed by “give me a recipe for that”

AI blends them together. It uses semantic understanding to decode your complex intent and conversational logic to keep the thread of the story moving. For brands, this means your content has to be clear enough for the “chef” to interpret and consistent enough for the “waiter” to follow.
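
Here is a toy sketch of the “waiter” half of that split: a conversation object that remembers the last entity mentioned so a follow-up “it” resolves correctly. Real dialogue state tracking is far richer; this only illustrates the mechanism:

```typescript
// Toy dialogue state: remember the last product mentioned so a
// follow-up "it" can be resolved. Real conversational systems use far
// richer state tracking; this only illustrates the mechanism.
class Conversation {
  private lastEntity: string | null = null;

  ask(utterance: string): string {
    // Naive entity spotting against a tiny known-product list.
    const known = ["car", "laptop bag", "cabinets"];
    const found = known.find(e => utterance.toLowerCase().includes(e));
    if (found) this.lastEntity = found;

    // Resolve a bare "it" against conversation memory.
    if (/\bit\b/i.test(utterance) && !found && this.lastEntity) {
      return `"it" refers to: ${this.lastEntity}`;
    }
    return `topic: ${this.lastEntity ?? "unknown"}`;
  }
}

const chat = new Conversation();
console.log(chat.ask("How much does this car cost?")); // topic: car
console.log(chat.ask("How much is it?"));              // "it" refers to: car
```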

What conversational search and AI discovery mean for ecommerce

I recently shared how my mom was using ChatGPT to remodel her kitchen. She didn’t start by searching for “the best cabinets.” Instead, she leveraged ChatGPT as her pseudo-designer and contractor, using AI to solve specific problems.

Product discovery happened naturally through constraint-based queries:

  • “Find cabinets that fit these dimensions and match this specific wood type.”
  • “Are these cabinets easy for a DIY installation?”

Her conversations built on one another, letting her work toward multiple solutions at once. Her discovery journey was layered. When ChatGPT recommended products to complete her tasks, she simply followed up with, “Where can I buy those?”

Brands and marketers need to stop optimizing for keywords and start optimizing for tasks. Identify the specific conversations where your product becomes the solution. If your data can’t answer the “Will this fit?” or “Is this easy?” questions, you won’t be part of the final recommendation.

“Recommend products” is the top task users trust AI to handle, highlighting a clear opportunity for brands, according to Tinuiti’s 2026 AI Trends Study. (Disclosure: I am the Sr. Director of AI SEO Innovation at Tinuiti.) 

For your brand to be the one recommended, your PDPs must provide the “ground truth” details these assistants need to make a confident selection.  

Dig deeper: How to make ecommerce product pages work in an AI-first world

What to do before you start changing every PDP

Step away from the keyword research tools and stop asking for “prompt volumes.” In an AI-driven world, intent is more important than volume. Before changing a single page, you need to understand the high-intent journeys your personas are actually taking.

To identify your high-intent semantic opportunities:

  • Audit your personas: Who is your buyer, and what are their non-negotiable questions? If you haven’t mapped these lately, start there.
  • Bridge the team gap: Talk to your product and sales teams. They know the specific attributes and “deal-breaker” details that actually drive conversions.
  • Listen to the market: Use sentiment analysis and social listening to find hidden use cases or brand problems. How are people actually using, or struggling with, your product in ways your brand team hasn’t considered?
  • Map constraints, not keywords: Identify the specific constraints (size, compatibility, budget) that AI agents use to filter recommendations.

How to build PDPs for AI search with decision support

Your PDP should operate like a product knowledge document and be optimized for natural language. This helps an AI system decide whether to recommend the product for a specific situation.

Name your ideal buyer and edge cases

Content should support better decision-making. Audit your PDPs to determine whether they provide enough detail on who the product is best for – and not for. Does the page explicitly name your ideal buyer, their skill level, lifestyle constraints, and deal-breakers?

AI shopping queries often include exclusions, and clearly outlining the important parts of your user search journey will help you understand where your products fit best.

Cover compatibility and product specifications

Compatibility feels synonymous with electronics (e.g., “Will my headphones connect to this computer?”). But think beyond one-to-one compatibility and expand into lifestyle compatibility:

  • Is this laptop bag waterproof enough for a 20-minute bike ride in the rain, and does it have a clip for a taillight?
  • Can I fit a Kindle and a book in this purse?
  • Will this detergent work with my HE washer?
  • Will this carry-on suitcase fit in the overhead compartment on every airline?
  • Is this “family-sized” cutting board actually small enough to fit inside a standard dishwasher?

People are searching for how products fit into their lifestyle needs. Highlight and emphasize the features that make your products compatible with their lifestyle.

Dig deeper: How to make products machine-readable for multimodal AI search


Provide vertical-specific product guidance

Breaking down your customer search journey and listening to your customers’ concerns, either through AI sentiment analysis, social listening, or product reviews, will help you understand what you need to be specific about.

  • Apparel brands should add sizing and fit guidance. Maybe you’re comparing your size 10 jeans to competitors’ sizing, or noting how fit shifts across the cuts and styles in your own line.
  • Beauty or skincare brands need ingredient combination details. Is this product compatible with other common formulas? Can I layer it over a vitamin C serum?
  • Toy brands could include important details for parents. Does your product need to be assembled, and how long will it take? Can they assemble it the night before Christmas?

If your biggest customer complaint is understanding when and how to use your products, you’re likely not making it easy enough for them to buy. Better defining your product attributes helps users and LLMs alike better understand your products.

Write for constraint matching instead of browsing

AI shopping discovery is driven by constraints instead of keywords. Shoppers aren’t asking for “the best laptop bag.” They’re asking for a bag that fits under an airplane seat, survives a rainy commute, and still looks professional in a meeting.

PDPs should be written to reflect that reality. Audit your product pages to see whether they answer common “Can I …?” and “Will this work if …?” questions in plain language. These details often live in reviews, FAQs, or support tickets, but rarely surface in core product copy where AI systems are most likely to pull from.

Here’s what transforming your content can look like:

Traditional PDP copy

  • Laptop backpack
    • Water-resistant polyester exterior.
    • Fits laptops up to 15″.
    • Multiple interior compartments.
    • Lightweight design.
    • USB charging port.

PDP copy written for constraints

  • Laptop backpack
    • Best for: Daily commuters, frequent flyers, and students who need to carry tech in unpredictable weather.
    • Not ideal for: Extended outdoor exposure or laptops larger than 15.6″.
    • Weather readiness: Water-resistant coating protects electronics during short walks or bike commutes in light rain, but is not designed for heavy downpours.
    • Travel compatibility: Fits comfortably under most airplane seats and in overhead bins on domestic flights.
    • Capacity and layout: Holds a 15-15.6″ laptop, charger, and tablet, with room for a book or light jacket – but not bulky items.
    • Lifestyle considerations: Integrated USB port supports charging on the go (power bank not included).

LLMs evaluate how well a product satisfies the specific constraints expressed in conversational queries or stored user preferences.

PDPs that clearly articulate those constraints are more likely to be selected, summarized, and recommended. This type of copy should also help your on-site customers better understand your products.

Dig deeper: Why ecommerce SEO audits fail – and what actually works in 30 days

Technical foundations still matter for ecommerce

Just because search platforms change doesn’t mean we should abandon everything we’ve learned in traditional optimization.

Technical SEO fundamentals still heavily apply in AI search:

  • Can crawlers access and index your site?
  • Are your product listing pages (PLPs) and PDPs clearly linked and structured?
  • Do pages load quickly enough for crawlers and users?
  • Is your most critical content accessible?

In conversational shopping, structured data plays a different role than it did in traditional SEO strategies: it’s about verification.

AI systems use your schema to validate facts before they risk reusing them in an answer. If the AI can’t verify your price, availability, or shipping details through a merchant feed or structured data, it won’t risk recommending you.

Variant clarity is just as important. When differences like size, color, or configuration aren’t clearly defined, AI systems may treat variants as separate products or merge them incorrectly. The result is inaccurate pricing, incompatible recommendations, or missed visibility.

Most importantly, structured data must match what’s visibly true on the page. When schema contradicts on-page content, AI systems avoid recommending uncertain information.
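
Here is a sketch of what explicit variant markup can look like, using schema.org’s ProductGroup vocabulary (hasVariant, variesBy). The identifiers and values are placeholders:

```typescript
// Sketch of variant markup using schema.org's ProductGroup vocabulary;
// identifiers and values are placeholders, not real catalog data.
const backpackVariants = {
  "@context": "https://schema.org",
  "@type": "ProductGroup",
  name: "Commuter Laptop Backpack",
  variesBy: ["https://schema.org/color"],
  hasVariant: [
    {
      "@type": "Product",
      name: "Commuter Laptop Backpack - Black",
      color: "Black",
      offers: { "@type": "Offer", price: "79.00", priceCurrency: "USD",
                availability: "https://schema.org/InStock" },
    },
    {
      "@type": "Product",
      name: "Commuter Laptop Backpack - Navy",
      color: "Navy",
      offers: { "@type": "Offer", price: "79.00", priceCurrency: "USD",
                availability: "https://schema.org/OutOfStock" },
    },
  ],
};

console.log(JSON.stringify(backpackVariants, null, 2));
```

Keeping price and availability on each variant, and matching those values to the visible page, is what prevents the merge and mispricing errors described above.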

Dig deeper: How SEO leaders can explain agentic AI to ecommerce executives

Owning the digital shelf in 2026

Success on the digital shelf has moved beyond high-volume keywords. In this new era, your visibility depends on how well you satisfy the complex constraints users can provide in a single search. AI models are scanning your pages to see if you meet specific, nuanced requirements, like “gluten-free,” “easy to install,” or “fits a 30-inch window.”

The shift to conversational discovery means your product data must be ready to sustain a dialogue. The goal is simple: provide the density of information necessary for an AI to confidently transact on a user’s behalf. Those who build for these multi-layered journeys will own the future of discovery.

OpenAI details how ads will work in ChatGPT

11 February 2026 at 19:47

In a conversation on the OpenAI podcast, host Andrew Mayne spoke with OpenAI executive Assad Awan, who detailed how ads will roll out in ChatGPT, who will see them, and how the company plans to protect user trust.

Who will see ads:

  • Ads will appear for Free and Go tier users
  • Plus, Pro, and Enterprise subscribers won’t see ads
  • Enterprise workspaces will remain fully ad-free

The guardrails: Awan emphasized that OpenAI is structuring ads around strict trust principles:

  • Separation: Ads are visually and technically separate from model answers
  • Privacy: Conversations aren’t shared with advertisers
  • Sensitive topics: Health, politics and other sensitive chats won’t show ads
  • Controls: Users can adjust or turn off personalization — or upgrade to remove ads

According to Awan, the model itself doesn’t know when ads are present and can’t reference them unless a user explicitly asks about one.

Zoom in. OpenAI internally prioritizes user trust over user value, advertiser value and revenue, Awan said — a framework meant to prevent ads from shaping how the model responds.

For small businesses. Awan described a future where AI acts as an advertising agent, helping small businesses run campaigns by describing goals in plain language rather than managing complex dashboards.

Why we care. ChatGPT ads could open a new, high-intent channel where businesses reach users during active conversations and decision-making moments. The platform’s focus on relevance, AI-driven matching and agent-style campaign tools could lower the barrier to entry for small and midsize advertisers while improving performance for larger brands.

If OpenAI succeeds in building a trusted ad environment, it may reshape how advertisers think about discovery and customer engagement in AI-driven interfaces.

What’s next. Early ad tests will be conservative, focusing on usefulness and relevance over volume as OpenAI refines formats and placement.

The big picture. Through advertising, OpenAI is aiming to scale ChatGPT access while maintaining a trust-first design — a balance the company says is central to its long-term strategy.

Dig deeper. Watch the full interview with Assad Awan

Google Ads shows recommended experiments

11 February 2026 at 19:34

Google Ads is rolling out recommended experiments on the Experiments page, surfacing test ideas based on an account’s setup and performance data.

How it works: The platform suggests experiment opportunities — such as testing bidding strategies, creative variations, or new campaign features — and presents them directly inside the Experiments dashboard.

  • Each recommendation includes a preconfigured experiment setup
  • Advertisers can launch immediately or customize settings
  • Suggestions appear alongside the standard Create Experiment workflow

Why we care. By removing the need to build tests from scratch, Google is lowering the barrier to experimentation. Advertisers can act on optimization ideas faster and more consistently. However, advertisers should still ensure that the right tests/configurations are being launched to avoid wasted time and budget.

Zoom in. Examples include suggestions like enabling final URL expansion to improve campaign performance, displayed through in-dashboard popups tied to the Experiments interface.

The big picture. Google is increasingly embedding automated guidance into Ads workflows, nudging advertisers toward continuous testing and data-driven optimization.

First seen. This update was spotted by PPC News Feed owner Hana Kobzová.

Google Ads simplifies product campaign tracking

11 February 2026 at 19:16
How to write high-performing Google Ads copy with generative AI

Google Ads rolled out a new feature that shows advertisers which campaigns their products are eligible for, directly in the Products section.

How it works. A new dashboard in the Products section includes:

  • A table showing product details, status, issues, and priority flags
  • A line graph summarizing campaign status trends
  • Filters to segment eligibility views
  • A pop-up panel that lists “Eligible” and “Not eligible” campaigns per product

Why we care. Advertisers can now quickly identify products that are missing from key campaigns or unintentionally overlapping across Shopping and Performance Max. The added visibility reduces the need to jump between campaign views to diagnose eligibility gaps.

The big picture. The changes help advertisers quickly identify products that aren’t running in expected campaigns, spot campaign overlap before it becomes a budget problem, and save time troubleshooting product-level issues.

Between the lines. This is Google’s latest move to give advertisers more granular control over Shopping campaigns, where product-level optimization can make or break profitability.

When. Available now in Google Ads.

First seen. This update was spotted by PPC News Feed owner Hana Kobzová.

What 4 AI search experiments reveal about attribution and buying decisions

11 February 2026 at 19:00
What 4 AI search experiments reveal about attribution and buying decisions

AI search influence didn’t show up in our SEO reports or AI prompt tracking tools. It showed up in sales calls.

“Found you via Grok, actually,” a new lead said.

That comment stopped us cold. We hadn’t tried to rank in Grok. We weren’t tracking it. Yet it was influencing how buyers discovered and evaluated us.

That disconnect kept appearing in client conversations, too. Everyone was curious about AI search, but no one trusted the data. 

Teams wanted visibility in ChatGPT and other AI tools, then asked the same question: “Why invest in a channel that doesn’t show up cleanly in attribution?”

To answer that, we ran controlled experiments using assets we could fully control – an agency website, personal sites, an ecommerce brand, and purpose-built test domains.

The goal wasn’t to win AI rankings. It was to understand what still matters once AI enters the decision process:

  • Does AI search change what people buy, or just where brands appear?
  • Can something influence revenue without ever appearing in analytics?
  • Does AI recommendation affect performance across other channels?

Why we ran the experiments

Most AI search conversations fixate on surface signals like brand mentions, citations, or screenshots from AI prompt tracking tools.

Search has always had one job: help people make a decision.

We wanted to know if AI search performed the same job and actually changed commercial outcomes.

AI systems now operate at the stage where buyers compare options, shortlist providers, and reduce risk.

If AI mattered, it had to show up at the moment of decision.

On measurement limits: 

  • We didn’t rely on API data because API responses often differ from what real users see. Instead, we observed live interfaces across ChatGPT, Perplexity, Gemini, and Google AI Overviews. 
  • We used prompt tracking to spot patterns, not to declare absolute wins.

Experiment 1: Self-promotional ‘best of’ lists on your own website

A simple tactic became popular over the past year:

  • Create a “best X” list on your site.
  • Put yourself at the top.
  • Let AI systems pick up the list.

I’ve seen agencies do this locally and felt conflicted about it.

It wasn’t spam. But it relied on a blind spot – LLMs struggle to separate independent rankings from self-written ones.

Around the same time, Ahrefs published a large study that helped explain why this works. Glen Allsopp analyzed ChatGPT responses across hundreds of “best X”-style prompts and found that “best” list posts were the most commonly cited page type.

Two things from the study stood out:

  • Format: “Best X” list posts were the most cited page type, including cases where brands ranked themselves first
  • Freshness: Most cited lists had been updated recently

I could have tested these observations on StudioHawk. Instead, I did it on my personal brand website to manage the risk. 

I published a list of the “Best SEO agencies in Sydney” and included my own website among the entries to test whether AI would “take the bait,” so to speak.

Within two weeks, LawrenceHitches.com appeared across AI tools for “best SEO agency Sydney” style searches:

Best SEO agencies - Sydney

The speed was surprising – traditional SEO rarely moves that fast.

If visibility appears this easily, then visibility alone can’t mean much, so I tested it again.

Experiment 2: Self-promotion of a fake business

In the first experiment, I could have been piggybacking off the already established StudioHawk brand, so I decided to run the same self-promotion test on a fake website.

We used a basic landscaping site built only for SEO and AI testing and published the same type of page, a “best X” list.

This time, the topic was “best landscapers in Melbourne”:

Best landscapers in Melbourne

Within two weeks, the list appeared in AI responses again. The result repeated almost exactly.

If a brand-new test site can surface this fast, then “appeared in AI” doesn’t mean much on its own.

Visibility vs. trust

These two experiments showed one thing clearly: LLMs are still easy to influence at the surface level.

I ran these tests back in August 2025, but the same pattern still appears today.

A “best SEO agency Sydney” search run in January 2026 shows the same list-driven results:

Top SEO agencies Sydney

This creates a real conflict for brands.

On one side, the data says yes – the Ahrefs research shows “Best X” pages attract citations. Large brands like Shopify, Slack, and HubSpot publish self-ranked lists without obvious damage to rankings or AI visibility.

On the other side is buyer trust.

As Wil Reynolds put it, listing yourself first on your own site doesn’t build confidence with buyers. That’s the tension.

When bullish founders ask for the secret sauce to appear in ChatGPT, I’m blunt. List-based “best of X” pages that rank the author first have been a fast way to surface in some AI results.

That doesn’t work everywhere, and it’s unlikely to hold up long term.

Dig deeper: Google may be cracking down on self-promotional ‘best of’ listicles

Why prompt tracking can’t be a success metric

A lot of money is flowing into AI prompt tracking tools. Clients ask for them constantly. We use them too, but with a clear warning.

I wouldn’t make major decisions based on screenshots or Reddit threads about where a brand appears in ChatGPT.

Brand overlap between API outputs and real user sessions was as low as 24%, according to recent research from Surfer SEO comparing tracking APIs with scraped user experiences.

That means three times out of four, what the API told you was happening wasn’t what the user was actually seeing.

If a brand can appear in a screenshot but disappear in a real user session, then appearance alone isn’t a metric.

We stopped asking if we showed up.

Instead, we started asking, “Did this change how buyers behaved?”

  • Did leads reference AI tools without prompting?
  • Did sales calls skip education?
  • Did the speed of buying change?
  • Did price resistance soften?

These signals were harder to collect.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Experiment 3: Kadi and the limits of digital PR alone

Kadi, an ecommerce brand we invested in that sells luggage, provided insight into our questions about whether AI results were affecting buyer behavior.

Running tests on Kadi has been an eye-opening experience for two reasons: 

  • It’s the difference between running an agency and running ecommerce.
  • It forced us to become our own client.

To move fast, we led with digital PR.

Kadi’s SEO foundation was solid but not perfect. We wanted to see how far off-site mentions could push SEO and AI visibility without heavy technical work or a polished site structure.

We conducted a large number of creative data campaigns and product placements, including:

  • Travel data studies: “Over-touristed destinations,” “Hidden fees,” “Best time to fly,” and “Happy Hour at 30,000 ft.”
  • Advisory pieces: “Airport cybersecurity” and “duty-free shopping” guides
  • Product and feature focus: “Kadi kids carry-on adventure,” “cloud check-in features,” and inclusions in “best suitcase round-ups.”
List of creative data campaigns and product placements

It worked:

  • Coverage landed.
  • Authority grew without the need for “traditional SEO.”
  • We saw temporary keyword spikes and traffic boosts.
Kadi - Digital PR efforts

But there was a catch: Digital PR alone wasn’t enough to close the gap with competitors. It created quick traction in search results, but it didn’t resolve the underlying issues.

After launch, SEO foundation work became the priority.

Then, Black Friday made the reality obvious. A customer found Kadi through ChatGPT on a “kids carry-on” query.

We watched it happen on the day of the query and traced the pathway:

  • They didn’t buy immediately.
  • They checked the shipping policy.
  • They browsed the range.
  • They added three additional products.
  • They debated colour (olive over pink).
  • Attribution later showed Instagram as the source.

That order was the largest of the Black Friday period.

On paper, AI did nothing. In reality, it helped shape the decision.

Digital PR can get you visibility spikes, but it doesn’t address the whole picture. 

While AI traffic does convert, the attribution is inconsistent.

Experiment 4: StudioHawk 

Across 2024 and 2025, StudioHawk underwent a full website rebrand and migration from WordPress to HubSpot.

Our own site sat at the bottom of the priority list for years. It was always the project we would get to later. 

Finally, we paused other priorities and rebuilt the entire site.

The work started in 2023, before terms like “GEO” existed. We were focused only on rebuilding service pages, social proof, and user experience end to end.

After launch, rankings improved and continue to grow.

Studiohawk post-rebrand performance

In 2025, SEO became the agency’s strongest channel by efficiency. It drove 65% of inbound leads and close to 60% of new revenue.

Agency's strongest channel by efficiency

Between July and December 2025, AI search leads began to appear more often:

AI search leads appeared

Initially, these were “Oh, cool, we got a lead from AI” moments around the office.

Sales calls started skipping early education. New leads arrived already aligned on fit and expectations.

Over time, we saw that:

  • SEO inbound leads: Averaged 29 days to close.
  • AI search leads: Closed in roughly 18 days.

That 11-day gap mattered.

It meant less time educating, fewer scope objections, lower price sensitivity, and higher confidence earlier in the process.

Within the first year, AI-influenced conversations contributed over $100,000 in closed revenue from 20+ leads, including deals with direct attribution from tools like ChatGPT, Perplexity, and Grok.

The blind spot remains attribution paths such as Instagram, direct, or organic, where AI influenced the decision but didn’t appear in reporting (as seen in the Kadi example).

Where direct AI attribution existed, buyers were more prepared. That preparedness shortened sales cycles and lifted revenue.

AI compresses consideration

We started by asking where people would search next.

Our key finding? AI search doesn’t replace discovery. It compresses the consideration phase.

AI compresses consideration

Consideration is that messy middle where buyers reduce risk, shortlist vendors, compare tradeoffs, and ask, “Who should I trust?”

AI systems answer these questions before a buyer ever clicks a link.

It means your website no longer carries the full load – AI summaries and third-party mentions do the pre-selling for you.

This is the shift we now describe as the new consideration era.

As the map illustrates, we’ve moved from a straight funnel to a complex, AI-influenced pathway where consensus is key:

The new consideration era

Because this happens off-site, last-click attribution is broken. 

A buyer might use ChatGPT to create a shortlist but convert later via direct search.

Where traditional SEO still fits

Strong SEO metrics were a constant across all our experiments, but we’ve stopped viewing them as the primary driver of value:

  • Keyword rankings confirm search engines understand your entity.
  • However, those high rankings don’t guarantee effective pre-selling.

Traditional SEO became a supporting signal – proof that the foundation is sound, rather than the end goal.

What this means for brands

After running a variety of AI search experiments, here’s what I think brands should focus on.

1. Measure where AI influence actually lands

Stop obsessing over prompt appearances (e.g., citations, mentions). These are shiny objects, but they fluctuate too easily. 

Instead, measure:

  • Sales velocity (Did deals close faster?)
  • Quality of the lead (Did they arrive needing less education?)
  • Value per lead (Did price friction ease?)

2. Make clarity more important than creativity

AI hates vagueness. Build pages that make it clear what you do and who it’s for.

3. Change the content to help people decide what to buy

Focus on content that answers comparison, risk, and pricing questions. This makes a bigger difference than general category explanations.

4. Make entity consistency a crucial factor

Inconsistency creates doubt. Conversely, consistency builds confidence.

Check to see that your website, reviews, and digital PR all talk about your brand in the same way.

AI search compresses consideration, not discovery

In the end, the results were consistent across all experiments. What moved our sales pipeline came down to the same fundamentals:

  • Clear intent.
  • Tight positioning.
  • Consistent signals of authority.

AI search isn’t replacing basic SEO. Instead, it exposes weak positioning more quickly than traditional search ever did.

What does that mean? 

Simply put, AI speeds up decisions that were already forming.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

How to reduce low-quality leads from Performance Max campaigns

11 February 2026 at 18:00
How to reduce low-quality leads from Performance Max campaigns

Left to its own devices, Performance Max is absolutely great at doing two things for lead gen campaigns:

  • Driving volume.
  • Finding the lowest-quality leads it possibly can.

It’s not inherently surprising that Google is doing what’s best for Google – that is, lining its own pockets – by heavily optimizing toward the cheapest, path-of-least-resistance conversion events.

From experience with campaigns we inherit from new clients, this performance often catches brands off guard – especially those who take Google sales reps’ “helpful advice” at face value.

It can take time for those brands to look past PMax’s shiny, low CPAs and realize the truth: those leads do little to nothing for real pipeline or revenue.

Performance Max, when given the proper guardrails, can be a good source of incremental, quality leads – the trick is in building those guardrails.

This article covers lead quality tactics that work and how to execute them, tactics that don’t work, and important differences between Performance Max campaigns in Google and Bing. 

How to improve lead quality in PMax campaigns

These are the specific levers that consistently influence lead quality in Performance Max.

  • Use conversion goals focused on metrics that indicate a higher quality lead than just form fills.
    • Depending on your data density, this could mean closed-won leads, opportunities, or (if you need to go up the funnel to get enough volume) sales-qualified leads. 
    • It’s important to note that the effectiveness of this tactic depends on good offline conversion tracking implementation and a clean CRM instance, so don’t turn on PMax lead gen campaigns until you’re confident in your HubSpot or Salesforce integrity.
  • Use high-value lists for audience signals. This can be based on a certain activity, like “booked a meeting,” instead of simply including all converters.
  • Keep the focus on the right audiences. Exclude irrelevant ones and upload Customer Match lists to help Google’s algorithm find similar users.
  • Be smart with your campaign settings.
    • Use brand exclusions to ensure you’re not letting PMax cannibalize your brand traffic. 
    • Restrict your location targeting to high-performing geos.
    • Set strategic scheduling, such as excluding early-morning hours if those conversions tend to be lower quality.
    • Evaluate search themes and placements, and be aggressive about negative keywords and placement exclusions.
    • Use sitelinks to steer traffic to pages with full, detailed forms.
  • Refine the forms themselves (see the validation sketch after this list).
    • Implement reCAPTCHA or honeypot fields in forms that keep bots from “converting.”
    • Use field validation:
      • Block disposable domains.
      • Block freemails.
    • Add freeform or disqualifying questions.
      • “How did you hear about us?”
      • “Do you have a budget for [solution]?”
      • “How many employees are in your organization?”
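
Here’s a minimal sketch of what that server-side screening can look like. The deny lists, field names, and disqualifying logic are all illustrative; build yours from your own CRM’s junk-lead history:

```python
import re

# Hypothetical deny lists -- extend these from your own CRM's junk-lead history.
FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def validate_lead(form: dict) -> list[str]:
    """Return a list of disqualification reasons; empty means the lead passes."""
    reasons = []

    # Honeypot: a field hidden via CSS that humans never fill in.
    if form.get("company_website_2"):
        reasons.append("honeypot field filled (likely bot)")

    email = form.get("email", "").strip().lower()
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        reasons.append("malformed email")
    else:
        domain = email.split("@")[1]
        if domain in FREEMAIL_DOMAINS:
            reasons.append("freemail domain")
        if domain in DISPOSABLE_DOMAINS:
            reasons.append("disposable domain")

    # Disqualifying question: route "no budget" answers out of the sales queue.
    if form.get("has_budget") == "no":
        reasons.append("no budget for solution")

    return reasons

print(validate_lead({"email": "lead@mailinator.com", "has_budget": "no"}))
# ['disposable domain', 'no budget for solution']
```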

Dig deeper: Top Performance Max optimization tips for 2026

Tactics that won’t affect lead quality

On the other hand, some of the usual campaign optimization strategies won’t do much to move the needle on PMax lead quality. If that’s your sole focus, you can de-prioritize:

  • Switching bid strategies (e.g., switching from Max Conversions to tCPA helps a little but doesn’t fix everything).
  • Adding more assets.
  • Adding more budget.
  • Asking Google support (something I’d just stay away from in general these days).

Important (and subtle) differences to know between Google and Bing PMax campaigns

Both Google and Bing have Performance Max campaigns, but there are differences in their offerings.

Google’s Performance Max network spans Search, Display, YouTube, Discover, and Gmail. It’s an absolutely huge amount of inventory – especially Display and YouTube, which can be huge spam drivers if left unchecked.

Microsoft has far less video and display inventory. Their PMax campaigns primarily include Bing search, syndicated search, and the Microsoft audience network (which spans display, Outlook, and MSN). 

When comparing performance between the two, we haven’t seen any notable differences, but it’s worth monitoring updates to each platform’s reporting and inventory going forward.

Dig deeper: Google and Microsoft: How their Performance Max approaches align and diverge

Performance Max isn’t broken, but it needs control

If you’re considering running PMax for lead gen, you should approach it with a healthy dose of skepticism. 

While PMax has been effective at driving scalable revenue for ecommerce, those campaigns need considerable guardrails to maintain lead quality.

For instance, keeping a high-end shoe retailer from racking up tons of conversions on things like replacement laces and shoe polish requires sufficient PMax guardrails.

As Google pushes more automation and AI into campaigns, keep testing and experimenting so you understand the tools available to analyze and shape PMax campaigns.

Google has shipped some helpful updates lately, including channel-level reporting, more options for exclusions, and campaign-level negative keywords.

There’s lead volume out there that can deliver a healthy ROAS if you’re willing and able to wrestle the algorithm into submission.

If you’ve tested Performance Max campaigns for lead gen but paused them once it was clear they weren’t driving revenue, do a quick post-mortem on your past efforts. You might find there’s room to whip Google into shape to do better this time around. 

Take note of the tactics you haven’t yet implemented and prioritize putting them in place before you waste another dollar of your 2026 budget on poor-quality leads that just junk up your CRM.

PPC mistakes that humble even experienced marketers

11 February 2026 at 17:00
Marketing mistakes

Every seasoned PPC pro carries a few scars — the kind you earn when a campaign launches too fast, an automation quietly runs wild, or a “small” setting you were sure you checked comes back to bite you.

At SMX Next, we had a candid, refreshingly honest conversation about the mistakes that still trip us up, no matter how long we’ve been in the game. I was joined by Greg Kohler, director of digital marketing at ServiceMaster Brands, and Susan Yen, PPC team lead at SearchLab Digital.

Read on to see the missteps that can humble even the most experienced search marketers.

Never launch campaigns on a Friday

This might be the most notorious mistake in PPC — and yet it keeps happening. Yen shared that campaigns often go live on Fridays, driven by client pressure and the excitement to move fast.

The risk is obvious. If something breaks over the weekend, you either won’t see it or you’ll spend Saturday and Sunday glued to your screen fixing it. One small slip — like setting a $100 daily budget instead of $10 — can burn through spend before anyone notices.

Kohler stressed the value of fresh eyes. Even if you build campaigns on Friday, wait until Monday to review and launch. Experience can breed overconfidence. You start to believe you won’t make mistakes — until a Friday launch proves otherwise.

The lesson: Don’t launch before holidays, before time off, or on Fridays. If clients push back, be the “annoying paid person” who says no. You’ll protect your sanity — and the campaign’s performance.

Location targeting disasters

Kohler shared a mishap where location targeting didn’t carry over correctly while copying campaigns in bulk through Google Ads Editor. By Saturday morning, those campaigns had already racked up 10,000 impressions — because the ads were running in Europe while the intended U.S. audience slept.

The lesson: Some settings, especially location targeting, are safer to configure directly in the Google Ads interface. There, you can explicitly set “United States only,” which reduces the risk of accidental international targeting.

The search term report trap

Yen made it clear: reviewing search term reports isn’t optional. It matters for every campaign type—standard search, Performance Max, and AI-driven campaigns included. Skip this step, and it looks like you’re chasing clicks instead of qualified traffic.

The real damage shows up months later. Explaining to a client where their budget went—when you could’ve caught irrelevant queries early—leads to uncomfortable conversations. Yen recommends reviewing search terms at least once a month. The time required is small compared to the spend it can save.

The lesson: Regular reviews also help you decide what to add as keywords and what to block as negatives. The goal is balance. Too many new keywords create cluttered accounts. Too many negatives often signal deeper issues with match types.

Google Ads Editor vs. interface: A constant battle

The conversation surfaced a familiar frustration: Google Ads Editor and the main interface don’t always play well together. Features roll out to the interface first, then slowly make their way to Editor, which creates gaps and surprises.

Yen explained that her team builds campaigns in Excel first, including character counts for ad copy, before uploading everything into Editor. Even so, they avoid setting most campaign configurations there. Instead, they rely on the interface to visually confirm that every setting is correct.

Kohler added that Editor shines for franchise accounts with dozens — or hundreds — of near-identical campaigns. It’s especially useful for spotting inconsistent settings at scale.

The lesson: For precision work like location targeting or building responsive display ads, the interface offers better control and clearer visibility.

The automatically created assets problem

Kohler called out automatically created assets as a major pain point. These settings default to “on,” and turning them off means clicking through multiple layers — assets, additional assets, then selecting a reason for disabling each one.

The frustration gets worse when Google introduces new automated asset types, like dynamic business names and logos, and automatically applies them to every existing campaign by default. For Kohler’s team, which manages 500 accounts per brand, that meant reopening every account just to turn off the new features.

The lesson: Set recurring calendar reminders to review these settings every few months. Google isn’t slowing down on automation, and most of it requires opting out.

Importing campaigns from Google to Microsoft Ads

Yen warned about the risks of importing Google campaigns into Microsoft Ads without a thorough review. The import tool feels convenient, but it often introduces real problems:

  • Budgets that make sense for Google’s volume can be far too high for Microsoft.
  • Automated bidding strategies don’t always translate correctly.
  • Imports default to recurring schedules instead of one-time transfers.
  • Smaller audience sizes demand different budget assumptions.

Kohler added that Microsoft Ads’ forced inclusion in the audience network makes things worse. Unlike Google, Microsoft doesn’t offer a simple opt-out from display. Advertisers must manually exclude placements as they surface, or work directly with Microsoft support for brands with legitimate placement concerns.

The lesson: Import once to get a starting point, then stop. Treat Microsoft Ads as its own platform, with its own strategy, budgets, and ongoing optimization.

The app placement nightmare

Audience member Jason Lucas shared a painful lesson about forgetting to turn off app audiences for B2B display campaigns. The result was a flood of spend on “Candy Crush” views — completely irrelevant for business marketing.

Yen confirmed this is a common problem, made worse by how well Google hides the settings. To exclude all apps in the interface, advertisers must manually enter mobile app category code 69500 in the app categories section. In Editor, it’s easier — you can exclude all apps in one move.

Kohler added another familiar mistake: forgetting to exclude kids’ YouTube channels. His brands have accidentally spent so much on the Ryan’s World YouTube channel that they joke about helping fund the kid’s college tuition.

The lesson: Build a blanket exclusion list that covers apps, kids’ content, and inappropriate placements, then apply it to every campaign — no exceptions.

Content exclusions and placement control

Beyond app exclusions, the group stressed the need for comprehensive content exclusions across every campaign. Their advice is to apply these exclusions at launch, then review placement reports a few weeks later to catch anything that slips through.

The lesson: Consistency. Even when exclusions are in place, Google doesn’t always honor them. That makes regular placement monitoring essential. Automation can ignore manual rules, so verification is still the only real safeguard.

Call tracking quality issues

When the conversation turned to call tracking, Yen stressed the need for consistent client communication. Many businesses lack a CRM or close alignment with their sales teams, making it hard to evaluate call quality.

The lesson: Hold monthly check-ins that focus specifically on call quality, Yen said. If calls aren’t converting, the problem may be what happens after the phone rings, not marketing.

Kohler added a technical tip for CallRail users. Separate first-time callers from repeat callers in your conversion setup. Send both into Google Ads, but mark return calls as secondary conversions. That way, automated bidding doesn’t optimize for repeat callers the same way it does for new prospects.

The promo date problem

Litner flagged ongoing frustration with scheduled headline assets appearing outside their intended dates, especially for time-sensitive promotions. Although the issue now seems resolved, he still double-checks at both the start and end of each promotional period.

Kohler reported similar problems with automated rules. Scheduled rules sometimes don’t run at all or trigger a day early, which can pause campaigns too soon or activate them late.

The lesson: If you schedule a launch for a specific day, verify it manually that day. Don’t rely on automation alone.

AI Max settings and control

The conversation also touched on Google’s AI Max campaigns. Chad pointed out that all AI Max settings default to “on,” with no bulk way to disable them. The only option is digging into individual campaigns and ad groups.

Kohler suggested checking Google Ads Editor for workarounds. In some cases, Editor makes it easier to control settings like landing page expansion across multiple ad groups at once.

The lesson: While AI Max and Performance Max have improved, Yen noted they still demand close monitoring and manual exclusions to avoid wasted spend.

Account-level settings that haunt you

Yen called out an easy-to-miss issue: account-level auto-apply settings that don’t play nicely with AI Max and Performance Max campaigns. These controls live in three different places in the interface, which makes them easy to overlook unless you’re checking deliberately.

The lesson: Build a standard checklist of account-level settings and run through it whenever you touch a new account or launch automated campaign types.

Final wisdom

Several themes kept surfacing throughout the discussion:

  • Trust issues with ad platforms are justified, so verify everything.
  • Fresh eyes catch mistakes that familiarity glosses over.
  • Clear client communication prevents misplaced blame when performance slips.
  • Manual checks still matter, even as automation expands.
  • Well-maintained exclusion lists prevent repeat problems.
  • Google Ads Editor and the interface serve different roles, so use each for what it does best.

The bigger message: Mistakes happen to everyone, no matter how experienced you are. The real difference between novices and experts isn’t avoiding errors — it’s catching them fast, learning from them, and building systems so they don’t happen again.

As Kohler put it, these platforms will eventually humble everyone. The key is staying alert, questioning automation, and never launching campaigns on Fridays.

Watch: PPC mistakes I’ve made

From Friday launches to sloppy imports, PPC veterans share hard-earned lessons on automation traps, Google Ads Editor quirks, and more.
Before yesterday – Search Engine Land

Google pushes AI Max tool with in-app ads

10 February 2026 at 21:44
Google vs. AI systems visitors

Google is now promoting its own AI features inside Google Ads — a rare move that inserts marketing directly into advertisers’ workflow.

What’s happening. Users are seeing promotional messages for AI Max for Search campaigns when they open campaign settings panels.

  • The notifications appear during routine account audits and updates.
  • It essentially serves as an internal advertisement for Google’s own tooling.

Why we care. The in-platform placement signals Google is pushing to accelerate AI adoption among advertisers, moving from optional rollouts to active promotion. While Google often introduces AI-driven features, promoting them directly within existing workflows marks a more aggressive adoption strategy.

What to watch. Whether this promotional approach expands to other Google Ads features — and how advertisers respond to marketing within their management interface.

First seen. Julie Bacchini, president and founder of Neptune Moon, spotted the notification and shared it on LinkedIn. She wrote: “Nothing like Google Ads essentially running an ad for AI Max in the settings area of a campaign.”

Bing Webmaster Tools officially adds AI Performance report

10 February 2026 at 21:34

Microsoft today launched AI Performance in Bing Webmaster Tools in beta. AI Performance lets you see where, and how often, your content is cited in AI-generated answers across Microsoft Copilot, Bing’s AI summaries, and select partner integrations, the company said.

  • AI Performance in Bing Webmaster Tools shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time.
  • Search Engine Land first reported on Jan. 27 that Microsoft was testing the AI Performance report.

What’s new. AI Performance is a new, dedicated dashboard inside Bing Webmaster Tools. It tracks citation visibility across supported AI surfaces. Instead of measuring clicks or rankings, it shows whether your content is used to ground AI-generated answers.

  • Microsoft framed the launch as an early step toward Generative Engine Optimization (GEO) tooling, designed to help publishers understand how their content shows up in AI-driven discovery.

What it looks like. Microsoft shared this image of AI Performance in Bing Webmaster Tools:

What the dashboard shows. The AI Performance dashboard introduces metrics focused specifically on AI citations:

  • Total citations: How many times a site is cited as a source in AI-generated answers during a selected period.
  • Average cited pages: The daily average number of unique URLs from a site referenced across AI experiences.
  • Grounding queries: Sample query phrases AI systems used to retrieve and cite publisher content.
  • Page-level citation activity: Citation counts by URL, highlighting which pages are referenced most often.
  • Visibility trends over time: A timeline view showing how citation activity rises or falls across AI experiences.

These metrics only reflect citation frequency. They don’t indicate ranking, prominence, or how a page contributed to a specific AI answer.

Why we care. It’s good to know where and how your content gets cited, but Bing Webmaster Tools still won’t reveal how those citations translate into clicks, traffic, or any real business outcome. Without click data, publishers still can’t tell if AI visibility delivers value.

How to use it. Microsoft said publishers can use the data to:

  • Confirm which pages are already cited in AI answers.
  • Identify topics that consistently appear across AI-generated responses.
  • Improve clarity, structure, and completeness on indexed pages that are cited less often.

The guidance mirrors familiar best practices: clear headings, evidence-backed claims, current information, and consistent entity representation across formats.

What’s next. Microsoft said it plans to “improve inclusion, attribution, and visibility across both search results and AI experiences,” and continue to “evolve these capabilities.”

Microsoft’s announcement. Introducing AI Performance in Bing Webmaster Tools Public Preview 

How to make automation work for lead gen PPC

10 February 2026 at 21:00

B2B advertising faces a distinct challenge: most automation tools weren’t built for lead generation.

Ecommerce campaigns benefit from hundreds of conversions that fuel machine learning. B2B marketers don’t have that luxury. They deal with lower conversion volume, longer sales cycles, and no clear cart value to guide optimization.

The good news? Automation can still work.

Melissa Mackey, Head of Paid Search at Compound Growth Marketing, says the right strategy and signals can turn automation into a powerful driver of B2B leads. Below is a summary of the key insights and recommendations she shared at SMX Next.

The fundamental challenge: Why automation struggles with lead gen

Automation systems are built for ecommerce success, which creates three core obstacles for B2B marketers:

  • Customer journey length: Automation performs best with short journeys. A user visits, buys, and checks out within minutes. B2B journeys can last 18 to 24 months. Offline conversions only look back 90 days, leaving a large gap between early engagement and closed revenue.
  • Conversion volume requirements: Google’s automation works best with about 30 leads per campaign per month. Google says it can function with less, but performance is often inconsistent below that level. Ecommerce campaigns easily hit hundreds of monthly conversions. B2B lead gen rarely does.
  • The cart value problem: In ecommerce, value is instant and obvious. A $10 purchase tells the system something very different than a $100 purchase. Lead generation has no cart. True value often isn’t clear until prospects move through multiple funnel stages — sometimes months later.

The solution: Sending the right signals

Despite these challenges, proven strategies can make automation work for B2B lead generation.

Offline conversions: Your number one priority

Connecting your CRM to Google Ads or Microsoft Ads is essential for making automation work in lead generation. This isn’t optional. It’s the foundation. If you haven’t done this yet, stop and fix it first.

In Google Ads’ Data Manager, you’ll find hundreds of CRM integration options. The most common B2B setups include the following (a data-formatting sketch follows this list):

  • HubSpot and Salesforce: Both offer native, seamless integrations with Google Ads. Setup is simple. Once connected, customer stages and CRM data flow directly into the platform.
  • Other CRMs: If you don’t use HubSpot or Salesforce, you can build a custom data table with only the fields you want to share. Use connectors like Snowflake to send that data to Google Ads while protecting user privacy and still supplying strong automation signals.
  • Third-party integrations: If your CRM doesn’t integrate directly, tools like Zapier can connect almost anything to Google Ads. There’s a cost, but the performance gains typically pay for it many times over.
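
However the data travels, the payload boils down to conversion events keyed back to the original click. Here’s a minimal, illustrative sketch that formats a hypothetical CRM export into the CSV layout used by Google’s click-conversion import template – verify the column names against the current template in your account before uploading anything:

```python
import csv

# Hypothetical CRM export: one row per deal, with the GCLID
# captured on the original form fill.
crm_rows = [
    {"gclid": "Cj0KCQiA...", "stage": "Closed Won", "amount": 25000,
     "closed_at": "2026-01-15 14:32:00"},
    {"gclid": "EAIaIQob...", "stage": "Open", "amount": 0,
     "closed_at": ""},
]

# Column names follow Google's click-conversion import template;
# check them against the current template in your account.
FIELDNAMES = ["Google Click ID", "Conversion Name", "Conversion Time",
              "Conversion Value", "Conversion Currency"]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()
    for row in crm_rows:
        if row["stage"] != "Closed Won":  # only send the stages you optimize to
            continue
        writer.writerow({
            "Google Click ID": row["gclid"],
            "Conversion Name": "Closed Won",  # must match a conversion action in Google Ads
            "Conversion Time": row["closed_at"],
            "Conversion Value": row["amount"],
            "Conversion Currency": "USD",
        })
```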

Embrace micro conversions with strategic values

Micro conversions signal intent. They show a “hand raiser” – someone engaged on your site who isn’t an MQL yet but is clearly interested.

The key is assigning relative value to these actions, even when you don’t know their exact revenue impact. Use a simple hierarchy to train automation what matters most:

  • Video views (value: 1): Shows curiosity, but qualification is unclear.
  • Ungated asset downloads (value: 10): Indicates stronger engagement and added effort.
  • Form fills (value: 100): Reflects meaningful commitment and willingness to share personal information.
  • Marketing qualified leads (value: 1,000): The highest-value signal and top optimization priority.

This value structure tells automation that one MQL matters more than 999 video views. Without these distinctions, campaigns chase impressive conversion rates driven by low-value actions — while real leads slip through the cracks.
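
To make the arithmetic concrete, here’s a minimal sketch using the illustrative values above:

```python
# The value hierarchy from the list above, expressed as a lookup table.
ACTION_VALUES = {
    "video_view": 1,
    "ungated_download": 10,
    "form_fill": 100,
    "mql": 1_000,
}

def campaign_value(actions: dict[str, int]) -> int:
    """Total conversion value a campaign reports to the bidding algorithm."""
    return sum(ACTION_VALUES[a] * n for a, n in actions.items())

# A campaign that chases views vs. one that produces a single MQL:
print(campaign_value({"video_view": 999}))  # 999
print(campaign_value({"mql": 1}))           # 1000 -- the MQL still wins
```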

Making Performance Max work for lead generation

You might dismiss Performance Max (PMax) for lead generation — and for good reason. Run it on a basic maximize conversions strategy, and it usually produces junk leads and wastes budget.

But PMax can deliver exceptional results when you combine conversion values and offline conversion data with a Target ROAS bid strategy.

One real client example shows what’s possible. They tracked three offline conversion actions — leads, opportunities, and customers — and valued customers at 50 times a lead. The results were dramatic:

  • Leads increased 150%
  • Opportunities increased 350%
  • Closed deals increased 200%

Closed deals became the campaign’s top-performing metric because they reflected real, paying customers. The key difference? Using conversion values with a Target ROAS strategy instead of basic maximize conversions.

Campaign-specific goals: An underutilized feature

Campaign-specific goals let you optimize campaigns for different conversion actions, giving you far more control and flexibility.

You can set conversion goals at the account level or make them campaign-specific. With campaign-specific goals, you can:

  • Run a mid-funnel campaign optimized only for lead form submissions using informational keywords.
  • Build audiences from those form fills to capture engaged prospects.
  • Launch a separate campaign optimized for qualified leads, targeting that warm audience with higher-value offers like demos or trials.

This approach avoids asking someone to “marry you on the first date.” It also keeps campaigns from competing against themselves by trying to optimize for conflicting goals.

Portfolio bidding: Reaching the data threshold faster

Portfolio bidding groups similar campaigns so you can reach the critical 30-conversions-per-month threshold faster.

For example, four separate campaigns might generate 12, 11, 0, and 15 conversions. On their own, none qualify. Grouped into a single portfolio, they total 38 conversions — giving automation far more data to optimize against.

You may still need separate campaigns for valid reasons — regional reporting, distinct budgets, or operational constraints. Portfolio bidding lets you keep that structure while still feeding the system enough volume to perform.

Bonus benefit: Portfolio bidding lets you set maximum CPCs. This prevents runaway bids when automation aggressively targets high-propensity users. This level of control is otherwise only available through tools like SA360.

First-party audiences: Powerful targeting signals

First-party audiences send strong signals about who you want to reach, which is critical for AI-powered campaigns.

If HubSpot or Salesforce is connected to Google Ads, you can import audiences and use them strategically:

  • Customer lists: Use them as exclusions to avoid paying for existing customers, or as lookalikes in Demand Gen campaigns.
  • Contact lists: Use them for observation to signal ideal audience traits, or for targeting to retarget engaged users.

Audiences make it much easier to trust broad match keywords and AI-driven campaign types like PMax or AI Max — approaches that often feel too loose for B2B without strong audience signals in place.

Leveraging AI for B2B lead generation

AI tools can significantly improve B2B advertising efficiency when you use them with intent. The key is remembering that most AI is trained on consumer behavior, not B2B buying patterns.

The essential B2B prompt addition

Always tell the AI you’re selling to other businesses. Start prompts with clear context, like: “You’re a SaaS company that sells to other businesses.” That single line shifts the AI’s lens away from consumer assumptions and toward B2B realities.
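
A minimal sketch of how to bake that line into every prompt your team sends; the exact wording is illustrative, so adapt it per client:

```python
# A reusable prefix that carries the B2B framing into every prompt.
B2B_CONTEXT = (
    "You're a SaaS company that sells to other businesses. "
    "Assume long sales cycles, multiple stakeholders, and no impulse purchases."
)

def b2b_prompt(task: str) -> str:
    """Prepend the B2B context so the model drops its consumer defaults."""
    return f"{B2B_CONTEXT}\n\n{task}"

print(b2b_prompt("Draft five responsive search ad headlines for a payroll API."))
```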

Client onboarding and profile creation

Use AI to build detailed client profiles by feeding it clear inputs, including:

  • What you sell and your core value.
  • Your unique selling propositions.
  • Target personas.
  • Ideal customer profiles.

Create a master template or a custom GPT for each client. This foundation sharpens every downstream AI task and dramatically improves accuracy and relevance.

Competitor research in minutes, not hours

Competitive analysis that once took 20–30 hours can now be done in 10–15 minutes. Ask AI to analyze your competitors and break down:

  • Current offers
  • Positioning and messaging
  • Value propositions
  • Customer sentiment
  • Social proof
  • Pricing strategies

AI delivers clean, well-structured tables you can screenshot for client decks or drop straight into Google Sheets for sorting and filtering. Use this insight to spot gaps, uncover opportunities, and identify clear strategic advantages.

Competitor keyword analysis

Use tools like Semrush or SpyFu to pull competitor keyword lists, then let AI do the heavy lifting. Create a spreadsheet with columns for each competitor’s keywords alongside your client’s keywords. Then ask the AI to:

  • Identify keywords competitors rank for that you don’t to uncover gaps to fill.
  • Identify keywords you own that competitors don’t to surface unique advantages.
  • Group keywords by theme to reveal patterns and inform campaign structure.

What once took hours of pivot tables, filtering, and manual cleanup now takes AI about five minutes.
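
If you want to sanity-check the AI’s output (or skip the AI for small lists), the gap analysis itself is plain set arithmetic. A minimal sketch with hypothetical keyword lists:

```python
# Hypothetical keyword lists pulled from a rank-tracking export.
client = {"payroll api", "payroll software", "contractor payments"}
competitors = {
    "CompetitorA": {"payroll software", "global payroll", "payroll compliance"},
    "CompetitorB": {"payroll software", "contractor payments", "eor services"},
}

all_competitor_kws = set().union(*competitors.values())

gaps = all_competitor_kws - client      # they rank, we don't: gaps to fill
unique = client - all_competitor_kws    # we rank, they don't: our advantages

print(sorted(gaps))    # ['eor services', 'global payroll', 'payroll compliance']
print(sorted(unique))  # ['payroll api']
```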

Automating routine tasks

  • Negative keyword review: Create an AI artifact that learns your filtering rules and decision logic. Feed it search query reports, and it returns clear add-or-ignore recommendations. You spend time reviewing decisions instead of doing first-pass analysis, which makes SQR reviews faster and easier to run more often.
  • Ad copy generation: Tools like RSA generators can produce headlines and descriptions from sample keywords and destination URLs. Pair them with your custom client GPT for even stronger starting points. Always review AI-generated copy, but refining solid drafts is far faster than writing from scratch.

Experiments: Testing what works

The Experiments feature is widely underused. Put it to work by testing:

  • Different bid strategies, including portfolio vs. standard
  • Match types
  • Landing pages
  • Campaign structures

Google Ads automatically reports performance, so there’s no manual math. It even includes insight summaries that tell you what to do next — apply the changes, end the experiment, or run a follow-up test.

Solutions: Pre-built scripts made easy

Solutions are prebuilt Google Ads scripts that automate common tasks, including:

  • Reporting and dashboards
  • Anomaly detection
  • Link checking
  • Flexible budgeting
  • Negative keyword list creation

Instead of hunting down scripts and pasting code, you answer a few setup questions and the solution runs automatically. Use caution with complex enterprise accounts, but for simpler structures, these tools can save a significant amount of time.

Key takeaways

Automation wasn’t built for lead generation, but with the right strategy, you can still make it work for B2B.

  • Send the right signals: Offline conversions with assigned values aren’t optional. First-party audiences add critical targeting context. Together, these signals make AI-driven campaigns work for B2B.
  • AI is your friend: Use AI to automate repetitive work — not to replace people. Take 50 search query reports off your team’s plate so they can focus on strategy instead of tedious analysis.
  • Leverage platform tools: Experiments, Solutions, campaign-specific goals, and portfolio bidding are powerful features many advertisers ignore. Use what’s already built into your ad platforms to get more out of every campaign.

Watch: It’s time to embrace automation for B2B lead gen 

Automation isn’t just for ecommerce. Learn how to drive more leads, cut costs, improve quality, and save time with AI-powered campaigns.

Why governance maturity is a competitive advantage for SEO

10 February 2026 at 19:00
How SEO governance shifts teams from reaction to prevention

Let me guess: you just spent three months building a perfectly optimized product taxonomy, complete with schema markup, internal linking, and killer metadata. 

Then, the product team decided to launch a site redesign without telling you. Now half your URLs are broken, the new templates strip out your structured data, and your boss is asking why organic traffic dropped 40%.

Sound familiar?

Here’s the thing: this isn’t an SEO failure, but a governance failure. It’s costing you nights and weekends trying to fix problems that should never have happened in the first place.

This article covers why weak governance keeps breaking SEO, how AI has raised the stakes, and how a visibility governance maturity model helps SEO teams move from firefighting to prevention.

Governance isn’t bureaucracy – it’s your insurance policy

I know what you’re thinking. “Great, another framework that means more meetings and approval forms.” But hear me out.

The Visibility Governance Maturity Model (VGMM) isn’t about creating red tape. It’s about establishing clear ownership, documented processes, and decision rights that prevent your work from being accidentally destroyed by teams who don’t understand SEO.

Think of it this way: VGMM is the difference between being the person who gets blamed when organic traffic tanks versus being the person who can point to documentation showing exactly where the process broke down – and who approved skipping the SEO review.

This maturity model:

  • Protects your work from being undone by releases you weren’t consulted on.
  • Documents your standards so you’re not explaining canonical tags for the 47th time.
  • Establishes clear ownership so you’re not expected to fix everything across six different teams.
  • Gets you a seat at the table when decisions affecting SEO are being made.
  • Makes your expertise visible to leadership in ways they understand.

The real problem: AI just made everything harder

Remember when SEO was mostly about your website and Google? Those were simpler times.

Now you’re trying to optimize for:

  • AI Overviews that rewrite your content.
  • ChatGPT citations that may or may not link back.
  • Perplexity summaries that pull from competitors.
  • Voice assistants that only cite one source.
  • Knowledge panels that conflict with your site.

And you’re still dealing with:

  • Content teams who write AI-generated fluff.
  • Developers who don’t understand crawl budget.
  • Product managers who launch features that break structured data.
  • Marketing directors who want “just one small change” that tanks rankings.

Without governance, you’re the only person who understands how all these pieces fit together. 

When something breaks, everyone expects you to fix it – usually yesterday. When traffic is up, it’s because marketing ran a great campaign. When it’s down, it’s your fault.

You become the hero the organization depends on, which sounds great until you realize you can never take a real vacation, and you’re working 60-hour weeks.

Dig deeper: Why most SEO failures are organizational, not technical

What VGMM actually measures – in terms you care about

VGMM doesn’t care about your keyword rankings or whether you have perfect schema markup. It evaluates whether your organization is set up to sustain SEO performance without burning you out. Below are the five maturity levels that translate to your daily reality:

Level 1: Unmanaged (your current nightmare)

  • Nobody knows who’s responsible for SEO decisions.
  • Changes happen without SEO review.
  • You discover problems after they’ve tanked traffic.
  • You’re constantly firefighting.
  • Documentation doesn’t exist or is ignored.

Level 2: Aware (slightly better)

  • Leadership admits SEO matters.
  • Some standards exist but aren’t enforced.
  • You have allies but no authority.
  • Improvements happen but get reversed next quarter.
  • You’re still the only one who really gets it.

Level 3: Defined (getting somewhere)

  • SEO ownership is documented.
  • Standards exist, and some teams follow them.
  • You’re consulted before major changes.
  • QA checkpoints include SEO review.
  • You’re working normal hours most weeks.

Level 4: Integrated (the dream)

  • SEO is built into release workflows.
  • Automated checks catch problems before they ship.
  • Cross-functional teams share accountability.
  • You can actually take a vacation without a disaster.
  • Your expertise is respected and resourced.

Level 5: Sustained (unicorn territory)

  • SEO survives leadership changes.
  • Governance adapts to new AI surfaces automatically.
  • Problems are caught before they impact traffic.
  • You’re doing strategic work, not firefighting.
  • The organization values prevention over reaction.

Most organizations sit at Level 1 or 2. That’s not your fault – it’s a structural problem that VGMM helps diagnose and fix.

Dig deeper: SEO’s future isn’t content. It’s governance

How VGMM works: The less boring explanation

VGMM coordinates multiple domain-specific maturity models. Think of it as a health checkup that looks at all your vital signs, not just one metric.

It evaluates maturity across domains like:

  • SEO governance: Your core competency.
  • Content governance: Are writers following standards?
  • Performance governance: Is the site actually fast?
  • Accessibility governance: Is the site inclusive?
  • Workflow governance: Do processes exist and work?

Each domain gets scored independently, then VGMM looks at how they work together. Because excellent SEO maturity doesn’t matter if the performance team deploys code that breaks the site every Tuesday or if the content team publishes AI-generated nonsense that tanks your E-E-A-T signals.

VGMM produces a 0–100% score based on:

  • Domain scores: How mature is each area?
  • Weighting: Which domains matter most for your business?
  • Dependencies: Are weaknesses in one area breaking strengths in another?
  • Coherence: Do decision rights and accountability actually align?

The final score isn’t about effort – it’s about whether governance actually works.
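
The article doesn’t publish the underlying math, but a weighted average pulled toward the weakest dependency captures the idea. Here’s a minimal, illustrative sketch – the domains come from the list above, while the weights and the dependency penalty are assumptions, not official VGMM scoring:

```python
# Illustrative only: the actual VGMM formula isn't published here,
# so the weights and penalty below are assumptions.
domain_scores = {   # 0-100 per domain, from each domain's own assessment
    "seo": 70, "content": 55, "performance": 40,
    "accessibility": 60, "workflow": 50,
}
weights = {         # which domains matter most for this business
    "seo": 0.3, "content": 0.25, "performance": 0.2,
    "accessibility": 0.1, "workflow": 0.15,
}

weighted = sum(domain_scores[d] * w for d, w in weights.items())

# Dependency penalty: a weak domain drags down the domains that rely on it,
# so the overall score is pulled toward the weakest link.
weakest = min(domain_scores.values())
overall = 0.7 * weighted + 0.3 * weakest

print(f"{overall:.0f}%")  # 51% for these example inputs
```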

What this means for your daily life

Before VGMM-style governance:

  • Product launches a redesign → You find out when traffic drops.
  • Content team uses AI → You discover thin content in Search Console.
  • Dev changes URL structure → You spend a week fixing redirects.
  • Marketing wants “quick changes” → You explain why it’s not quick (again).
  • Site goes down → Everyone asks why you didn’t catch it.

After governance maturity improves:

  • Product can’t launch without SEO sign-off.
  • Content AI usage has review checkpoints.
  • URL changes require documented SEO approval.
  • Marketing requests go through defined workflows.
  • Site monitoring includes automated SEO health checks.

You move from reactive firefighting to proactive prevention. Your weekends become yours again.

The supporting models: What they actually check

VGMM doesn’t score you on technical SEO execution. It checks whether the organization has processes in place to prevent SEO disasters.

SEO Governance Maturity Model (SEOGMM) asks:

  • Are there documented SEO standards?
  • Who can override them, and how?
  • Do templates enforce SEO requirements?
  • Are there QA checkpoints before releases?
  • Can SEO block launches that will cause problems?

Content Governance Maturity Model (CGMM) asks:

  • Are content quality standards documented?
  • Is AI-generated content reviewed?
  • Are writers trained on SEO basics?
  • Is there a process for updating outdated content?

Website Performance Maturity Model (WPMM) asks:

  • Are Core Web Vitals monitored?
  • Can releases be rolled back if they break performance?
  • Is there a performance budget?
  • Are third-party scripts governed?

You get the idea. Each domain has its own checklist, and VGMM shows leadership where gaps create risk.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

How to pitch this to your boss

You don’t need to explain VGMM theory. You need to connect it to problems leadership already knows exist.

  • Frame it as risk reduction: “We’ve had three major traffic drops this year from changes that SEO didn’t review. VGMM helps us identify where our process breaks down so we can prevent this.”
  • Frame it as efficiency: “I’m spending 60% of my time firefighting problems that could have been prevented. VGMM establishes processes so I can focus on growth opportunities instead.”
  • Frame it as a competitive advantage: “Our competitors are getting cited in AI Overviews, and we’re not. VGMM evaluates whether we have the governance structure to compete in AI-mediated search.”
  • Frame it as scalability: “Right now, our SEO capability depends entirely on me. If I get hit by a bus tomorrow, nobody knows how to maintain what we’ve built. VGMM establishes documentation and processes that make our SEO sustainable.”
  • The ask: “I’d like to conduct a VGMM assessment to identify where our processes need strengthening.”

What success actually looks like

Organizations with higher VGMM maturity experience measurably better outcomes:

  • Fewer unexplained traffic drops because changes are reviewed.
  • More stable AI citations because content quality is governed.
  • Less rework after launches because SEO is built into workflows.
  • Clearer accountability because ownership is documented.
  • Better resource allocation because gaps are visible to leadership.

But the real win for you personally: 

  • You stop being the hero who saves the day and become the strategist who prevents disasters. 
  • Your expertise is recognized and properly resourced. 
  • You can take actual vacations. 
  • You work normal hours most of the time.

Your job becomes about building and improving, not constantly fixing.

Getting started: Practical next steps

Step 1: Self-assessment

Look at the five maturity levels. Where is your organization honestly sitting? If you’re at Level 1 or 2, you have evidence for why governance matters.

Step 2: Document current-state pain

Make a list of the last six months of SEO incidents:

  • Changes that weren’t reviewed.
  • Traffic drops from preventable problems.
  • Time spent fixing avoidable issues.
  • Requests that had to be explained multiple times.

This becomes your business case.

Step 3: Start with one domain

You don’t need to implement full VGMM immediately. Start with SEOGMM:

  • Document your standards.
  • Create a review checklist.
  • Establish who can approve exceptions.
  • Get stakeholder sign-off on the process.

Step 4: Show results 

Track prevented problems. When you catch an issue before it ships, document it. When a process prevents a regression, quantify the impact. Build your case for expanding governance.

Step 5: Expand systematically

Once SEOGMM is working, expand to related domains (content, performance, accessibility). Show how integrated governance catches problems that individual domain checks miss.

Why governance determines whether SEO survives

Governance isn’t about making your job harder. It’s about making your organization work better so your job becomes sustainable.

VGMM gives you a framework for diagnosing why SEO keeps getting undermined by other teams and a roadmap for fixing it. It translates your expertise into language that leadership understands. It protects your work from accidental destruction.

Most importantly, it moves you from being the person who’s always fixing emergencies to being the person who builds systems that prevent them.

You didn’t become an SEO professional to spend your career firefighting. VGMM helps you get back to doing the work that actually matters – the strategic, creative, growth-focused work that attracted you to SEO in the first place.

If you’re tired of watching your best work get undone by teams who don’t understand SEO, if you’re exhausted from being the only person who knows how everything works, if you want your expertise to be recognized and protected – start the VGMM conversation with your leadership.

The framework exists. What’s missing is someone in your organization saying, “We need to govern visibility like we govern everything else that matters.”

That someone is you.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Why PPC measurement feels broken (and why it isn’t)

10 February 2026 at 18:00
Why PPC measurement works differently in a privacy-first world

If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed. 

You see it in the day-to-day work: 

  • GCLIDs missing from URLs.
  • Conversions arriving later than expected.
  • Reports that take longer to explain while still feeling less definitive than they used to.

When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.

But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.

Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.

Why this shift feels so disorienting

I’ve been close to this problem for most of my career. 

Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns. 

Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.

That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable. 

As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.

It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way. 

Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.

Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future

The old world: click IDs and deterministic matching

For many years, Google Ads measurement followed a predictable pattern. 

  • A user clicked an ad. 
  • A click ID, or gclid, was appended to the URL. 
  • The site stored it in a cookie. 
  • When a conversion fired, that identifier was sent back and matched to the click.

This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders. 

As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about. 

We could literally see what happened with each click and which ones led to individual conversions.

That reliability depended on a specific set of conditions.

  • Browsers needed to allow parameters through. 
  • Cookies had to persist long enough to cover the conversion window. 
  • Users had to accept tracking by default. 

Luckily, those conditions were common enough that the model worked really well.
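
For readers who never worked with that flow directly, here's a minimal sketch of what the deterministic pattern amounted to. The function and field names are illustrative, not any specific platform's API:

```python
# Minimal sketch of the old deterministic flow: capture the click ID
# at landing, persist it with the lead, and send it back when the
# conversion fires. Names are illustrative, not a real API.

from urllib.parse import urlparse, parse_qs

def capture_gclid(landing_url: str) -> str | None:
    """Extract the gclid parameter a Google Ads click appends to the URL."""
    params = parse_qs(urlparse(landing_url).query)
    return params.get("gclid", [None])[0]

# 1. User clicks an ad; the click ID rides along on the landing URL.
gclid = capture_gclid("https://example.com/signup?gclid=TeSter-123")

# 2. The site persists it (historically in a cookie) alongside the lead.
lead = {"email": "user@example.com", "gclid": gclid}

# 3. When the conversion fires, the stored ID is sent back for matching.
if lead["gclid"]:
    print("Deterministic match possible:", {"gclid": lead["gclid"], "value": 99.0})
else:
    print("No click ID survived - the gap the rest of this article is about.")
```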

Why that model breaks more often now

Browsers now impose tighter limits on how identifiers are stored and passed.

Apple’s Intelligent Tracking Prevention, enhanced tracking protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.

URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.

Click IDs sometimes never reach the site, or they disappear before a conversion occurs.

This is expected behavior in modern browser environments, not an edge case, so we have to account for it.

Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.

This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.

The adjustment isn’t just technical

On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing. 

We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.

This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions. 

That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time: working in ad platforms.

A lot of effort goes into optimizing ad platform settings when the better use of that time is often fixing broken data so better decisions can be made.

Dig deeper: Advanced analytics techniques to measure PPC


What still works: Client-side and server-side approaches

So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.

Pixels still matter, but they have limits

Client-side pixels, like the Google tag, continue to collect useful data.

They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.

But these pixels are constrained by the browser. Scripts can be blocked, execution can fail, and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.

When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.

Changing how pixels are delivered

Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.

Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.

This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.

What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.

This distinction matters when comparing Tag Gateway and server-side GTM.

  • Tag Gateway focuses on routing and ease of setup.
  • Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.

The two address different problems.

Here’s the key point: better infrastructure affects how data moves, not what it means.

Event definitions, conversion logic, and consistency across systems still determine data quality.

A reliable pipeline delivers whatever it’s given: the garbage you put in comes back out just as dependably.

Offline conversion imports: Moving measurement off the browser

Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.

Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site. 

This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.

Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.

The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.

Offline imports don’t replace pixels. They reduce dependence on them.
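
As a rough illustration, an offline import usually starts as a CRM export reshaped into conversion rows. The sketch below only shows that reshaping step – the payload fields loosely mirror what click-conversion uploads expect, but it isn't a literal Google Ads API call, and the conversion action name is hypothetical:

```python
# Illustrative sketch: turn CRM deals into offline-conversion rows.
# Field names loosely mirror click-conversion uploads; this is not
# a literal Google Ads API call.

from datetime import datetime, timezone

crm_deals = [
    {"gclid": "TeSter-123", "stage": "closed_won", "value": 4200.0,
     "closed_at": datetime(2026, 2, 3, 14, 30, tzinfo=timezone.utc)},
    {"gclid": None, "stage": "closed_lost", "value": 0.0,
     "closed_at": datetime(2026, 2, 4, 9, 0, tzinfo=timezone.utc)},
]

def build_upload(deals: list[dict]) -> list[dict]:
    """Keep won deals that still have a click ID, shaped for upload."""
    rows = []
    for deal in deals:
        if deal["stage"] != "closed_won" or not deal["gclid"]:
            continue  # no identifier -> relies on enhanced/modeled matching
        rows.append({
            "gclid": deal["gclid"],
            "conversion_action": "offline_sale",  # hypothetical action name
            "conversion_time": deal["closed_at"].strftime("%Y-%m-%d %H:%M:%S%z"),
            "conversion_value": deal["value"],
            "currency_code": "USD",
        })
    return rows

print(build_upload(crm_deals))
```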

Dig deeper: Offline conversion tracking: 7 best practices and testing strategies

How Google fills the gaps

Even with pixels and offline imports working together, some conversions can’t be directly observed.

Matching when click IDs are missing

When click IDs are unavailable, Google Ads can still match conversions using other inputs.

This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.

This is what Enhanced Conversions help achieve.
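
The preparation step for those identifiers is widely documented: normalize the value, then hash it with SHA-256 before it's sent. A minimal sketch (exact normalization rules vary by identifier type, so treat this as the general shape):

```python
# Enhanced Conversions-style identifier preparation: the raw email is
# normalized, then SHA-256 hashed, so matching can be deterministic
# without exposing the address itself.

import hashlib

def normalize_and_hash(email: str) -> str:
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(normalize_and_hash("  User@Example.COM "))
# Produces the same hash as "user@example.com".
```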

When deterministic matching – an “if this, then that” link between identifier and conversion – isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.

These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.

This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.

One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.

Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.

Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.

Modeled conversions as a standard input

Modeled conversions are now a standard part of Google Ads and GA4 reporting.

They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.

These models are constrained by available data and validated through consistency checks and holdback experiments.

When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.

Dig deeper: Google Ads pushes richer conversion imports

Boundaries still matter

Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent. 

Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals. 

Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.

Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.

Designing for partial data

Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.

Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.

But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.

Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.

In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Making peace with partial observability

The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.

The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.

Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.

In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.

Measurement is becoming more strategic than ever.

How SEO leaders can explain agentic AI to ecommerce executives

10 February 2026 at 17:00
How to communicate agentic AI to ecommerce leadership without the hype

Agentic AI is increasingly appearing in leadership conversations, often accompanied by big claims and unclear expectations. For SEO leaders working with ecommerce brands, this creates a familiar challenge.

Executives hear about autonomous agents, automated purchasing, and AI-led decisions, and they want to know what this really means for growth, risk, and competitiveness.

What they don’t need is more hype. They need clear explanations, grounded thinking, and practical guidance. 

This is where SEO leaders can add real value, not by predicting the future, but by helping leadership understand what is changing, what isn’t, and how to respond without overreacting. Here’s how.

Start by explaining what ‘agentic’ actually means

A useful first step is to remove the mystery from the term itself. Agentic systems don’t replace customers, they act on behalf of customers. The intent, preferences, and constraints still come from a person.

What changes is who does the work.

Discovery, comparison, filtering, and sometimes execution are handled by software that can move faster and process more information than a human can.

When speaking to executive teams, a simple framing works best:

  • “We’re not losing customers, we’re adding a new decision-maker into the journey. That decision-maker is software acting as a proxy for the customer.” 

Once this is clear, the conversation becomes calmer and more practical, and the focus moves away from fear and toward preparation.

Keep expectations realistic and avoid the hype

Another important role for SEO leaders is to slow the conversation down. Agentic behavior will not arrive everywhere at the same time. Its impact will be uneven and gradual.

Some categories will see change earlier because their products are standardized and data is already well structured. Others will move more slowly because trust, complexity, or regulation makes automation harder.

This matters because leadership teams often fall into one of two traps:

  1. Panic, where plans are rewritten too quickly, budgets move too fast, and teams chase futures that may still be some distance away. 
  2. Dismissal, where nothing changes until performance clearly drops, and by then the response is rushed.

SEO leaders can offer a steadier view. Agentic AI accelerates trends that already exist. Personalized discovery, fewer visible clicks, and more pressure on data quality are not new problems. 

Agents simply make them more obvious. Seen this way, agentic AI becomes a reason to improve foundations, not a reason to chase novelty.

Dig deeper: Are we ready for the agentic web?

Change the conversation from rankings to eligibility

One of the most helpful shifts in executive conversations is moving away from rankings as the main outcome of SEO. In an agent-led journey, the key question isn’t “do we rank well?” but “are we eligible to be chosen at all?”

Eligibility depends on clarity, consistency, and trust. An agent needs to understand what you sell, who it is for, how much it costs, whether it is available, and how risky it is to choose you on behalf of a user. This is a strong way to connect SEO to commercial reality.

Questions worth raising include whether product information is consistent across systems, whether pricing and availability are reliable, and whether policies reduce uncertainty or create it. Framed this way, SEO becomes less about chasing traffic and more about making the business easy to select.

Explain why SEO no longer sits only in marketing

Many executives still see SEO as a marketing channel, but agentic behavior challenges that view.

Selection by an agent depends on factors that sit well beyond marketing. Data quality, technical reliability, stock accuracy, delivery performance, and payment confidence all play a role.

SEO leaders should be clear about this. This isn’t about writing more content. It’s about making sure the business is understandable, reliable, and usable by machines.

Positioned correctly, SEO becomes a connecting function that helps leadership see where gaps in systems or data could prevent the brand from being selected. This often resonates because it links SEO to risk and operational health, not just growth.

Dig deeper: How to integrate SEO into your broader marketing strategy

Be clear that discovery will change first

For most ecommerce brands, the earliest impact of agentic systems will be at the top of the funnel. Discovery becomes more conversational and more personal.

Users describe situations, needs, and constraints instead of typing short search phrases, and the agent then turns that context into actions.

This reduces the value of simply owning category head terms. If an agent knows a user’s budget, preferences, delivery expectations, and past behavior, it doesn’t behave like a first-time visitor. It behaves like a well-informed repeat customer.

This creates a reporting challenge. Some SEO work will no longer look like direct demand creation, even though it still influences outcomes. Leadership teams need to be prepared for this shift.


Reframe consideration as filtering, not persuasion

The middle of the funnel also changes shape. Today, consideration often involves reading reviews, comparing options, and seeking reassurance.

In an agent-led journey, consideration becomes a filtering process, where the agent removes options it believes the user would reject and keeps those that fit.

This has clear implications. Generic content becomes less effective as a traffic driver because agents can generate summaries and comparisons instantly. Trust signals become structural, meaning claims need to be backed by consistent and verifiable information.

In many cases, a brand may be chosen without the user being consciously aware of it. That can be positive for conversion, but risky for long-term brand strength if recognition isn’t built elsewhere.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

Set honest expectations about measurement

Executives care about measurement, and agentic AI makes this harder. As more discovery and consideration happen inside AI systems, fewer interactions leave clean attribution trails. Some impact will show up as direct traffic, and some will not be visible at all.

SEO leaders should address this early. This isn’t a failure of optimization. It reflects the limits of today’s analytics in a more mediated world.

The conversation should move toward directional signals and blended performance views, rather than precise channel attribution that no longer reflects how decisions are made.

Promote a proactive, low-risk response

The most important part of the leadership discussion is what to do next. The good news is that most sensible responses to agentic AI are low risk.

Improving product data quality, reducing inconsistencies across platforms, strengthening reliability signals, and fixing technical weaknesses all help today, regardless of how quickly agents mature.

Investing in brand demand outside search also matters. If agents handle more of the comparison work, brands that users already trust by name are more likely to be selected.

This reassures leaders that action doesn’t require dramatic change, only disciplined improvement.

Agentic AI changes the focus, not the fundamentals

For SEO leaders, agentic AI changes the focus of the role. The work shifts from optimizing pages to protecting eligibility, from chasing visibility to reducing ambiguity, and from reporting clicks to explaining influence.

This requires confidence, clear communication, and a willingness to challenge hype. Agentic AI makes SEO more strategic, not any less important.

Agentic AI should not be treated as an immediate threat or a guaranteed advantage. It’s a shift in how decisions are made.

For ecommerce brands, the winners will be those that stay calm, communicate clearly, and adapt their SEO thinking from driving clicks to earning selection.

That is the conversation SEO leaders should be having now.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026

What repeated ChatGPT runs reveal about brand visibility

10 February 2026 at 16:00

We know AI responses are probabilistic – if you ask an AI the same question 10 times, you’ll get 10 different responses.

But how different are the responses?

That’s the question Rand Fishkin explored in some interesting research.

And it has big implications for how we should think about tracking AI visibility for brands.

In his research, he tested prompts asking for recommendations for all sorts of products and services, including everything from chef’s knives to cancer care hospitals and Volvo dealerships in Los Angeles.

Basically, he found that:

  • AIs rarely recommend the same list of brands in the same order twice.
  • For a given topic (e.g., running shoes), AIs recommend a certain handful of brands far more frequently than others.

For my research, as always, I’m focusing exclusively on B2B use cases. Plus, I’m building on Fishkin’s work by addressing these additional questions:

  • Does prompt complexity affect the consistency of AI recommendations?
  • Does the competitiveness of the category affect the consistency of recommendations?

Methodology

To explore those questions, I first designed 12 prompts:

  • Competitive vs. niche: Six of the prompts are about highly competitive B2B software categories (e.g., accounting software), and the other six are about less crowded categories (e.g., user entity behavior analytics (UEBA) software). I identified the categories using Contender’s database, which tracks how many brands ChatGPT associates with 1,775 different software categories.
  • Simple vs. nuanced prompts: Within both sets of “competitive” and “niche” prompts, half of the prompts are simple (“What’s the best accounting software?”) and the other half are nuanced prompts including a persona and use case (“For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what’s the best accounting software?”).

I ran each of the 12 prompts 100 times through the logged-out, free version of ChatGPT at chatgpt.com (i.e., not the API). I used a different IP address for each of the 1,200 interactions to simulate 1,200 different users starting new conversations.

Limitations: This research only covers responses from ChatGPT. But given the patterns in Fishkin’s results and the similar probabilistic nature of LLMs, you can probably generalize the directional (not absolute value) findings below to most/all AIs.

Findings

So what happens when 100 different people submit the same prompt to ChatGPT, asking for product recommendations?

How many ‘open slots’ in ChatGPT responses are available to brands?

On average, ChatGPT will mention 44 brands across 100 different responses. But one of the response sets included as many as 95 brands – it really depends on the category.

How many brands does ChatGPT draw from, on average?

Competitive vs. niche categories

On that note, for prompts covering competitive categories, ChatGPT mentions about twice as many brands per 100 responses compared to the responses to prompts covering “niche” categories. (This lines up with the criteria I used to select the categories I studied.)

Simple vs. nuanced prompts

On average, ChatGPT mentioned slightly fewer brands in response to nuanced prompts. But this wasn’t a consistent pattern – for any given software category, sometimes nuanced questions ended up with more brands mentioned, and sometimes simple questions did.

This was a bit surprising, since I expected more specific requests (e.g., “For a SOC analyst needing to triage security alerts from endpoints efficiently, what’s the best EDR software?”) to consistently yield a narrower set of potential solutions from ChatGPT.

I think ChatGPT might not be better at tailoring a list of solutions to a specific use case because it doesn’t have a deep understanding of most brands. (More on this data in an upcoming note.)

Return of the ’10 blue links’

In each individual response, ChatGPT will, on average, mention only 10 brands.

There’s quite a range, though – a minimum of 6 brands per response and a maximum of 15 when averaging across response sets.

How many brands per response, on average?

But a single response typically names about 10 brands regardless of category or prompt type.

The big difference is in how much the pool of brands rotates across responses – competitive categories draw from a much deeper bench, even though each individual response names a similar count.

Everything old (in SEO) truly is new again (in GEO/AEO). It reminds me of trying to get a placement in one of Google’s “10 blue links”.

Dig deeper: How to measure your AI search brand visibility and prove business impact


How consistent are ChatGPT’s brand recommendations?

When you ask ChatGPT for a B2B software recommendation 100 different times, there are only ~5 brands, on average, that it’ll mention 80%+ of the time.

To put it in context, that’s just 11% of the ~44 brands it’ll mention at all across those 100 responses.

ChatGPT knows ~44 brands in your category

So it’s quite competitive to become one of the brands ChatGPT consistently mentions whenever someone asks for recommendations in your category.

As you’d expect, these “dominant” brands tend to be big, established brands with strong recognition. For example, the dominant brands in the accounting software category are QuickBooks, Xero, Wave, FreshBooks, Zoho, and Sage.

If you’re not a big brand, you’re better off being in a niche category:

It's easier to get good AI visibility in niche categories

When you operate in a niche category, not only are you literally competing with fewer companies, but there are also more “open slots” available to you to become a dominant brand in ChatGPT’s responses.

In niche categories, 21% of all the brands ChatGPT mentions are dominant brands, getting mentioned 80%+ of the time.

Compare this to just 7% of all brands being dominant in competitive categories, where the majority of brands (72%) are languishing in the long tail, getting mentioned less than 20% of the time.

The responses to nuanced prompts are harder to dominate

A nuanced prompt doesn’t dramatically change the long tail of little-seen brands (with <20% visibility), but it does change the “winner’s circle.” Adding persona context to a prompt makes it a bit more difficult to reach the dominant tier – you can see the steeper “cliff” a brand has to climb in the “nuanced prompts” graph above.

This makes intuitive sense: when someone asks “best accounting software for a Head of Finance,” ChatGPT has a more specific answer in mind and commits a bit more strongly to fewer top picks.

Still, it’s worth noting that the overall pool doesn’t shrink much – ChatGPT mentions ~42 brands in 100 responses to nuanced prompts, just a handful fewer than the ~46 mentioned in response to simple prompts. If nuanced prompts make the winner’s circle a bit more exclusive, why don’t they also narrow the total field?

Partly, it could be that the “nuanced” questions we fed it weren’t meaningfully narrower or more specific than what was implied in the simple questions we asked.

But, based on other data I’m seeing, I think this is partly about ChatGPT not knowing enough about most brands to be more selective. I’ll share more on this in an upcoming note.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What does this mean for B2B marketers?

If you’re not a dominant brand, pick your battles – niche down

It’s never been more important to differentiate. 21% of mentioned brands reach dominant status in niche categories vs. 7% in competitive ones.

Without time and a lot of money for brand marketing, an upstart tech company isn’t going to become a dominant brand in a broad, established category like accounting software.

But the field is less competitive when you lean into your unique, differentiating strengths. ChatGPT is more likely to treat you like a dominant brand if you work to make your product known as “the best accounting software for commercial real estate companies in North America.”

Most AI visibility tracking tools are grossly misleading

Given the inconsistency of ChatGPT’s recommendations, a single spot-check for any given prompt is nearly meaningless. Unfortunately, checking each prompt just once per time period is exactly what most AI visibility tracking tools do.

If you want anything approaching a statistically significant visibility score for any given prompt, you need to run the prompt at least dozens of times, even 100+ times, depending on how precise you need the data to be.

But that’s obviously not practical for most people, so my suggestion is: For the key, bottom-of-funnel prompts you’re tracking, run them each ~5 times whenever you pull data.

That’ll at least give you a reasonable sense of whether your brand tends to show up most of the time, some of the time, or never.

Your goal should be to have a confident sense of whether your brand is in the little-seen long tail, the visible middle, or the dominant top-tier for any given prompt. Whether you use my tiers of ‘under 20%’, ‘20–80%’, and ‘80%+’, or your own thresholds, this is the approach that follows the data and common sense.
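
If you want to operationalize that, the tally is simple enough to script. Here's a sketch of the tiering logic – how you actually collect the responses is up to you, and the sample data below is made up:

```python
# Bucket brands by how often they appear across repeated runs of the
# same prompt, using the <20% / 20-80% / 80%+ tiers suggested above.

from collections import Counter

def visibility_tiers(responses: list[str], brands: list[str]) -> dict:
    runs = len(responses)
    mentions = Counter()
    for text in responses:
        for brand in brands:
            if brand.lower() in text.lower():
                mentions[brand] += 1
    tiers = {}
    for brand in brands:
        rate = mentions[brand] / runs
        if rate >= 0.8:
            tiers[brand] = "dominant (80%+)"
        elif rate >= 0.2:
            tiers[brand] = "visible middle (20-80%)"
        else:
            tiers[brand] = "long tail (<20%)"
    return tiers

sample_responses = [  # made-up stand-ins for collected ChatGPT outputs
    "QuickBooks, Xero and Wave are solid choices...",
    "Try Xero or Sage for multi-entity needs.",
    "QuickBooks and Xero are the most popular.",
    "Xero and FreshBooks suit small teams.",
    "QuickBooks, Xero and Zoho Books all work.",
]
print(visibility_tiers(sample_responses, ["Xero", "QuickBooks", "Sage"]))
```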

What’s next?

In future newsletters and LinkedIn posts, I’m going to build on these findings with new research:

  • How does ChatGPT talk about the brands it consistently recommends? Is it indicative of how much ChatGPT “knows” about brands?
  • Do different prompts with the same search intent tend to produce the same set of recommendations?
  • How consistent is “rank” in the responses? Do dominant brands tend to get mentioned first?

This article was originally published on Visible on beehiiv (as Most AI visibility tracking is misleading (here’s my new data)) and is republished with permission.

Reddit says 80 million people now use its search weekly

9 February 2026 at 22:56
Reddit search

Eighty million people use Reddit search every week, Reddit said on its Q4 2025 earnings call last week. The increase followed a major change: Reddit merged its core search with its AI-powered Reddit Answers and began positioning the platform as a place where users can start — and finish — their searches.

  • Executives framed the move as a response to changing behavior. People are increasingly researching products and making decisions by asking questions within communities rather than relying solely on traditional search engines.
  • Reddit is betting it can keep more of that intent on-platform, rather than acting mainly as a source of links for elsewhere.

Why we care. Reddit is becoming a place where people start — and complete — their searches without ever touching Google. For brands, that means visibility on Reddit now matters as much as ranking in traditional and AI search for many buying decisions.

Reddit’s search ambitions. CEO Steve Huffman said Reddit made “significant progress” in Q4 by unifying keyword search with Reddit Answers, its AI-driven Q&A experience. Users can now move between standard search results and AI answers in a single interface, with Answers also appearing directly inside search results.

  • “Reddit is already where people go to find things,” Huffman said, adding the company wants to become an “end-to-end search destination.”
  • More than 80 million people searched Reddit weekly in Q4, up from 60 million a year earlier, as users increasingly come to the platform to research topics — not just scroll feeds or click through from Google.

Reddit Answers is growing. Reddit Answers is driving much of that growth. Huffman said Answers queries jumped from about 1 million a year ago to 15 million in Q4, while overall search usage rose sharply in parallel.

  • He said Answers performs best for open-ended questions—what to buy, watch, or try—where people want multiple perspectives instead of a single factual answer. Those queries align naturally with Reddit’s community-driven discussions.
  • Reddit is also expanding Answers beyond text. Huffman said the company is piloting “dynamic agentic search results” that include media formats, signaling a more interactive and immersive search experience ahead.

Search is a ‘big one’ for Reddit. Huffman said the company is testing new app layouts that give search prominent placement, including versions with a large, always-visible search bar at the top of the home screen.

  • COO Jennifer Wong said search and Answers represent a major opportunity, even though monetization remains early on some surfaces.
  • Wong described Reddit search behavior as “incremental and additive” to existing engagement and often tied to high-intent moments, such as researching purchases or comparing options.

AI answers make Reddit more important. Huffman also linked Reddit’s search push to its partnerships with Google and OpenAI. He said Reddit content is now the most-cited source in AI-generated answers, highlighting the platform’s growing influence on how people find information.

  • Reddit sees AI summaries as an opportunity — to move users from AI answers into Reddit communities, where they can read discussions, ask follow-up questions, and participate.
  • If someone asks “what the best speaker is,” he said, Reddit wants users to discover not just a summary, but the community where real people are actively debating the topic.

Reddit earnings. Reddit Reports Fourth Quarter and Full Year 2025 Results; Announces $1 Billion Share Repurchase Program

OpenAI starts testing ChatGPT ads

9 February 2026 at 22:09

OpenAI confirmed today that it’s rolling out its first live test of ads in ChatGPT, showing sponsored messages directly inside the app for select users.

The details. The ads will appear in a clearly labeled section beneath the chat interface, not inside responses, keeping them visually separate from ChatGPT’s answers.

  • OpenAI will show ads to logged-in users on the free tier and its lower-cost Go subscription.
  • Advertisers won’t see user conversations or influence ChatGPT’s responses, even though ads will be tailored based on what OpenAI believes will be helpful to each user, the company said.

How ads are selected. During the test, OpenAI matches ads to conversation topics, past chats, and prior ad interactions.

  • For example: A user researching recipes might see ads for meal kits or grocery delivery. If multiple advertisers qualify, OpenAI shows the most relevant option first.

User controls. Users get granular controls over the experience. They can dismiss ads, view and delete separate ad history and interest data, and toggle personalization on or off.

  • Turning personalization off limits ads to the current chat.
  • Free users can also opt out of ads in exchange for fewer daily messages or upgrade to a paid plan.

Why we care. ChatGPT is one of the world’s largest consumer AI platforms. Even a limited ad rollout could mark a major shift in how conversational AI gets monetized — and how brands reach users.

Bottom line. OpenAI is officially moving into ads inside ChatGPT, testing how sponsored content can coexist with conversational AI at massive scale.

OpenAI’s announcement. Testing ads in ChatGPT (OpenAI)

Google AI Mode doesn’t favor above-the-fold content: Study

9 February 2026 at 21:43
AI Mode depth doesn't matter

Google’s AI Mode isn’t more likely to cite content that appears “above the fold,” according to a study from SALT.agency, a technical SEO and content agency.

  • After analyzing more than 2,000 URLs cited in AI Mode responses, researchers found no correlation between how high text appears on a page and whether Google’s AI selects it for citation.

Pixel depth doesn’t matter. AI Mode cited text from across entire pages, including content buried thousands of pixels down.

  • Citation depth showed no meaningful relationship to visibility.
  • Average depth varied by vertical, from about 2,400 pixels in travel to 4,600 pixels in SaaS, with many citations far below the traditional “above the fold” area.

Page layout affects depth, not visibility. Templates and design choices influenced how far down the cited text appeared, but not whether it was cited.

  • Pages with large hero images or narrative layouts pushed cited text deeper, while simpler blog or FAQ-style pages surfaced citations earlier.
  • No layout type showed a visibility advantage in AI Mode.

Descriptive subheadings matter. One consistent pattern emerged: AI Mode frequently highlighted a subheading and the sentence that followed it.

  • This suggests Google uses heading structures to navigate content, then samples opening lines to assess relevance – behavior consistent with long-standing search practices, according to SALT.

What Google is likely doing. SALT believes AI Mode relies on the same fragment indexing technology Google has used for years. Pages are broken into sections, and the most relevant fragment is retrieved regardless of where it appears on the page.

What they’re saying. While the study examined only one structural factor and one AI model, the takeaway is clear: there’s no magic formula for AI Mode visibility. Dan Taylor, partner and head of innovation (organic and AI) at SALT.agency, said:

  • “Our study confirms that there is no magic template or formula for increased visibility in AI Mode responses – and that AI Mode is not more likely to cite text from ‘above the fold.’ Instead, the best approach mirrors what’s worked in search for years: create well-structured, authoritative content that genuinely addresses the needs of your ideal customers.
  • “…the data clearly debunks the idea that where the information sits within a page has an impact on whether it will be cited.”

Why we care. The findings challenge the idea that AI-specific templates or rigid page structures drive better AI Mode visibility. Chasing “AI-optimized” layouts may distract from work that actually matters.

About the research. SALT analyzed 2,318 unique URLs cited in AI Mode responses for high-value queries across travel, ecommerce, and SaaS. Using a Chrome bookmarklet and a 1920×1080 viewport, researchers recorded the vertical pixel position of the first highlighted character in each AI-cited fragment. They also cataloged layouts and elements, such as hero sections, FAQs, accordions, and tables of contents.

The study. Research: Does Structuring Your Content Improve the Chances of AI Mode Surfacing?

A preview of ChatGPT’s ad controls just surfaced

9 February 2026 at 21:36
OpenAI ChatGPT ad platform

A newly discovered settings panel offers a first detailed look at how ads could work inside ChatGPT, including how personalization and privacy controls are designed.

Driving the news. Entrepreneur Juozas Kaziukėnas found a way to trigger ChatGPT’s upcoming ad settings interface. The panel repeatedly stresses that advertisers won’t see user chats, history, memories, personal details, or IP addresses.

What the settings reveal. The interface lays out a structured ad system with dedicated controls:

  • A history tab logs ads users have seen in ChatGPT.
  • An interests tab stores inferred preferences based on ad interactions and feedback.
  • Each ad includes options to hide or report it.
  • Users can delete ad history and interests separately from their general ChatGPT data.

Personalization options. Users can turn ad personalization on or off. When it’s on, ChatGPT uses saved ad history and interest signals to tailor ads. When it’s off, ads still appear but rely only on the current conversation for context.

  • There’s also an option to personalize ads using past conversations and memory features, though the interface stresses that chat content isn’t shared with advertisers. Accounts with memory disabled won’t see this option active.

Why we care. This settings panel offers the clearest view yet of how ad personalization and privacy controls could work with ChatGPT ads. It points to a system built on strict privacy boundaries. The controls suggest ads will rely on contextual signals and opt-in personalization, not deep user tracking. That shift makes creative relevance and in-conversation intent more important than traditional audience profiling for brands preparing for conversational ad environments.

The bigger picture. The discovery suggests OpenAI is building an ad system that mirrors familiar controls from major ad platforms while prioritizing clear privacy boundaries and user choice.

Bottom line. ChatGPT ads aren’t live yet, but the framework is coming into focus — pointing to a future where conversational ads come with granular privacy and personalization controls.

First seen. Kaziukėnas shared the preview of the platform on LinkedIn.

What Google and Microsoft patents teach us about GEO

9 February 2026 at 19:00

Generative engine optimization (GEO) represents a shift from optimizing for keyword-based ranking systems to optimizing for how generative search engines interpret and assemble information. 

While the inner workings of generative AI are famously complex, patents and research papers filed by major tech companies such as Google and Microsoft provide concrete insight into the technical mechanisms underlying generative search. By analyzing these primary sources, we can move beyond speculation and into strategic action.

This article analyzes the most insightful patents to provide actionable lessons for three core pillars of GEO: query fan-out, large language model (LLM) readability, and brand context.

Why researching patents is so important for learning GEO

Patents and research papers are primary, evidence-based sources that reveal how AI search systems actually work. The knowledge gained from these sources can be used to draw concrete conclusions about how to optimize these systems. This is essential in the early stages of a new discipline such as GEO.

Patents and research papers reveal technical mechanisms and design intent. They often describe retrieval architectures, such as: 

  • Passage retrieval and ranking.
  • Retrieval-augmented generation (RAG) workflows.
  • Query processing, including query fan-out, grounding, and other components that determine which content passages LLM-based systems retrieve and cite. 

Knowing these mechanisms explains why LLM readability, chunk relevance, and brand context signals matter.

Primary sources reduce reliance on hype and checklists. Secondary sources, such as blogs and lists, can be misleading. Patents and research papers let you verify claims and separate evidence-based tactics from marketing-driven advice.

Patents enable hypothesis-driven optimization. Understanding the technical details helps you form testable hypotheses, such as how content structure, chunking, or metadata might affect retrieval, ranking, and citation, and design small-scale experiments to validate them.

In short, patents and research papers provide the technical grounding needed to:

  • Understand why specific GEO tactics might work.
  • Test and systematize those tactics.
  • Avoid wasting effort on unproven advice.

This makes them a central resource for learning and practicing generative engine optimization and SEO. 

That’s why I’ve been researching patents for more than 10 years and founded the SEO Research Suite, the first database for GEO- and SEO-related patents and research papers.

How do you learn GEO

Why we need to differentiate when talking about GEO

In many discussions about generative engine optimization, too little distinction is made between the different goals that GEO can pursue.

One goal is improving the citability of LLMs so your content is cited more often as the source. I refer to this as LLM readability optimization.

Another goal is brand positioning for LLMs, so a brand is mentioned more often by name. I refer to this as brand context optimization.

Each of these goals relies on different optimization strategies. That’s why they must be considered separately.

Differentiating GEO

The three foundational pillars of GEO

Understanding the following three concepts is strategically critical. 

These pillars represent fundamental shifts in how machines interpret queries, process content, and understand brands, forming the foundation for advanced GEO strategies. 

They are the new rules of digital information retrieval.

LLM readability: Crafting content for AI consumption

LLM readability is the practice of optimizing content so it can be effectively processed, deconstructed, and synthesized by LLMs. 

It goes beyond human readability and includes technical factors such as: 

  • Natural language quality.
  • Logical document structure.
  • A clear information hierarchy.
  • The relevance of individual text passages, often referred to as chunks or nuggets.

Brand context: Building a cohesive digital identity

Brand context optimization moves beyond page-level optimization to focus on how AI systems synthesize information across an entire web domain. 

The goal is to build a holistic, unified characterization of a brand. This involves ensuring your overall digital presence tells a consistent and coherent story that an AI system can easily interpret.

Query fan-out: Deconstructing user intent

Query fan-out is the process by which a generative engine deconstructs a user’s initial, often ambiguous query into multiple specific subqueries, themes, or intents. 

This allows the system to gather a more comprehensive and relevant set of information from its index before synthesizing a final generated answer.

These three pillars are not theoretical. They are actively being built into the architecture of modern search, as the following patents and research papers reveal.

Patent deep dive: How generative engines understand user queries (query fan-out)

Before a generative engine can answer a question, it must first develop a clear understanding of the user’s true intent. 

The patents below describe a multi-step process designed to deconstruct ambiguity, explore topics comprehensively, and ensure the final answer aligns with a confirmed user goal rather than the initial keywords alone.

Microsoft’s ‘Deep search using large language models’: From ambiguous query to primary intent

Microsoft’s “Deep search using large language models” patent (US20250321968A1) outlines a system that prioritizes intent by confirming a user’s true goal before delivering highly relevant results. 

Instead of treating an ambiguous query as a single event, the system transforms it into a structured investigation.

The process unfolds across several key stages:

  • Initial query and grounding: The system performs a standard web search using the original query to gather context and a set of grounding results.
  • Intent generation: A first LLM analyzes the query and the grounding results to generate multiple likely intents. For a query such as “how do points systems work in Japan,” the system might generate distinct intents like “immigration points system,” “loyalty points system,” or “traffic points system.”
  • Primary intent selection: The system selects the most probable intent. This can happen automatically, by presenting options to the user for disambiguation, or by using personalization signals such as search history.
  • Alternative query generation: Once a primary intent is confirmed, a second LLM generates more specific alternative queries to explore the topic in depth. For an academic grading intent, this might include queries like “German university grading scale explained.”
  • LLM-based scoring: A final LLM scores each new search result for relevance against the primary intent rather than the original ambiguous query. This ensures only results that precisely match the confirmed goal are ranked highly.
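
Here’s a toy rendering of those stages with a stubbed “LLM,” just to make the control flow visible. The stub’s canned outputs and the scoring rule are assumptions for illustration; the patent describes the stages, not this code:

```python
# Toy walk-through of the deep-search stages with a stubbed LLM.

def stub_llm(task: str) -> list[str]:
    canned = {  # canned outputs standing in for real LLM calls
        "intents": ["immigration points system", "loyalty points system",
                    "traffic points system"],
        "alt_queries": ["Japan highly skilled professional visa points",
                        "Japan immigration points calculator"],
    }
    return canned.get(task, [])

def deep_search(query: str) -> list[tuple[float, str]]:
    grounding = [f"web result for: {query}"]   # 1. grounding search for context
    intents = stub_llm("intents")              # 2. intent generation
    primary = intents[0]                       # 3. primary intent selection
    alt_queries = stub_llm("alt_queries")      # 4. alternative query generation
    results = [f"result for: {q}" for q in alt_queries]
    # 5. score results against the confirmed primary intent,
    #    not the original ambiguous query (toy keyword check here)
    scored = [(1.0 if "visa" in r or "immigration" in r else 0.5, r)
              for r in results]
    return sorted(scored, reverse=True)

print(deep_search("how do points systems work in Japan"))
```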

The key insight from this patent is that search is evolving into a system that resolves ambiguity first. 

Final results are tailored to a user’s specific, confirmed goal, representing a fundamental departure from traditional keyword-based ranking.

Google’s ‘thematic search’: Auto-clustering topics from top results

Google’s “thematic search” patent (US12158907B1) provides the architectural blueprint for features such as AI Overviews. The system is designed to automatically identify and organize the most important subtopics related to a query. 

It analyzes top-ranked documents, uses an LLM to generate short summary descriptions of individual passages, and then clusters those summaries to identify common themes.

The direct implication is a shift from a simple list of links to a guided exploration of a topic’s most important facets. 

This process organizes information for users and allows the engine to identify which themes consistently appear across top-ranking documents, forming a foundational layer for establishing topical consensus.

Google’s ‘stateful chat’: Generating queries from conversation history

The concept of synthetic queries in Google’s “Search with stateful chat” patent (US20240289407A1) reveals another layer of intent understanding. 

The system generates new, relevant queries based on a user’s entire session history rather than just the most recent input. 

By maintaining a stateful memory of the conversation, the engine can predict logical next steps and suggest follow-up queries that build on previous interactions.

The key takeaway is that queries are no longer isolated events. Instead, they’re becoming part of a continuous, context-aware dialogue. 

This evolution requires content to do more than answer a single question. It must also fit logically within a broader user journey.

Patent deep dive: Crafting content for AI processing (LLM readability)

Once a generative engine has disambiguated user intent and fanned out the query, its next challenge is to find and evaluate content chunks that can precisely answer those subqueries. This is where machine readability becomes critical. 

The following patents and research papers show how engines evaluate content at a granular, passage-by-passage level, rewarding clarity, structure, and factual density.

The ‘nugget’ philosophy: Deconstructing content into atomic facts

The GINGER research paper introduces a methodology for improving the factual accuracy of AI-generated responses. Its core concept involves breaking retrieved text passages into minimal, verifiable information units, referred to as nuggets.

By deconstructing complex information into atomic facts, the system can more easily trace each statement back to its source, ensuring every component of the final answer is grounded and verifiable.

The lesson from this approach is clear: Content should be structured as a collection of self-contained, fact-dense nuggets. 

Each paragraph or statement should focus on a single, provable idea, making it easier for an AI system to extract, verify, and accurately attribute that information.
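
A crude way to see the idea in code: split a draft answer into sentence-level claims and keep only those traceable to a source passage. Real systems use semantic matching; the token-overlap check below is only there to make the concept concrete:

```python
# Toy nugget check: every sentence-level claim must be traceable
# to a source passage, or it gets dropped.

def nuggets(text: str) -> list[str]:
    return [s.strip() for s in text.split(".") if s.strip()]

def grounded(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    words = set(claim.lower().split())
    return any(
        len(words & set(src.lower().split())) / len(words) >= threshold
        for src in sources
    )

sources = ["The sky appears blue because of Rayleigh scattering in the atmosphere"]
draft = "The sky appears blue because of Rayleigh scattering. It was painted by aliens."

for claim in nuggets(draft):
    print("KEEP" if grounded(claim, sources) else "DROP", "-", claim)
```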

Google’s span selection: Pinpointing the exact answer

Google’s “Selecting answer spans” patent (US11481646B2) describes a system that uses a multilevel neural network to identify and score specific text spans, or chunks, within a document that best answer a given question. 

The system evaluates candidate spans, computes numeric representations based on their relationship to the query, and assigns a final score to select the single most relevant passage.

The key insight is that the relevance of individual paragraphs is evaluated with intense scrutiny. This underscores the importance of content structure, particularly placing a direct, concise answer immediately after a question-style heading. 

The patent provides the technical justification for the answer-first model, a core principle of modern GEO strategy.

The consensus engine: Validating answers with weighted terms

Google’s “Weighted answer terms” patent (US10019513B1) explains how search engines establish a consensus around what constitutes a correct answer.

This patent is closely associated with featured snippets, but the technology Google developed for them is also one of the foundational methodologies behind the passage-based retrieval that AI search systems use today to select passages for answers.

The system identifies common question phrases across the web, analyzes the text passages that follow them, and creates a weighted term vector based on terms that appear most frequently in high-quality responses. 

For a query such as “Why is the sky blue?” terms like “Rayleigh scattering” and “atmosphere” receive high weights.
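As a toy illustration of that weighting idea (this is not the patent’s actual math, and the passages are invented):

from collections import Counter

# Text passages that follow the same question phrase across high-quality pages
passages = [
    "the sky is blue because of rayleigh scattering in the atmosphere",
    "rayleigh scattering by the atmosphere affects blue wavelengths most",
    "blue light scattered by molecules in the atmosphere is called rayleigh scattering",
]

# Weight each term by the share of passages it appears in (document frequency);
# a real system would also drop stopwords and merge term variants
doc_freq = Counter()
for passage in passages:
    doc_freq.update(set(passage.split()))

weights = {term: count / len(passages) for term, count in doc_freq.items()}
for term, weight in sorted(weights.items(), key=lambda kv: -kv[1])[:5]:
    print(term, round(weight, 2))
# Consensus terms like "rayleigh," "scattering," and "atmosphere" rise to the top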

The key lesson is that to be considered an accurate and authoritative source, content must incorporate the consensus terminology used by other expert sources on the topic. 

Deviating too far from this established vocabulary can cause content to be scored poorly for accuracy, even when it is factually correct.

Patent deep dive: Building your brand’s digital DNA (brand context)

While earlier patents focus on the micro level of queries and content chunks, this final piece operates at the macro level. The engine must understand not only what is being said but also who is saying it. 

This is the essence of brand context, representing a shift from optimizing individual pages to projecting a coherent brand identity across an entire domain. 

The following patent shows how AI systems are designed to interpret an entity by synthesizing information from across its full digital presence.

Google’s entity characterization: The website as a single prompt

The methodology described in Google’s “Data extraction using LLMs” patent (WO2025063948A1) outlines a system that treats an entire website as a single input to an LLM. The system scans and interprets content from multiple pages across a domain to generate a single, synthesized characterization of the entity. 

This is not a copy-and-paste summary but a new interpretation of the collected information that is better suited to an intended purpose, such as an ad or summary, while still passing quality checks that verbatim text might fail.

The patent also explains that this characterization is organized into a hierarchical graph structure with parent and leaf nodes, which has direct implications for site architecture:

Each patent concept maps to a corresponding GEO strategy:

  • Parent nodes (broad attributes like “Services”): Create broad, high-level “hub” pages for core business categories (e.g., /services/).
  • Leaf nodes (specific details like “Pricing”): Develop specific, granular “spoke” pages for detailed offerings (e.g., /services/emergency-plumbing/).

The key implication is that every page on a website contributes to a single brand narrative.

Inconsistent messaging, conflicting terminology, or unclear value propositions can cause an AI system to generate a fragmented and weak entity characterization, reducing a brand’s authority in the system’s interpretation.

The GEO playbook: Actionable lessons derived from the patents

These technical documents aren’t merely theoretical. They provide a clear, actionable playbook for aligning content and digital strategy with the core mechanics of generative search. The principles revealed in these patents form a direct guide for implementation.

Principle 1: Optimize for disambiguated intent, not just keywords

Based on the “Deep Search” and “Thematic Search” patents, the focus must shift from targeting single keywords to comprehensively answering the specific, disambiguated intents a user may have.

Actionable advice 

  • For a target query, brainstorm the different possible user intents. 
  • Create distinct, highly detailed content sections or separate pages for each one, using clear, question-based headings to signal the specific intent being addressed.

Principle 2: Structure for machine readability and extraction

Synthesizing lessons from the GINGER paper, the “answer spans” patent, and LLM readability guidance, it’s clear that structure is critical for AI processing.

Actionable advice

Apply the following structural rules to your content:

  • Use the answer-first model: Structure content so the direct answer appears immediately after a question-style heading. Follow with explanation, evidence, and context.
  • Write in nuggets: Compose short, self-contained paragraphs, each focused on a single, verifiable idea. This makes each fact easier to extract and attribute.
  • Leverage structured formats: Use lists and tables whenever possible. These formats make data points and comparisons explicit and easily parsable for an LLM.
  • Employ a logical heading hierarchy: Use H1, H2, and H3 tags to create a clear topical map of the document. This hierarchy helps an AI system understand the context and scope of each section.
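If you want to audit the answer-first rule across existing pages, a short script can flag question-style headings that aren’t immediately followed by a paragraph. A rough sketch using BeautifulSoup (the h2/h3 scope and the question-word test are simplifying assumptions, not part of any patent):

from bs4 import BeautifulSoup

# Load a page and flag question headings with no immediate paragraph answer
soup = BeautifulSoup(open("page.html").read(), "html.parser")

QUESTION_WORDS = {"how", "what", "why", "when", "which", "who", "should"}

for heading in soup.find_all(["h2", "h3"]):
    text = heading.get_text(strip=True)
    first_word = text.split()[0].lower() if text.split() else ""
    if not (text.endswith("?") or first_word in QUESTION_WORDS):
        continue  # not a question-style heading
    answer = heading.find_next_sibling()
    if answer is None or answer.name != "p":
        print(f"No direct answer after: {text}")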

Principle 3: Build a unified and consistent entity narrative

Drawing directly from the “Data extraction using LLMs” patent, domainwide consistency is no longer a nice-to-have. It’s a technical requirement for building a strong brand context.

Actionable advice

  • Conduct a comprehensive content audit. 
  • Ensure mission statements, service descriptions, value propositions, and key terminology are used consistently across every page, from the homepage to blog posts to the site footer.

Principle 4: Speak the language of authoritative consensus

The “Weighted answer terms” patent shows that AI systems validate answers by comparing them against an established consensus vocabulary.

Actionable advice

  • Before writing, analyze current featured snippets, AI Overviews, and top-ranking documents for a given query. 
  • Identify recurring technical terms, specific nouns, and phrases they use. 
  • Incorporate this consensus vocabulary to signal accuracy and authority.

Principle 5: Mirror the machine’s hierarchy in your architecture

The parent-leaf node structure described in the entity characterization patent provides a direct blueprint for effective site architecture.

Actionable advice

  • Design site architecture and internal linking to reflect a logical hierarchy. Broad parent category pages should link to specific leaf detail pages. 
  • This structure makes it easier for an LLM to map brand expertise and build an accurate hierarchical graph.

These five principles aren’t isolated tactics. 

They form a single, integrated strategy in which site architecture reinforces the brand narrative, content structure enables machine extraction, and both align to answer a user’s true, disambiguated intent.

Aligning with the future of information retrieval

Patents and research papers from the world’s leading technology companies offer a clear view of the future of search. 

Generative engine optimization is fundamentally about making information machine-interpretable at two critical levels: 

  • The micro level of the individual fact, or chunk.
  • The macro level of the cohesive brand entity. 

By studying these documents, you can shift from a reactive approach of chasing algorithm updates to a proactive one of building digital assets aligned with the core principles of how generative AI understands, structures, and presents information.

Why GA4 alone can’t measure the real impact of AI SEO

9 February 2026 at 18:00

If you’re relying on GA4 alone to measure the impact of AI SEO, you’re navigating with a broken compass.

Don’t misunderstand me. It’s a reasonable launch pad. But to understand how audiences discover, evaluate, and ultimately choose brands, measurement must move beyond the bounds of Google’s tooling.

SEO is a journey, not a destination. If you optimize only for attributable visits, large parts of that journey disappear from view.

Sessions are an outcome. They can’t contextualize consideration sets increasingly shaped by algorithms and AI well before a visit ever happens.

Don’t lose potential customers in the Bermuda Triangle of traditional SEO measurement. Harness the power of share of voice to steer user intent. Guide them to you by mapping your brand visibility in AI analytics.

Measuring AI visits with GA4

Links are becoming more prevalent in AI systems. Traffic is climbing. GA4 makes it easy to set up a custom report to track these sessions.

Create an exploration with “session source / medium” as the dimension and “sessions” as the metric. Then apply this regex filter on the referrer:

.*(chatgpt|openai|claude|gemini|bard|copilot|perplexity|you\.com|meta\.ai|grok|huggingface|deepseek|mistral|manus|alexaplus|edgeservices|poe).*
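If you’d like to sanity-check the pattern before building the exploration, it can be tested in a few lines of Python (the sample referrer strings below are made up for the test):

import re

# The same AI referrer pattern, ready to test against sample values
AI_REFERRER = re.compile(
    r".*(chatgpt|openai|claude|gemini|bard|copilot|perplexity|you\.com|"
    r"meta\.ai|grok|huggingface|deepseek|mistral|manus|alexaplus|edgeservices|poe).*"
)

samples = ["chatgpt.com / referral", "perplexity.ai / referral", "google / organic"]
for referrer in samples:
    print(referrer, "->", "AI" if AI_REFERRER.match(referrer) else "not AI")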

Don’t be concerned if the output report is messy. That’s normal. Many AI systems send multiple sets of partial referral information. Some send none at all, so sessions appear as dark traffic.

This report is an easy first step. But don’t be fooled into thinking it can measure the impact of AI on your brand on its own.

The most viewed AI outputs – Google’s AI Overviews and AI Mode – can’t be seen here. They are attributed to either “google / organic” or “(direct) / (none),” depending on how the user accessed Google.

With these limitations, looking only at GA4 traffic from generative AI is not a holistic enough data source to understand the reality of usage by your target audience and the impact on your brand.

Other data sources are needed.

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Google Search Console and Bing Webmaster Tools don’t separate AI queries

Bing Webmaster Tools technically reports Copilot data. But in the most Microsoftesque fashion, chat is combined with web metrics, obscuring the chat data and making the report ineffective for understanding the impact of generative AI.

This approach laid the foundation for Google Search Console to do the same. AI Overviews and AI Mode impressions and clicks are lumped in with Search, and the Gemini app is not included at all.

What you can do is look for more conversational-style queries using a Google Search Console regex, such as:

^(who|what|whats|when|where|wheres|why|how|which|should)\b|.*\b(benefits of|difference between|advantages|disadvantages|examples of|meaning of|guide to|vs|versus|compare|comparison|alternative|alternatives|types of|ways to|tips|pros|cons|worth it|best|top)\b.*

But this is becoming less valuable as query fan-out becomes the standard, making synthetic queries indistinguishable from human queries while inflating impression numbers.

Worse, both GSC and BWT will become increasingly myopic as websites are bypassed by MCP connections or accessed directly by AI agents.

Again, other data sources are needed.

AI agent analytics with log files

Both Google and ChatGPT offer AI agents that can browse and, with permission, convert on a human’s behalf.

When an AI agent uses a text-based browser, it can’t be tracked by cookie-based analytics.

If the agent switches to a visual browser, it often accepts cookies (78% of the time in my testing). But this creates problems in GA4:

  • Odd engagement metrics. These are agent behaviors, not human ones.
  • An unnatural resurgence of desktop traffic. Agents use desktop browsers exclusively.
  • An uptick in Chrome. Agents run on Chromium.

On the plus side, agentic conversions are recorded, but they are attributed to direct traffic.

As a result, many SEOs are turning to bot logs, where AI agent requests can be identified. But those requests are not a headcount of humans sending agents to complete tasks.

AI agents - bot logs

When an agent renders a page in a visual browser, it fires multiple requests for every asset. CSS. JS. Images. Fonts. A bloated front end equals inflated request counts, making raw volume a vanity metric.

The insight lies not in totals, but in paths.

Most popular paths by crawler

Follow the request flow through the site to the conversion success page. If there are plenty of requests but none reach the conversion path, you know the journey is broken.
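As a rough sketch of that analysis (the file name, log format, and conversion path are all assumptions; adapt them to your own setup):

from collections import defaultdict

CONVERSION_PATH = "/checkout/success"  # hypothetical success page

# Group AI agent requests into rough sessions by client IP
sessions = defaultdict(list)
with open("ai_agent_requests.log") as log:  # pre-filtered to AI agent user agents
    for line in log:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed lines
        ip, path = fields[0], fields[6]  # combined log format; positions vary
        # Ignore asset requests so render noise doesn't inflate the picture
        if path.endswith((".css", ".js", ".png", ".jpg", ".woff2")):
            continue
        sessions[ip].append(path)

reached = sum(1 for paths in sessions.values() if CONVERSION_PATH in paths)
print(f"{reached} of {len(sessions)} agent sessions reached the conversion path")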

Dig deeper: How to segment traffic from LLMs in GA4

Traditional SEO reporting isn’t up to the task of tracking AI

To track the impact of AI SEO, you need to reassess your reporting. 

AI SEO’s benefits extend beyond the bounds of GA4, Google Search Console, and log file analysis, all of which assume the user reached your website directly from an AI surface. That isn’t required for brand value.

Many SEO tools are now adding AI tracking, or are built entirely around it. The methodology is imperfect, with chat outcomes that are probabilistic, not deterministic. It’s similar to running focus groups.

With an unbiased sample, unbiased prompts, and regular testing, the resulting trends are valuable, even if any individual response is not. They reveal the set of brands an AI system associates with a given intent, forming a consensus view of a credible consideration set.

But AI search analytics tools are not all created equal.

Make sure your tool tracks not only website citations, but also in-chat brand mentions and citations of brand assets such as social media profiles, videos, map listings, and apps.

These are no less valuable than a website link. Recognizing this reflects SEO’s growth.

As an industry, we are returning to meaningful marketing KPIs like share of voice. Understanding brand visibility for relevant intents is what ultimately drives market share.

It’s not SEO’s job to optimize a website. It’s to build a well-known, top-rated, and trusted digital brand. That is the foundation of visibility across every organic surface.

How to diagnose and fix the biggest blocker to PPC growth

9 February 2026 at 17:00
Why PPC optimization fails to scale and how to find the real constraint

We’ve all been there. A client wants to scale their Google Ads account from €10,000 per month to €100,000. So, you do what any good PPC manager would do:

  • Refine your bidding strategy.
  • Test new ad copy variations.
  • Expand your keyword portfolio.
  • Optimize landing pages.
  • Improve Quality Scores.
  • Launch Performance Max campaigns.

Three months later, you’ve increased ad spend by 15%. The client is… fine with it. But you know you should be doing better.

Here’s the uncomfortable truth: Most pay-per-click (PPC) optimization work is sophisticated procrastination.

What the theory of constraints teaches us about PPC

The theory of constraints, developed by Eliyahu Goldratt for manufacturing systems, reveals something counterintuitive. Every system is limited by exactly one bottleneck at any given time.

Making your marketing team twice as efficient won’t help if production capacity is the constraint. Similarly, improving your ad copy click-through rate (CTR) by 20% won’t move the needle if your real constraint is budget approval or landing page conversion rate.

The theory demands radical focus. Identify the single weakest link and treat everything else as less important.

Applied to PPC, this means: Stop optimizing everything. Find your number one constraint. Fix only that and then move on.

7 constraints that prevent PPC scaling

In my years of managing PPC accounts, I’ve found that almost every scaling challenge falls into one of seven categories:

1. Budget

Signal: You could profitably spend more, but you’re capped by client approval.

Example: Your campaigns are profitable at €10,000 per month with room to spend €50,000, but your client won’t approve the additional budget. Sometimes it’s risk aversion, but other times it’s a cash flow issue. 

The fix: Build a business case demonstrating profitability with a higher spend. Show historical return on ad spend (ROAS), competitive benchmarks, and projected returns.

What to ignore: Avoid ad copy testing, keyword expansion, bidding optimization, and new campaigns. None of this matters if you can’t spend more money anyway.

Dig deeper: PPC campaign budgeting and bidding strategies

2. Impression share

Signal: You’re already capturing 90%+ impression share and can’t buy more traffic.

Example: You’re targeting a niche B2B market with only 1,000 relevant searches per month.

The fix: Expand to related keywords or use broader match types. Alternatively, enter new geographic markets or add complementary platforms like Microsoft Ads or LinkedIn Ads.

What to ignore: Don’t worry about bidding optimization, since you’re already buying almost all available impressions.

3. Creative

Signal: You have high impression share but low CTRs, resulting in a premium cost per click (CPC).

Example: You’re showing ads on 80% of searches, but CTR is 2% when the industry average is 5%.

The fix: Aggressively test ad copy, sharpen message-market fit, and develop more compelling creative.

What to ignore: Avoid keyword expansion. Your ads are already visible; they just aren’t getting clicks.

4. Conversion rate

Signal: You’re generating strong traffic volume and acceptable CPC, but terrible conversion rates.

Example: You’re getting 10,000 clicks per month. But you have a 1% conversion rate when you should be getting 5%+.

The fix: Optimize landing pages, improve offers, and refine sales funnels.

What to ignore: Don’t launch more traffic campaigns. You’re already wasting the traffic you have.

5. Fulfillment

Signal: Your campaigns could generate more leads. But the client’s sales or operations team can’t handle more.

Example: You’re generating 500 leads per month, but sales can only process 100.

The fix: This is a client operations issue, not a PPC issue. Help them identify it, but know that the solution lies outside your control. Do more business consulting for your client while maintaining the current PPC level.

What to ignore: All PPC optimization. Pause it, as the system can’t absorb more volume.

6. Profitability

Signal: You can scale volume, but cost per acquisition (CPA) is too high to be profitable.

Example: You need €50 CPA to break even, but you’re currently at €80 CPA.

The fix: Improve unit economics through better targeting or creative optimization. Alternatively, help the client rethink their pricing or improve customer lifetime value (LTV).

What to ignore: Set aside volume tactics until the economics work at the current scale.

7. Tracking or attribution

Signal: Attribution is broken, so you can’t confidently scale the campaign.

Example: You’re seeing complex multi-touch customer journeys where you can’t definitively prove PPC’s contribution.

The fix: Implement better tracking and test different tracking stacks (e.g., server-side, fingerprinting, or cookie-based). You can also update your attribution modeling or develop first-party data capabilities.

What to ignore: Avoid scaling any channel until you know what actually drives results.

Dig deeper: How to track and measure PPC campaigns

The diagnostic framework

Identifying your constraint requires methodical analysis rather than gut feeling. Here’s how to uncover what’s holding your account back.

Run an audit

Start by benchmarking critical metrics:

  • Impression share: If you’re capturing less than 50% of available impressions, your constraint is likely budget or bids preventing you from competing effectively.
  • CTR: Performance below industry benchmarks signals a creative constraint where your messaging isn’t resonating with searchers.
  • CPC: Unusually high CPCs often indicate a Quality Score constraint, which reflects poor ad relevance or landing page experience.
  • Conversion rate: If this metric lags compared to historical performance or industry standards, your constraint is the landing page.
  • Search volume: If you’ve already captured the majority of relevant searches, your constraint is inventory exhaustion.

Don’t overlook operational metrics either. Check fulfillment capacity by determining how many leads your client’s team can handle per month.

Finally, document your approved budget against what you could profitably spend. If there’s a sizable difference, budget approval is your primary constraint.
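To make the audit concrete, the checks above can collapse into a single diagnostic helper. A sketch only; every field name and threshold below is a placeholder for your own benchmarks:

def diagnose_constraint(m: dict) -> str:
    """Return the most likely PPC constraint from audit metrics.
    Thresholds are illustrative placeholders, not benchmarks."""
    if m["approved_budget"] < m["profitable_spend"]:
        return "budget"
    if m["impression_share"] >= 0.90:
        return "impression share"
    if m["ctr"] < m["industry_ctr"]:
        return "creative"
    if m["conversion_rate"] < m["benchmark_cvr"]:
        return "conversion rate"
    if m["monthly_leads"] > m["leads_sales_can_handle"]:
        return "fulfillment"
    return "profitability or tracking (dig deeper)"

print(diagnose_constraint({
    "approved_budget": 10_000, "profitable_spend": 50_000,
    "impression_share": 0.60, "ctr": 0.04, "industry_ctr": 0.05,
    "conversion_rate": 0.02, "benchmark_cvr": 0.05,
    "monthly_leads": 120, "leads_sales_can_handle": 100,
}))  # -> budget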

Ask the critical question

With your audit complete, resist the temptation to create a prioritized list. Instead, force yourself to answer one question: “If I could only fix one of these metrics, which would unlock 10x growth?”

That single metric is your constraint. Everything else, regardless of how suboptimal it appears, is secondary until you’ve broken through this bottleneck.

Apply radical focus

Once you’ve identified your primary constraint, it’s time to change your entire approach. This is where marketers tend to fail. They acknowledge the constraint but continue hedging their efforts across multiple fronts.

Why constraints are dynamic (and why that’s good)

Understanding constraint theory means recognizing that bottlenecks shift as you scale.

Consider a typical scaling journey. In month one, you’re stuck at a €10,000 monthly budget despite profitable performance metrics.

Your constraint is budget: leadership won’t approve more ad spend. You build the business case, secure approval, and immediately scale to €30,000 monthly spend.

Success, right? Not quite. You’ve just revealed the next constraint.

By month two, you’re capturing 95% of core keyword inventory. Your new constraint is impression share, as you’ve exhausted available traffic in your primary audience.

The fix is to expand to related terms and broader match types to bring new searchers into your funnel. This expansion takes you to €50,000 per month.

Month three presents a new challenge. Your expanded traffic converts at 2% while your original core traffic maintains 5% conversion rates. Your constraint has shifted to conversion rate.

The broader audience needs different messaging or a modified landing page experience. So, you focus exclusively on improving the post-click experience until conversion rate recovers to 4%. This lets you scale to €80,000 per month.

By month four, your sales team is drowning in 500 leads per month, which is more than they can effectively manage. Your constraint shifts from the PPC account to fulfillment capacity. The client hires additional sales staff to handle volume, and you scale to €120,000 per month.

Each new constraint is proof you’ve graduated to the next level. Many accounts never experience the problem of fulfillment constraints because they never break through the earlier barriers of budget and inventory.

Common traps to avoid when scaling PPC

The ‘optimize everything’ approach

When you try to optimize everything, you might spend:

  • 10 hours optimizing ad copy (+0.2% CTR)
  • 10 hours improving landing page (+0.5% CVR)
  • 10 hours refining bid strategy (+3% efficiency)

After investing 30 hours, you only achieve 5% account growth.

Instead, identify the primary constraint (e.g., conversion rate). Then, invest all 30 hours in landing page optimization. Continue to monitor your conversion rate.

Shiny object syndrome

Say the client caps your budget at €10,000. But you spend 20 hours testing Performance Max because it’s new and interesting.

After running those tests, you achieve zero scale. And your budget is still capped at €10,000.

Instead, recognize that your primary constraint is budget approval. Build a business case, secure approval, and start scaling immediately.

Analysis paralysis

If you wait for perfect Google Analytics 4 tracking before scaling, competitors may move forward with good-enough attribution.

This can mean losing six months with no scale.

Aim for 80% accurate tracking. Perfect attribution is rarely the actual constraint.

How to implement the theory of constraints in your agency or in-house team

For your next client strategy call

Don’t say: “We’ll optimize your campaigns across multiple dimensions (bidding, creative, targeting) and see what drives the best results.”

Instead, say this: “Before we optimize anything, I need to diagnose your constraint. Once I identify it, I’ll focus exclusively on fixing that bottleneck while maintaining everything else. When it’s resolved, we’ll tackle the next constraint. This is how we’ll reach your goals.”

For your team

Implement a Constraint Monday ritual. Every Monday, each account manager identifies the primary constraint for their top three accounts. The team focuses the week’s efforts on moving those specific constraints.

On Friday, review the results. Did the constraint move?

  • If yes, what’s the new constraint?
  • If not, you had the wrong diagnosis. Try again.

From tactical to strategic PPC scaling

The difference between a good PPC manager and a great one isn’t technical skill. Instead, it’s the ability to identify constraints.

Good PPC managers optimize everything and achieve incremental gains. Great PPC managers identify the one thing preventing scale and fix only that, achieving exponential gains.

When you master the theory of constraints, you stop being seen as a tactical campaign manager and start being recognized as a strategic growth partner.

You’re no longer reporting on CTR improvements and Quality Score gains. You’re diagnosing business constraints and unlocking growth that seemed impossible.

That’s the shift that transforms PPC careers and accounts.

Amanda Farley talks broken pixels and calm leadership

7 February 2026 at 06:27

On episode 340 of PPC Live The Podcast, I speak to Amanda Farley, CMO of Aimclear and a multi-award-winning marketing leader who brings a mix of honesty and expertise to the PPC Live conversation. A self-described T-shaped marketer, she combines deep PPC knowledge with broad experience across social, programmatic, PR, and integrated strategy. Her journey, from owning a gallery and tattoo studio to leading award-winning global campaigns, reflects a career built on curiosity, resilience, and continuous learning.

Overcoming limiting beliefs and embracing creativity

Amanda once ran a gallery and tattoo parlor while believing she wasn’t an artist herself. Surrounded by creatives, she eventually realized her only barrier was a limiting belief. After embracing painting, she created hundreds of artworks and discovered a powerful outlet for expression.

This mindset shift mirrors marketing growth. Success isn’t just technical — it’s mental. By challenging internal doubts, marketers can unlock new skills and opportunities.

When campaign infrastructure breaks: A high-stakes lesson

Amanda recalls a global campaign where tracking infrastructure failed across every channel mid-flight. Pixels broke, data vanished, and campaigns were running blind. Multiple siloed teams and a third-party vendor slowed resolution while budgets continued to spend.

Instead of assigning blame, Amanda focused on collaboration. Her team helped rebuild tracking and uncovered deeper data architecture issues. The crisis led to stronger onboarding processes, earlier validation checks, and clearer expectations around data hygiene. In modern PPC, clean infrastructure is essential for machine learning success.

The hidden importance of PPC hygiene

Many account audits reveal the same problem: neglected fundamentals. Basic settings errors and poorly maintained audience data often hurt performance before strategy even begins.

Outdated lists and disconnected data systems weaken automation. In a machine-learning environment, strong data hygiene ensures campaigns have the quality signals they need to perform.

Why integrated marketing is no longer optional

Amanda’s background in psychology and SEO shaped her integrated approach. PPC touches landing pages, user experience, and sales processes. When conversions drop, the issue may lie outside the ad account.

Understanding the full customer journey allows marketers to diagnose problems holistically. For Amanda, integration is a practical necessity, not a buzzword.

AI, automation, and the human factor

While AI dominates industry conversations, Amanda stresses balance. Some tools are promising, but not all are ready for full deployment. Testing is essential, but human oversight remains critical.

Machines optimize patterns, but humans judge emotion, messaging, and brand fit. Marketers who study changing customer journeys can also find new opportunities to intercept audiences across channels.

Building a culture that welcomes mistakes

Amanda believes leaders act as emotional barometers. Calm investigation beats reactive blame when issues arise. Many PPC problems stem from external changes, not individual failure.

By acknowledging stress and focusing on solutions, leaders create psychological safety. This environment encourages experimentation and turns mistakes into learning opportunities.

Testing without fear in a changing landscape

Marketing is entering another experimental era with no clear rulebook. Amanda encourages teams to dedicate budget to testing and lean on professional communities for insight.

Not every experiment will succeed, but each provides data that informs smarter future decisions.

The Tasmanian Devil who practices yoga

Amanda describes her career as If the Tasmanian Devil Could Do Yoga — a blend of fast-paced chaos and intentional calm. It reflects modern marketing: demanding, unpredictable, and balanced by thoughtful leadership.

Amanda Farley shares lessons on overcoming setbacks and balancing AI with human insight in modern marketing leadership.

The latest jobs in search marketing

7 February 2026 at 00:02
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • About Us At Ideal Living, we believe everyone has a right to pure water, clean air, and a solid foundation for wellness. As the parent company of leading wellness brands AirDoctor and AquaTru, we help bring this mission to life daily through our award-winning, innovative, science-backed products. For over 25 years, Los Angeles-based Ideal Living […]
  • About US: Abacus Business Computer (abcPOS) is a New York City-based technology company specializing in comprehensive point-of-sale (POS) systems and integrated payment solutions. With over 30 years of industry expertise, abcPOS offers an all-in-one platform that combines POS systems, merchant services, and growth-focused marketing tools. Serving more than 6,000 businesses and supporting over 40,000 devices, […]
  • Responsibilities: Execute full on-page SEO optimization (titles, meta, internal linking, structure) Deliver Local SEO improvements (Google Business Profile optimization, citations) Perform technical SEO audits and implement clear action plans Conduct keyword research for competitive local markets Build and manage SEO content plans focused on ranking and leads Provide monthly reporting with measurable ranking + traffic […]
  • Job/Role Overview: We’re hiring a modern digital marketer who understands that today’s marketing is AI-assisted, data-driven, and constantly evolving. This role is ideal for a recent college graduate or early-career professional trained in today’s digital and AI-focused programs – not outdated marketing playbooks. If you actively use AI tools, enjoy testing ideas, and think in […]
  • Job Description Job Title: Graphic Design & Digital Marketing Specialist Location: Hybrid / Remote (Huntersville, NC preferred) Employment Type: Full Time About Everblue Everblue is a mission-driven company dedicated to transforming careers and improving organizational efficiency. We provide training, certifications, and technology-driven solutions for contractors, government agencies, and nonprofits. Our work modernizes outdated processes, enhances […]
  • 📌 Job Title: On-Page SEO Specialist 📅 Experience: 5+ Years ⏰ Schedule: 8 AM – 5 PM CST 💰 Compensation: $10-$15/hour (based on experience) 🏡 Fully Remote | Full-time Contract Position 🌟 Job Overview We’re looking for a seasoned On-Page SEO Specialist to optimize and enhance our website’s on-page SEO performance while driving multi-location performance […]
  • Job Description MID AMERICA GOLF AND MID AMERICA SPORTS CONSTRUCTION is a leading provider of Golf and Sports construction services and synthetic turf installations, specializing in high-quality residential and commercial projects. We pride ourselves on transforming spaces with durable, eco-friendly solutions that enhance aesthetics and functionality. We’re seeking a dynamic marketing professional to elevate our […]
  • About Us Would you like to be part of a fast-growing team that believes no one should have to succumb to viral-mediated cancers? Naveris, a commercial stage, precision oncology diagnostics company with facilities in Boston, MA and Durham, NC, is looking for a Senior Digital Marketing Associate team member to help us advance our mission […]
  • About the Role We’re looking for a data-driven Marketing Strategist to support leadership and assist with optimizing our paid and organic growth efforts. This role sits at the intersection of PPC strategy, SEO execution, and performance analysis—ideal for someone who loves turning insights into measurable results. You’ll be responsible for documenting, executing, and optimizing campaigns […]
  • Job Description Salary: $75,000-$90,000 Hanson is seeking a data-driven strategist to join our team as a Digital Marketing Strategist. This role bridges the gap between marketing strategy, analytics and technology to help ensure our clients websites and digital tools perform at their highest potential. Youll work closely with cross-functional teams to optimize digital experiences, drive […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Summary If you are a person that has work ethic, wants to really grow a company along with personal and financial may be your company. We are seeking a dynamic and creative Social Media and Marketing Specialist to lead our digital marketing efforts. This role involves developing and executing innovative social media strategies, managing […]
  • About Rock Salt Marketing Rock Salt Marketing was founded in 2023 by digital marketing experts that wanted to break from the industry norms by treating people right and providing the quality services that clients expect for honest fees. At Rock Salt Marketing, we prioritize our relationships with both clients and team members, and are committed […]
  • Type: Remote (Full-Time) Salary: Up to $1,500/month (MAX) Start: Immediate Responsibilities Launch and manage Meta Ads campaigns (Facebook/Instagram) Launch and manage Google Ads Search campaigns Build retargeting + conversion tracking systems Daily optimization focused on ROI and lead quality Manage multiple client accounts under performance expectations Weekly reporting with clear actions and next steps Requirements […]
  • Job Description At Reltio®, we believe data should fuel business success. Reltio’s AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain Data Unification, and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it’s needed, empowering data and analytics leaders with unparalleled […]
  • Job Description Paid Media Manager Location: Dallas, TX (In-Office) Compensation: $60,000–$65,000 base salary (commensurate with experience) About the Opportunity Symbiotic Services is partnering with a growing digital marketing agency to identify a Paid Media Manager for an in-office role in Dallas. This position is hands-on and execution-focused, supporting multiple client accounts while collaborating closely with […]

Other roles you may be interested in

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

Paid Search Director, Grey Matter Recruitment (Remote)

  • Salary: $130,000 – $150,000
  • Own the activation and execution of Paid Search & Shopping activity across the Google Suite
  • Support wider eCommerce, Search and Digital team on strategy and plans

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience Managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

SEM (Search Engine Marketing) Manager, Tribute Technology (Remote)

  • Salary: $85,000 – $90,000
  • PPC Campaign Management: Execute and optimize multiple Google Ad campaigns and accounts simultaneously.
  • SEO Strategy Management: Develop and manage on-page SEO strategies for client websites using tools like Ahrefs.

Search Engine Optimization Manager, Robert Half (Hybrid, Boston, MA)

  • Salary: $150,000 – $160,000
  • Strategic Leadership: Define and lead the strategy for SEO, AEO, and LLMs, ensuring alignment with overall business and product goals.
  • Roadmap Execution: Develop and implement the SEO/AEO/LLM roadmap, prioritizing performance-based initiatives and driving authoritative content at scale.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Performance Max built-in A/B testing for creative assets spotted

6 February 2026 at 23:29

Google is rolling out a beta feature that lets advertisers run structured A/B tests on creative assets within a single Performance Max asset group. Advertisers can split traffic between two asset sets and measure performance in a controlled experiment.

Why we care. Creative testing inside Performance Max has mostly relied on guesswork. Google’s new native A/B asset experiments bring controlled testing directly into PMax — without spinning up separate campaigns.

How it works. Advertisers choose one Performance Max campaign and asset group, then define a control asset set (existing creatives) and a treatment set (new alternatives). Shared assets can run across both versions. After setting a traffic split — such as 50/50 — the experiment runs for several weeks before advertisers apply the winning assets.

Why this helps. Running tests inside the same asset group isolates creative impact and reduces noise from structural campaign changes. The controlled split gives clearer reporting and helps teams make rollout decisions based on performance data rather than assumptions.

Early lessons. Initial testing suggests short experiments — especially under three weeks — often produce unstable results, particularly in lower-volume accounts. Longer runs and avoiding simultaneous campaign changes improve reliability.

Bottom line. Performance Max is becoming more testable. Advertisers can now validate creative decisions with built-in experiments instead of relying on trial and error.

First seen. A Google Ads expert spotted the update and shared his view on LinkedIn.

Google Ads adds a diagnostics hub for data connections

6 February 2026 at 22:52

Google Ads rolled out a new data source diagnostics feature in Data Manager that lets advertisers track the health of their data connections. The tool flags problems with offline conversions, CRM imports, and tagging mismatches.

How it works. A centralized dashboard assigns clear connection status labels — Excellent, Good, Needs attention, or Urgent — and surfaces actionable alerts. Advertisers can spot issues like refused credentials, formatting errors, and failed imports, alongside a run history that shows recent sync attempts and error counts.

Why we care. When conversion data breaks, campaign optimization breaks with it. Even small connection failures can quietly skew conversion tracking and weaken automated bidding. This diagnostic tool helps teams catch and fix issues early, protecting performance and reporting accuracy. If you rely on CRM imports or offline conversions, this provides a much-needed safety net.

Who benefits most. The feature is especially useful for advertisers running complex conversion pipelines, including Salesforce integrations and offline attribution setups, where small disruptions can quickly cascade into bidding and reporting issues.

The bigger picture. As automated bidding leans more heavily on accurate first-party data, visibility into data pipelines is becoming just as critical as campaign settings themselves.

Bottom line. Google Ads is giving advertisers an early warning system for data failures, helping teams fix broken connections before performance takes a hit.

First seen. The update was first spotted by digital marketer Georgi Zayakov, who shared the new option on LinkedIn.

Performance Max reporting for ecommerce: What Google is and isn’t showing you

6 February 2026 at 22:13

Performance Max has come a long way since its rocky launch. Many advertisers once dismissed it as a half-baked product, but Google has spent the past 18 months fixing real issues around transparency and control. If you wrote Performance Max off before, it’s time to take another look.

Mike Ryan, head of ecommerce insights at Smarter Ecommerce, explained why at the latest SMX Next.

Taking a fresh look at Performance Max

Performance Max traces its roots to Smart Shopping campaigns, which Google rolled out with red carpet fanfare at Google Marketing Live in 2019.

Even then, industry experts warned that transparency and control would become serious issues. They were right — and only now has Google begun to address those concerns openly.

Smart Shopping marked the low point of black-box advertising in Google Ads, at least for ecommerce. It stripped away nearly every control advertisers relied on in Standard Shopping:

  • Promotional controls.
  • Modifiers.
  • Negative keywords.
  • Search terms reporting.
  • Placement reporting.
  • Channel visibility.

Over the past 18 months, Performance Max has brought most of that functionality back, either partially or in full.

Understanding Performance Max search terms

Search terms are a core signal for understanding the traffic you’re actually buying. In Performance Max, most spend typically flows to the search network, which makes search term reporting essential for meaningful optimization.

Google even introduced a Performance Max match type — something few of us ever expected to see. That’s a big deal. It delivers properly reportable data that works with the API, should be scriptable, and finally includes cost and time dimensions that were completely missing before.

Search term insights vs. campaign search term view

Google’s first move to crack open the black box was search term insights. These insights group queries into search categories — essentially prebuilt n-grams — that roll up data at a mid-level and automatically account for typos, misspellings, and variants.

The problem? The metrics are thin. There’s no cost data, which means no CPC, no ROAS, and no real way to evaluate performance.

The real breakthrough is the new campaign-level search term view, now available in both the API and the UI.

Historically, search term reporting lived at the ad group level. Since Performance Max doesn’t use ad groups, that data had nowhere to go.

Google fixed this by anchoring search terms at the campaign level instead. The result is access to far more segments and metrics — and, finally, proper reporting we can actually use.

The main limitation: this data is available only at the search network level, without separating search from shopping. That means a single search term may reflect blended performance from both formats, rather than a clean view of how each one performed.

Search theme reporting

Search themes act as a form of positive targeting in Performance Max. You can evaluate how they’re performing through the search term insights report, which includes a Source column showing whether traffic came from your URLs, your assets, or the search themes you provided.

By totaling conversion value and conversions, you can see whether your search themes are actually driving results — or just sitting idle.

There’s more good news ahead. Google appears to be working on bringing Dynamic Search Ads and AI Max reports into Performance Max. That would unlock visibility into headlines, landing pages, and the search terms triggering ads.

Search term controls and optimization

Negative keywords

Negative keywords are now fully supported in Performance Max. At launch, Google capped campaigns at 100 negatives, offered no API access, and blocked negative keyword lists—clearly positioning the feature for brand safety, not performance.

That’s changed. Negative keywords now work with the API, support shared lists, and give advertisers real control over performance.

These negatives apply across the entire search network, including both search and shopping. Brand exclusions are the exception — you can choose to apply those only to search campaigns if needed.

Brand exclusions

Performance Max doesn’t separate brand from generic traffic, and it often favors brand queries because they’re high intent and tend to perform well. Brand exclusions exist, but they can be leaky, with some brand traffic still slipping through. If you need strict control, negative keywords are the more reliable option.

Also, Performance Max — and AI Max — may aggressively bid on competitor terms. That makes brand and competitor exclusions important tools for protecting spend and shaping intent.

Optimization strategy

Here’s a simple heuristic for spotting search terms that need attention:

  • Calculate the average number of clicks it takes to generate a conversion.
  • Identify search terms with more clicks than that average but zero conversions.

Those terms have had a fair chance to perform and didn’t. They’re strong candidates for negative keywords.
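In pandas, that heuristic is only a few lines (the file and column names are assumptions about your search terms export):

import pandas as pd

# Expects a search terms export with columns: term, clicks, conversions
terms = pd.read_csv("search_terms.csv")

# Average clicks needed to generate one conversion across the account
avg_clicks_per_conversion = terms["clicks"].sum() / terms["conversions"].sum()

# Terms that had a fair chance (above-average clicks) but never converted
candidates = terms[
    (terms["clicks"] > avg_clicks_per_conversion) & (terms["conversions"] == 0)
]
print(candidates.sort_values("clicks", ascending=False).head(20))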

That said, don’t overcorrect.

Long-tail dynamics mean a search term that doesn’t convert this month may matter next month. You’re also working with a finite set of negative keywords, so use them deliberately and prioritize the highest-impact exclusions.

Modern optimization approaches

It’s not 2018 anymore — you shouldn’t spend hours manually reviewing search terms. Automate the work instead.

Use the API for high-volume accounts, scripts for medium volume, and automated reports from the Report Editor for smaller accounts (though it still doesn’t support Performance Max).

Layer in AI for semantic review to flag irrelevant terms based on meaning and intent, then step in only for final approval. Search term reporting can be tedious, but with Google’s prebuilt n-grams and modern AI tools, there’s a smarter way to handle it.
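One possible shape for that semantic review, sketched with the OpenAI Python SDK (the model, prompt, and term list are illustrative choices, not a prescribed setup):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

terms = ["emergency plumber near me", "diy pipe repair tutorial", "plumber salary 2026"]
prompt = (
    "We sell emergency plumbing services. For each search term below, reply "
    "KEEP or EXCLUDE based on purchase intent, one per line:\n" + "\n".join(terms)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model; use whatever your stack provides
    messages=[{"role": "user", "content": prompt}],
)
# A human still reviews this output before any negatives are actually added
print(response.choices[0].message.content)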

Channels and placements reporting

Channel performance report

The channel performance report — not just for Performance Max — breaks performance out by network, including Discover, Display, Gmail, and more. It’s useful for channel visibility and understanding view-through versus click-through conversions, as well as how feed-based delivery compares to asset-driven performance.

The report includes a Sankey diagram, but it isn’t especially intuitive. The labeling is confusing and takes some decoding:

  • Search Network: Feed-based equals Shopping ads; asset-based equals RSAs and DSAs.
  • Display Network: Feed-based equals dynamic remarketing; asset-based equals responsive display ads.

Google also announced that Search Partner Network data is coming, which should add another layer of useful performance visibility.

Channel and placement controls

Unlike Demand Gen, where you can choose exactly which channels to run on, Performance Max doesn’t give you that control. You can try to influence the channel mix through your ROAS target and budget, but it’s a blunt instrument — and a slippery one at best.

Placement exclusions

The strongest control you have is excluding specific placements. Placement data is now available through the API — limited to impressions and date segments — and can also be reviewed in the Report Editor. Use this data alongside the content suitability view to spot questionable domains and spammy placements.

For YouTube, pay close attention to political and children’s content. If a placement feels irrelevant or unsafe for your brand, there’s a good chance it isn’t driving meaningful performance either.

Tools for placement review

If you run into YouTube videos in languages you don’t speak, use Google Sheets’ built-in GOOGLETRANSLATE function. It’s faster and more reliable than AI for quick translation.
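For example, with a placement’s video title in cell A2, a formula like this returns an English translation with the source language auto-detected:

=GOOGLETRANSLATE(A2, "auto", "en")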

You can also use AI-powered formulas in Sheets to do semantic triage on placements, not just search terms. These tools are just formulas, which means this kind of analysis is accessible to anyone.

Search Partner Network

Unfortunately, there’s no way to opt out of the Search Partner Network in Performance Max. You can exclude individual search partners, but there are limits.

Prioritize exclusions based on how questionable the placement looks and how much volume it’s receiving. Also note that Google-owned properties like YouTube and Gmail can’t be excluded.

Based on Standard Shopping data, the Search Partner Network consistently performs meaningfully worse than the Google Search Network. Excluding poor performers is recommended.

Device reporting and targeting

Creating a device report is easy — just add device as a segment in the “when and where ads showed” view. The tricky part is making decisions.

Device analysis

For deeper insight, dig into item-level performance in the Report Editor. Add device as a segment alongside item ID and product titles to see how individual products behave across devices. Also, compare competitor performance by device — you may spot meaningful differences that inform your strategy.

For example, you may perform far better on desktop than on mobile compared to competitors like Amazon, signaling either an opportunity or a risk.

Device targeting considerations

Device targeting is available in Performance Max and is easy to use, much like channel targeting in Demand Gen. But when you split campaigns by device, you also split your conversion data and volume—and that can hurt results.

Before you separate campaigns by device, consider:

  • How competition differs by device
  • Performance at the item and retail category level
  • The impact on overall data volume

Performance Max performs best with more data. Campaigns with low monthly conversion volume often miss their targets and rarely stay on pace. As more data flows through a campaign, Performance Max gets better at hitting goals and less likely to fall short.

Any gains from splitting by device can disappear if the algorithm doesn’t have enough data to learn. Only split when both resulting campaigns have enough volume to support effective machine learning.

Conclusion

Performance Max has changed dramatically since launch. With search term reporting, negative keywords, channel visibility, placement controls, and device targeting now available, advertisers have far more transparency and control than ever before.

It’s still not perfect — channel targeting limits and data fragmentation remain — but Performance Max is fundamentally different and far more manageable.

Success comes down to knowing what data you have, how to access it efficiently using modern tools like AI and automation, and when to apply controls based on performance insights and data volume needs.

Watch: PMax reporting for ecommerce: What Google is (and isn’t) showing you

Explore how to make smarter use of search terms, channel and placement reports, and device-level performance to improve campaign control.

Why content that ranks can still fail AI retrieval

6 February 2026 at 19:00

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations. 

In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.

This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals. 

AI systems don’t. 

They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.

When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings. 

At that point, visibility in search and visibility in AI diverges. The page exists in the index, but its meaning doesn’t survive retrieval.

The visibility gap: Ranking vs. retrieval

Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.

AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.

This difference is where the visibility gap forms. 

A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.

Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.

Dig deeper: What is GEO (generative engine optimization)?

Structural failure 1: When content never reaches AI

One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.

This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.

In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.

How to tell if your content is returned in the initial HTML

The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.

Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.

To do this, open a terminal (Command Prompt on Windows) and run a curl request against your page.
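A minimal version, assuming GPTBot’s user-agent token and a placeholder URL, looks like this:

    curl -A "GPTBot" https://www.example.com/ -o initial.html

Open the saved file and search for your primary content.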

Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.

From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.

This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.

If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.

Why heavy code still hurts retrieval, even when content is present

Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.

AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.

This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.

What actually fixes retrieval failures

The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time. 

In practice, this can usually be achieved in one of two ways: 

  • Pre-rendering the page.
  • Ensuring clean and complete content delivery in the initial HTML response.

Pre-rendered HTML

Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.

This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.

AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.

The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion. 

This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.

From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
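For teams implementing this, the sketch below shows edge-layer prerender routing written as a Cloudflare Worker-style TypeScript handler. The crawler list, the PRERENDER_ORIGIN host, and the snapshot URL scheme are illustrative assumptions, not any vendor’s actual API.

    // Minimal sketch: serve pre-rendered HTML to AI crawlers at the edge.
    // AI_CRAWLERS and PRERENDER_ORIGIN are illustrative assumptions.
    const AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"];
    const PRERENDER_ORIGIN = "https://prerender.example.com";

    export default {
      async fetch(request: Request): Promise<Response> {
        const userAgent = request.headers.get("user-agent") ?? "";
        const isAICrawler = AI_CRAWLERS.some((bot) => userAgent.includes(bot));

        if (isAICrawler) {
          // Crawlers receive a complete, static snapshot of the page:
          // no JavaScript execution or hydration required.
          const { pathname } = new URL(request.url);
          return fetch(`${PRERENDER_ORIGIN}${pathname}`);
        }

        // Human visitors still get the fully dynamic experience.
        return fetch(request);
      },
    };

The design constraint to respect: the snapshot must mirror the content users see. The two versions should differ in delivery, not in meaning.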

Clean initial content delivery

Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.

Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML. 

Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.

From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether. 

These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.

Structural failure 2: When content is optimized for keywords, not entities

Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.

While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.

AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.

When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse. The content may rank for a query, but its meaning remains ambiguous at the vector level.

This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.

Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.

Without explicit definition, entity signals weaken and associations fragment.

Structural failure 3: When structure can’t carry meaning

AI systems don’t consume content as complete pages.

Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.

Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.

Detailed header tags

Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.

Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. A header like “Enterprise plan pricing for [product name]” survives isolation; a clever teaser like “What’s the damage?” doesn’t. Weak headers produce weak signals, even when the underlying content is solid.

Dig deeper: The most important HTML tags to use for SEO success

Single-purpose sections

Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.

Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.

Structural failure 4: When conflicting signals dilute meaning

Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.

Common sources include:

Conflicting canonicals

When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions. 

The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.

Inconsistent metadata

Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.

Duplicated or lightly repeated sections

Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.

Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.

Complete visibility requires ranking and retrieval

SEO has always been about visibility, but visibility is no longer a single condition.

Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.

Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.

The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.

Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.

Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

How PR teams can measure real impact with SEO, PPC, and GEO

6 February 2026 at 18:00
How to incorporate SEO and GEO into PR measurement

PR measurement often breaks down in practice.

Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.

That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.

Working together, these teams can help PR do three things that are hard to accomplish alone:

  • Show the connection between media outreach and customer action.
  • Incorporate SEO – and now generative engine optimization (GEO) – into measurement programs.
  • Select tools that match the metrics that actually matter.

This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

Digital communication isn’t linear – and measurement shouldn’t be either

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.

In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.

That’s why measurement must start by defining the response sought, not by counting outputs.

SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.

PR measurement becomes dramatically more actionable when it adopts the same mindset.

Step 1: Show the connection between media outreach and customer action

PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”

The answer usually exists in the data. It’s just spread across systems owned by different teams.

SEO and paid media teams already track:

  • Branded and non-branded search demand.
  • Landing-page behavior.
  • Conversion paths.
  • Assisted conversions across channels.

By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.

Practical examples

  • Spikes in branded search following major media placements.
  • Referral traffic from earned links and how those visitors behave compared to other sources.
  • Increases in conversions or sign-ups after coverage appears in authoritative publications.
  • Assisted conversions where media exposure precedes search or paid clicks.

Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.
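To make the branded-search example concrete, here is a rough TypeScript sketch of the underlying calculation: comparing average daily branded-search clicks before and after a placement date. The data shape and numbers are hypothetical, not an export from any specific analytics tool.

    // Rough sketch: estimate branded-search lift around a media placement.
    // The DailyClicks shape and the sample figures are hypothetical.
    type DailyClicks = { date: string; brandedClicks: number };

    function meanClicks(days: DailyClicks[]): number {
      return days.reduce((sum, d) => sum + d.brandedClicks, 0) / days.length;
    }

    // Relative change in average daily branded clicks after the placement date.
    function brandedSearchLift(series: DailyClicks[], placementDate: string): number {
      const before = series.filter((d) => d.date < placementDate);
      const after = series.filter((d) => d.date >= placementDate);
      return (meanClicks(after) - meanClicks(before)) / meanClicks(before);
    }

    // Hypothetical usage: a major placement ran on 2026-01-15.
    const series: DailyClicks[] = [
      { date: "2026-01-13", brandedClicks: 120 },
      { date: "2026-01-14", brandedClicks: 115 },
      { date: "2026-01-15", brandedClicks: 190 },
      { date: "2026-01-16", brandedClicks: 210 },
    ];
    console.log(`Branded-search lift: ${(brandedSearchLift(series, "2026-01-15") * 100).toFixed(0)}%`);

In practice, a longer window and a seasonality baseline matter more than the arithmetic, but the principle holds: tie earned media to a measurable change in demand.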

This reframes PR from a cost center to a demand-creation channel.

Matt Bailey, a digital marketing author, professor, and instructor, said:

  • “The value of PR has been well-known by SEOs for some time. A great article pickup can influence rankings almost immediately. This was the golden link – high domain popularity, ranking impact, and incoming visitors – of which PR activities were the predominant influence.”

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Step 2: Incorporate SEO into PR measurement – then go one step further with GEO

Most communications professionals now accept that SEO matters. 

What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.

Traditional PR metrics focus on:

  • Volume of coverage.
  • Share of voice.
  • Sentiment.

SEO-informed PR adds new outcome-level indicators:

  • Authority of linking domains, not just link counts.
  • Visibility for priority topics, not just brand mentions.
  • Search demand growth tied to campaigns or announcements.

These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”

Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.

Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.

For PR and communications teams, this is a natural extension of credibility building:

  • Is your organization cited by AI systems as an authoritative source?
  • Do AI-generated summaries reflect your key messages accurately?
  • Are competitors shaping the narrative instead?

Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.

The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.

David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:

  • “Real-time content creation has always been an effective way of communicating online. But now, in the age of AI-powered search, it has become even more important. The organizations that monitor continually, act decisively, and publish quickly will become the ones people turn to for clarity. And because AI tools increasingly mediate how people experience the world, those same organizations will also become the voices that artificial intelligence amplifies.”

Dig deeper: A 90-day SEO playbook for AI-driven search visibility

Step 3: Select tools based on the response sought – not on what’s fashionable

One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.

A useful framework is to work backward from the action you want audiences to take.

If the response sought is awareness or understanding:

  • Brand lift studies (from Google, Meta, and Nielsen) measure changes in awareness, favorability, and message association.
  • These tools help PR teams demonstrate impact beyond raw reach.

If the response sought is engagement or behavior:

  • Web and campaign analytics track key events such as downloads, sign-ups, or visits to priority pages.
  • User behavior tools like heatmaps and session recordings reveal whether content actually helps users accomplish tasks.

If the response sought is long-term influence:

  • SEO visibility metrics show whether coverage improves authority and topic ownership.
  • GEO tools reveal whether AI systems recognize and reuse your content.

The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.

Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said: 

  • “If PR professionals want to prove their impact, they need to go beyond tracking SEO to also understand their visibility in GEO as well. Search is where today’s purchasing and other decision making starts, and we’ve known for a while that good (or bad) press coverage drives searches for a brand. Which is why we’ve been advising PR professionals who want to prove their impact on the brand to ‘bake cookies and befriend’ the SEO folks within their companies. Today as more and more people rely on AI search for their answers, the value of traditional blue SEO links is declining faster than the value of a Tesla. As a result, understanding and ultimately quantifying how and where your brand is showing up in AI search (aka GEO) is critical.”

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Why collaboration beats reinvention

PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.

What’s required is shared ownership of outcomes.

When these groups collaborate:

  • PR informs SEO about narrative priorities and upcoming campaigns.
  • SEO provides PR with data on audience demand and search behavior.
  • PPC teams validate messaging by testing what actually drives action.
  • Measurement becomes cumulative, not competitive.

This reduces duplication, saves budget, and produces insights that no single team could generate alone.

Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.

Today, tools are cheaper – or free – but the rule still holds.

The most valuable asset isn’t software. It’s professionals who can:

  • Ask the right questions.
  • Interpret data responsibly.
  • Translate insights into decisions.

Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.

Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.” 

I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Measurement isn’t about proving value – it’s about improving it

The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.

By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.

The theory hasn’t changed.

The opportunity to measure what matters is finally catching up.

Why most B2B buying decisions happen on Day 1 – and what video has to do with it

6 February 2026 at 17:00
Why most B2B buying decisions happen on Day 1 – and what video has to do with it

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:

  • The “viral” top-of-funnel asset that gets views but no leads.
  • The dry bottom-of-funnel product demo that gets leads but no views.

This binary thinking is breaking your pipeline.

In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.

When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving up to 1.4x more leads.

Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.

The reality: The ‘first impression rose’

The window to influence a deal closes much earlier than most marketers realize.

LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.

Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.

If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.

That requires a three-play strategy.

Play 1: Reach and prime the ‘hidden’ buying committee

The goal: Reach the people who can say ‘no’

Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.

Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign. 

But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”

In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be bought when the entire buying group – not just the user – knows you on Day 1.

The strategic shift: Cut-through creative

To reach that broader group, you can’t just be present. You have to be memorable. You need both reach and recall.

LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:

  • Be bold: Video ads featuring bold, distinctive colors see a 15% increase in engagement.
  • Be process-oriented: Messaging broken down into clear, visual steps drives 13% higher dwell times.
  • The “Goldilocks” length: Short videos between 7 and 15 seconds are the sweet spot for driving brand lift – outperforming both very short (under 6 seconds) and long-form ads.
  • The “Silent Movie” rule: Design for the eye, not the ear. 79% of LinkedIn’s audience scrolls with sound off. If your video relies on a talking head to explain the value prop in the first 5 seconds, you’ve lost 80% of the room. Use visual hooks and hard-coded captions to earn attention instantly.

Dig deeper: 5 tips to make your B2B content more human

Play 2: Educate and nudge by selling ‘buyability’

The goal: Mitigate personal and professional risk

This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).

When a B2B buyer is shortlisting vendors, they’re navigating career risk. 

Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

LinkedIn, Bain & Company - Mitigate personal and professional risk

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”

The strategic shift: Market the safety net

To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?

Momentum is safety (the “buzz” effect)

Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”

You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement. 

When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.

Authority builds trust (the “expert” effect)

If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.

Video ads featuring executive experts see 53% higher engagement.

When those experts are filmed on a conference stage, engagement lifts by 70%.

Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”

Consistency is credibility

You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.

Dig deeper: The future of B2B authority building in the AI search era

Play 3: Convert and capture by removing friction

The goal: Stop convincing, start helping

By this stage, the buyer knows you (Play 1) and trusts you (Play 2). 

Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.

Buyers at this stage feel three specific types of risk:

  • Execution risk: “Will this actually work for us?”
  • Decision risk: “What if I’m choosing wrong?”
  • Effort risk: “How much work is implementation?”

That’s why recommendations, relationships, and being relatable help close deals.

LinkedIn, Bain & Company - Number of buyability drivers influenced

The strategic shift: Answer the anxiety

Your creative should directly answer those anxieties.

Scale social proof – kill execution risk

90% of buyers say social proof is influential information. But don’t just post a logo. 

Use video to show the peer. When a buyer sees someone with their exact job title succeeding, decision risk evaporates.

Activate your employees – kill decision risk

People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.

The stat that surprises most leaders: just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data.

Show the humans who’ll answer the phone when things break.

The conversion combo – kill effort risk

Don’t leave them hanging with a generic “Learn More” button.

We see 3x higher lead gen form open rates when video ads are combined directly with lead gen forms.

The video explains the value; the form captures the intent instantly.

  • Short sales cycle (under 30 days): Use video and lead gen forms for speed.
  • Long sales cycle: Retarget video viewers with message ads from a thought leader. Don’t ask for a sale; start a conversation.

Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing

It’s a flywheel, not a funnel

If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.

In most organizations, “brand” teams and “demand” teams operate in silos. 

  • Brand owns the top of the funnel (Play 1). 
  • Demand owns the bottom (Play 3). 

They fight over budget and rarely coordinate creative.

This fragmentation kills the multiplier effect.

When you break down those silos and run these plays as a single system, the data changes.

Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.

It creates a flywheel:

  • Your broad reach (Play 1) builds the retargeting pools.
  • Your educational content (Play 2) warms up those audiences, lifting CTRs.
  • Your conversion offers (Play 3) capture demand from buyers who are already sold, lowering your CPL.

The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.

And the ones on that list are the ones that win the revenue.

Google & Bing don’t recommend separate markdown pages for LLMs

6 February 2026 at 16:24

Representatives from both the Google Search and Bing Search teams recommend against creating separate markdown (.md) pages for LLMs. The tactic serves one piece of content to the LLM and another to your users, which may technically be considered a form of cloaking and a violation of Google’s policies.

The question. Lily Ray asked on Bluesky:

  • “Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots.”

Google’s response. John Mueller from Google responded saying:

  • “I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

Recently, John Mueller also called the idea stupid, saying:

  • “Converting pages to markdown is such a stupid idea. Did you know LLMs can read images? WHY NOT TURN YOUR WHOLE SITE INTO AN IMAGE?” That is, of course, a sarcastic jab at converting your whole site to an MD file – an idea that is a bit extreme, to say the least.

I collected many of John Mueller’s comments on this topic over here.

Bing’s response. Fabrice Canel from Microsoft Bing responded saying:

  • “Lily: really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”

Why we care. Some of us like to look for shortcuts to perform well on search engines – and now on the new AI search engines and LLMs. Generally, shortcuts, if they work, only work for a limited time. Plus, these shortcuts can have unexpected negative effects.

As Lily Ray wrote on LinkedIn:

  • “I’ve had concerns the entire time about managing duplicate content and serving different content to crawlers than to humans, which I understand might be useful for AI search but directly violates search engines’ longstanding policies about this (basically cloaking).”

Your local rankings look fine. So why are calls disappearing?

5 February 2026 at 22:49
Local SEO Alligator

For many local businesses, performance looks healthier than it is.

Rank trackers still show top-three positions. Visibility reports appear steady. Yet calls and website visits from Google Business Profiles are falling — sometimes fast.

This gap is becoming a defining feature of local search today.

Rankings are holding. Visibility and performance aren’t.

The alligator has arrived in local SEO: the chart pattern where a flat rankings line and a falling performance line pull apart like an open jaw.

The visibility crisis behind stable rankings

Across multiple U.S. industries, traditional local 3-packs are being replaced — or at least supplemented — by AI-powered local packs. These layouts behave differently from the map results we’ve optimized in the past.

Analysis from Sterling Sky, based on 179 Google Business Profiles, reveals a pattern that’s hard to ignore. Clicks-to-call are dropping sharply for Jepto-managed law firms.

When AI-powered packs replace traditional listings, the landscape shifts in four critical ways:

  • Shrinking real estate: AI packs often surface only two businesses instead of three.
  • Missing call buttons: Many AI-generated summaries remove instant click-to-call options, adding friction to the customer journey.
  • Different businesses appear: The businesses shown in AI packs often don’t match those in the traditional 3-pack.
  • Accelerated monetization of local search: When paid ads are present, traditional 3-packs increasingly lose direct call and website buttons, reducing organic conversion opportunities.

A fifth issue compounds the problem:

  • Measurement blind spots: Most rank trackers don’t yet report on AI local packs. A business may rank first in a 3-pack that many users never see.

AI local packs surfaced only 32% as many unique businesses as traditional map packs in 2026, according to Sterling Sky. In 88% of the 322 markets analyzed, the total number of visible businesses declined.

At the same time, paid ads continue to take over space once reserved for organic results, signaling a clear shift toward a pay-to-play local landscape.

What Google Business Profile data shows

The same pattern appears, especially in the U.S., where Google is aggressively testing new local formats, according to GMBapi.com data. Traditional local 3-pack impressions are increasingly displaced by:

  • AI-powered local packs.
  • Paid placements inside traditional map packs: Sponsored listings now appear alongside or within the map pack, pushing organic results lower and stripping listings of call and website buttons. This breaks organic customer journeys.
  • Expanded Google Ads units: Including Local Services Ads that consume space once reserved for organic visibility.

Impression trends still fluctuate due to seasonality, market differences, and occasional API anomalies. But a much clearer signal emerges when you look at GBP actions rather than impressions.

Mentions inside AI-generated results are still counted as impressions — even when they no longer drive calls, clicks, or visits.

Some fluctuations are driven by external factors. For example, the June drop ties back to a known Google API issue. Mobile Maps impressions also appear heavily influenced by large advertisers ramping up Google Ads later in the year.

There’s no way to segment these impressions by Google Ads, organic results, or AI Mode.

Even so, user behavior is changing. Interaction rates are declining, with fewer direct actions taken from local listings.

Year-on-year comparisons in the U.S. suggest that while impression losses remain moderate and partially seasonal, GBP actions are disproportionately impacted.

As a counterfactual, data from the Dutch market — where SERP experimentation remains limited — shows far more stable action trends.

The pattern is clear. AI-driven SERP changes, expanding Google Ads, and the removal of call and website buttons from the Map Pack are shrinking organic real estate. Even when visibility looks intact, businesses have fewer chances to earn real user actions.

Local SEO is becoming an eligibility problem

Historically, local optimization centered on familiar ranking factors: proximity, relevance, prominence, reviews, citations, and engagement.

Today, another layer sits above all of them: eligibility.

Many businesses fail to appear in AI-powered local results not because they lack authority, but because Google’s systems decide they aren’t an appropriate match for the specific query context. Research from Yext and insights from practitioners like Claudia Tomina highlight the importance of alignment across three core signals:

  • Business name
  • Primary category
  • Real-world services and positioning

When these fundamentals are misaligned, businesses can be excluded from entire result types — no matter how well optimized the Google Business Profile itself may be.

How to future-proof local visibility

Surviving today’s zero-click reality means moving beyond reliance on a single, perfectly optimized Google Business Profile. Here’s your new local SEO playbook.

The eligibility gatekeeper

Failure to appear in local packs is now driven more by perceived relevance and classification than by links or review volume.

Hyper-local entity authority

AI systems cross-reference Reddit, social platforms, forums, and local directories to judge whether a business is legitimate and active. Inconsistent signals across these ecosystems quietly erode visibility.

Visual trust signals

High-quality, frequently updated photos – and increasingly video – are no longer optional. Google’s AI analyzes visual content to infer services, intent, and categorization.

Embrace the pay-to-play reality

It’s a hard truth, but Google Ads — especially Local Services Ads — are now critical to retaining prominent call buttons that organic listings are losing. A hybrid strategy that blends local SEO with paid search isn’t optional. It’s the baseline.

What this means for local search now

Local SEO is no longer a static directory exercise. Google Business Profiles still anchor local discoverability, but they now operate inside a much broader ecosystem shaped by AI validation, constant SERP experimentation, and Google’s accelerating push to monetize local search.

Discovery no longer hinges on where your GBP ranks against nearby competitors. Search systems — including Google’s AI-driven SERP features and large language models like ChatGPT and Gemini — are increasingly trying to understand what a business actually does, not just where it’s listed.

Success is no longer about being the most “optimized” profile. It’s about being widely verified, consistently active, and contextually relevant across the AI-visible ecosystem.

Our observations show little correlation between businesses that rank well in the traditional Map Pack and those favored by Google’s AI-generated local answers that are beginning to replace it. That gap creates a real opportunity for businesses willing to adapt.

In practice, this means pairing local input with central oversight.

Authentic engagement across multiple platforms, locally differentiated content, and real community signals must coexist with brand governance, data consistency, and operational scale. For single-location businesses with deep community roots, this is an advantage. Being genuinely discussed, recommended, and referenced in your local area — online and offline — gets you halfway there.

For agencies and multi-location brands, the challenge is to balance control with local nuance and ensure trusted signals extend beyond Google (e.g., Apple Maps, Tripadvisor, Yelp, Reddit, and other relevant review ecosystems). The real test is producing locally relevant content and citations at scale without losing authenticity.

Rankings may look stable. But performance increasingly lives somewhere else.

The full data. Local SEO in 2026: Why Your Rankings are Steady but Your Calls are Vanishing
