Google pushes AI Max tool with in-app ads

Google is now promoting its own AI features inside Google Ads — a rare move that inserts marketing directly into advertisers’ workflow.

What’s happening. Users are seeing promotional messages for AI Max for Search campaigns when they open campaign settings panels.

  • The notifications appear during routine account audits and updates.
  • They essentially serve as internal advertising for Google’s own tooling.

Why we care. The in-platform placement signals Google is pushing to accelerate AI adoption among advertisers, moving from optional rollouts to active promotion. While Google often introduces AI-driven features, promoting them directly within existing workflows marks a more aggressive adoption strategy.

What to watch. Whether this promotional approach expands to other Google Ads features — and how advertisers respond to marketing within their management interface.

First seen. Julie Bacchini, president and founder of Neptune Moon, spotted the notification and shared it on LinkedIn. She wrote: “Nothing like Google Ads essentially running an ad for AI Max in the settings area of a campaign.”

Bing Webmaster Tools officially adds AI Performance report

Microsoft today launched AI Performance in Bing Webmaster Tools in beta. AI Performance lets you see where, and how often, your content is cited in AI-generated answers across Microsoft Copilot, Bing’s AI summaries, and select partner integrations, the company said.

  • AI Performance in Bing Webmaster Tools shows which URLs are cited, which queries trigger those citations, and how citation activity changes over time.
  • Search Engine Land first reported on Jan. 27 that Microsoft was testing the AI Performance report.

What’s new. AI Performance is a new, dedicated dashboard inside Bing Webmaster Tools. It tracks citation visibility across supported AI surfaces. Instead of measuring clicks or rankings, it shows whether your content is used to ground AI-generated answers.

  • Microsoft framed the launch as an early step toward Generative Engine Optimization (GEO) tooling, designed to help publishers understand how their content shows up in AI-driven discovery.

What it looks like. Microsoft shared an image of the AI Performance report in Bing Webmaster Tools.

What the dashboard shows. The AI Performance dashboard introduces metrics focused specifically on AI citations:

  • Total citations: How many times a site is cited as a source in AI-generated answers during a selected period.
  • Average cited pages: The daily average number of unique URLs from a site referenced across AI experiences.
  • Grounding queries: Sample query phrases AI systems used to retrieve and cite publisher content.
  • Page-level citation activity: Citation counts by URL, highlighting which pages are referenced most often.
  • Visibility trends over time: A timeline view showing how citation activity rises or falls across AI experiences.

These metrics only reflect citation frequency. They don’t indicate ranking, prominence, or how a page contributed to a specific AI answer.

Why we care. It’s good to know where and how your content gets cited, but Bing Webmaster Tools still won’t reveal how those citations translate into clicks, traffic, or any real business outcome. Without click data, publishers still can’t tell if AI visibility delivers value.

How to use it. Microsoft said publishers can use the data to:

  • Confirm which pages are already cited in AI answers.
  • Identify topics that consistently appear across AI-generated responses.
  • Improve clarity, structure, and completeness on indexed pages that are cited less often.

The guidance mirrors familiar best practices: clear headings, evidence-backed claims, current information, and consistent entity representation across formats.

What’s next. Microsoft said it plans to “improve inclusion, attribution, and visibility across both search results and AI experiences,” and continue to “evolve these capabilities.”

Microsoft’s announcement. Introducing AI Performance in Bing Webmaster Tools Public Preview 

How to make automation work for lead gen PPC

B2B advertising faces a distinct challenge: most automation tools weren’t built for lead generation.

Ecommerce campaigns benefit from hundreds of conversions that fuel machine learning. B2B marketers don’t have that luxury. They deal with lower conversion volume, longer sales cycles, and no clear cart value to guide optimization.

The good news? Automation can still work.

Melissa Mackey, Head of Paid Search at Compound Growth Marketing, says the right strategy and signals can turn automation into a powerful driver of B2B leads. Below is a summary of the key insights and recommendations she shared at SMX Next.

The fundamental challenge: Why automation struggles with lead gen

Automation systems are built for ecommerce success, which creates three core obstacles for B2B marketers:

  • Customer journey length: Automation performs best with short journeys. A user visits, buys, and checks out within minutes. B2B journeys can last 18 to 24 months. Offline conversions only look back 90 days, leaving a large gap between early engagement and closed revenue.
  • Conversion volume requirements: Google’s automation works best with about 30 leads per campaign per month. Google says it can function with less, but performance is often inconsistent below that level. Ecommerce campaigns easily hit hundreds of monthly conversions. B2B lead gen rarely does.
  • The cart value problem: In ecommerce, value is instant and obvious. A $10 purchase tells the system something very different than a $100 purchase. Lead generation has no cart. True value often isn’t clear until prospects move through multiple funnel stages — sometimes months later.

The solution: Sending the right signals

Despite these challenges, proven strategies can make automation work for B2B lead generation.

Offline conversions: Your number one priority

Connecting your CRM to Google Ads or Microsoft Ads is essential for making automation work in lead generation. This isn’t optional. It’s the foundation. If you haven’t done this yet, stop and fix it first.

In Google Ads’ Data Manager, you’ll find hundreds of CRM integration options. The most common B2B setups include:

  • HubSpot and Salesforce: Both offer native, seamless integrations with Google Ads. Setup is simple. Once connected, customer stages and CRM data flow directly into the platform.
  • Other CRMs: If you don’t use HubSpot or Salesforce, you can build a custom data table with only the fields you want to share. Use connectors like Snowflake to send that data to Google Ads while protecting user privacy and still supplying strong automation signals.
  • Third-party integrations: If your CRM doesn’t integrate directly, tools like Zapier can connect almost anything to Google Ads. There’s a cost, but the performance gains typically pay for it many times over.
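
To make the mechanics concrete, here is a minimal sketch of what an offline conversion import boils down to: joining the click ID your site stored with the outcome your CRM recorded, then shipping it back in the format the platform expects. The CRM records, field names, and header row below are illustrative; check the click conversion import template in your Google Ads account for the exact columns and time format it currently requires.

```python
import csv

# Hypothetical CRM export: each record keeps the Google click ID captured at form fill,
# plus the stage the lead eventually reached and the value assigned to that stage.
crm_records = [
    {"gclid": "EAIaIQ_example_1", "stage": "MQL",      "value": 100,  "time": "2025-06-12 14:03:00"},
    {"gclid": "EAIaIQ_example_2", "stage": "Customer", "value": 5000, "time": "2025-07-01 09:30:00"},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Columns modeled on Google Ads' click conversion import template (verify before uploading).
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for r in crm_records:
        writer.writerow([r["gclid"], r["stage"], r["time"], r["value"], "USD"])
```

Native HubSpot and Salesforce integrations, Snowflake pipelines, and Zapier connectors all end up doing some version of this join; the difference is how much of it you have to maintain yourself.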

Embrace micro conversions with strategic values

Micro conversions signal intent. They show a “hand raiser” — someone engaged on your site who isn’t an MQL yet but is clearly interested.

The key is assigning relative value to these actions, even when you don’t know their exact revenue impact. Use a simple hierarchy to train automation what matters most:

  • Video views (value: 1): Shows curiosity, but qualification is unclear.
  • Ungated asset downloads (value: 10): Indicates stronger engagement and added effort.
  • Form fills (value: 100): Reflects meaningful commitment and willingness to share personal information.
  • Marketing qualified leads (value: 1,000): The highest-value signal and top optimization priority.

This value structure tells automation that one MQL matters more than 999 video views. Without these distinctions, campaigns chase impressive conversion rates driven by low-value actions — while real leads slip through the cracks.
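
Here’s a small, purely illustrative sketch of that weighting in action; the action names and counts are invented, and the values come straight from the hierarchy above:

```python
# Relative values from the hierarchy above (illustrative weights, not revenue).
ACTION_VALUES = {"video_view": 1, "ungated_download": 10, "form_fill": 100, "mql": 1_000}

def total_conversion_value(counts: dict[str, int]) -> int:
    """Sum the weighted value of all recorded actions."""
    return sum(ACTION_VALUES[action] * n for action, n in counts.items())

# Campaign A: lots of cheap engagement. Campaign B: far fewer actions, but one MQL.
campaign_a = {"video_view": 500, "ungated_download": 20, "form_fill": 3}
campaign_b = {"video_view": 40, "form_fill": 2, "mql": 1}

print(total_conversion_value(campaign_a))  # 1,000 -> 500 + 200 + 300
print(total_conversion_value(campaign_b))  # 1,240 -> 40 + 200 + 1,000
```

By raw conversion count, Campaign A looks like the winner; by weighted value, Campaign B wins because it produced an MQL. That distinction is what the value structure teaches the bidding system.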

Making Performance Max work for lead generation

You might dismiss Performance Max (PMax) for lead generation — and for good reason. Run it on a basic maximize conversions strategy, and it usually produces junk leads and wastes budget.

But PMax can deliver exceptional results when you combine conversion values and offline conversion data with a Target ROAS bid strategy.

One real client example shows what’s possible. They tracked three offline conversion actions — leads, opportunities, and customers — and valued customers at 50 times a lead. The results were dramatic:

  • Leads increased 150%
  • Opportunities increased 350%
  • Closed deals increased 200%

Closed deals became the campaign’s top-performing metric because they reflected real, paying customers. The key difference? Using conversion values with a Target ROAS strategy instead of basic maximize conversions.

Campaign-specific goals: An underutilized feature

Campaign-specific goals let you optimize campaigns for different conversion actions, giving you far more control and flexibility.

You can set conversion goals at the account level or make them campaign-specific. With campaign-specific goals, you can:

  • Run a mid-funnel campaign optimized only for lead form submissions using informational keywords.
  • Build audiences from those form fills to capture engaged prospects.
  • Launch a separate campaign optimized for qualified leads, targeting that warm audience with higher-value offers like demos or trials.

This approach avoids asking someone to “marry you on the first date.” It also keeps campaigns from competing against themselves by trying to optimize for conflicting goals.

Portfolio bidding: Reaching the data threshold faster

Portfolio bidding groups similar campaigns so you can reach the critical 30-conversions-per-month threshold faster.

For example, four separate campaigns might generate 12, 11, 0, and 15 conversions. On their own, none qualify. Grouped into a single portfolio, they total 38 conversions — giving automation far more data to optimize against.

You may still need separate campaigns for valid reasons — regional reporting, distinct budgets, or operational constraints. Portfolio bidding lets you keep that structure while still feeding the system enough volume to perform.

Bonus benefit: Portfolio bidding lets you set maximum CPCs. This prevents runaway bids when automation aggressively targets high-propensity users. This level of control is otherwise only available through tools like SA360.

First-party audiences: Powerful targeting signals

First-party audiences send strong signals about who you want to reach, which is critical for AI-powered campaigns.

If HubSpot or Salesforce is connected to Google Ads, you can import audiences and use them strategically:

  • Customer lists: Use them as exclusions to avoid paying for existing customers, or as lookalikes in Demand Gen campaigns.
  • Contact lists: Use them for observation to signal ideal audience traits, or for targeting to retarget engaged users.

Audiences make it much easier to trust broad match keywords and AI-driven campaign types like PMax or AI Max — approaches that often feel too loose for B2B without strong audience signals in place.

Leveraging AI for B2B lead generation

AI tools can significantly improve B2B advertising efficiency when you use them with intent. The key is remembering that most AI is trained on consumer behavior, not B2B buying patterns.

The essential B2B prompt addition

Always tell the AI you’re selling to other businesses. Start prompts with clear context, like: “You’re a SaaS company that sells to other businesses.” That single line shifts the AI’s lens away from consumer assumptions and toward B2B realities.

Client onboarding and profile creation

Use AI to build detailed client profiles by feeding it clear inputs, including:

  • What you sell and your core value.
  • Your unique selling propositions.
  • Target personas.
  • Ideal customer profiles.

Create a master template or a custom GPT for each client. This foundation sharpens every downstream AI task and dramatically improves accuracy and relevance.

Competitor research in minutes, not hours

Competitive analysis that once took 20–30 hours can now be done in 10–15 minutes. Ask AI to analyze your competitors and break down:

  • Current offers
  • Positioning and messaging
  • Value propositions
  • Customer sentiment
  • Social proof
  • Pricing strategies

AI delivers clean, well-structured tables you can screenshot for client decks or drop straight into Google Sheets for sorting and filtering. Use this insight to spot gaps, uncover opportunities, and identify clear strategic advantages.

Competitor keyword analysis

Use tools like Semrush or SpyFu to pull competitor keyword lists, then let AI do the heavy lifting. Create a spreadsheet with columns for each competitor’s keywords alongside your client’s keywords. Then ask the AI to:

  • Identify keywords competitors rank for that you don’t to uncover gaps to fill.
  • Identify keywords you own that competitors don’t to surface unique advantages.
  • Group keywords by theme to reveal patterns and inform campaign structure.

What once took hours of pivot tables, filtering, and manual cleanup now takes AI about five minutes.
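
If you’d rather script that comparison than prompt for it, the core of the gap analysis is just set arithmetic. A minimal sketch, with placeholder keyword lists standing in for the Semrush or SpyFu exports:

```python
# Placeholder keyword lists; in practice these come from Semrush or SpyFu exports.
client_keywords = {
    "b2b accounting software", "ap automation",
    "close management software", "construction accounting software",
}
competitor_keywords = {
    "CompetitorA": {"b2b accounting software", "expense management", "ap automation"},
    "CompetitorB": {"close management software", "revenue recognition software"},
}

all_competitor = set().union(*competitor_keywords.values())

gaps = all_competitor - client_keywords              # they rank, we don't: gaps to fill
unique_strengths = client_keywords - all_competitor  # we rank, they don't: unique advantages

print("Gaps to fill:", sorted(gaps))
print("Unique strengths:", sorted(unique_strengths))
```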

Automating routine tasks

  • Negative keyword review: Create an AI artifact that learns your filtering rules and decision logic. Feed it search query reports, and it returns clear add-or-ignore recommendations. You spend time reviewing decisions instead of doing first-pass analysis, which makes SQR reviews faster and easier to run more often.
  • Ad copy generation: Tools like RSA generators can produce headlines and descriptions from sample keywords and destination URLs. Pair them with your custom client GPT for even stronger starting points. Always review AI-generated copy, but refining solid drafts is far faster than writing from scratch.

Experiments: testing what works

The Experiments feature is widely underused. Put it to work by testing:

  • Different bid strategies, including portfolio vs. standard
  • Match types
  • Landing pages
  • Campaign structures

Google Ads automatically reports performance, so there’s no manual math. It even includes insight summaries that tell you what to do next — apply the changes, end the experiment, or run a follow-up test.

Solutions: Pre-built scripts made easy

Solutions are prebuilt Google Ads scripts that automate common tasks, including:

  • Reporting and dashboards
  • Anomaly detection
  • Link checking
  • Flexible budgeting
  • Negative keyword list creation

Instead of hunting down scripts and pasting code, you answer a few setup questions and the solution runs automatically. Use caution with complex enterprise accounts, but for simpler structures, these tools can save a significant amount of time.

Key takeaways

Automation wasn’t built for lead generation, but with the right strategy, you can still make it work for B2B.

  • Send the right signals: Offline conversions with assigned values aren’t optional. First-party audiences add critical targeting context. Together, these signals make AI-driven campaigns work for B2B.
  • AI is your friend: Use AI to automate repetitive work — not to replace people. Take 50 search query reports off your team’s plate so they can focus on strategy instead of tedious analysis.
  • Leverage platform tools: Experiments, Solutions, campaign-specific goals, and portfolio bidding are powerful features many advertisers ignore. Use what’s already built into your ad platforms to get more out of every campaign.

Watch: It’s time to embrace automation for B2B lead gen 

Automation isn’t just for ecommerce. Learn how to drive more leads, cut costs, improve quality, and save time with AI-powered campaigns.

Why governance maturity is a competitive advantage for SEO

How SEO governance shifts teams from reaction to prevention

Let me guess: you just spent three months building a perfectly optimized product taxonomy, complete with schema markup, internal linking, and killer metadata. 

Then, the product team decided to launch a site redesign without telling you. Now half your URLs are broken, the new templates strip out your structured data, and your boss is asking why organic traffic dropped 40%.

Sound familiar?

Here’s the thing: this isn’t an SEO failure, but a governance failure. It’s costing you nights and weekends trying to fix problems that should never have happened in the first place.

This article covers why weak governance keeps breaking SEO, how AI has raised the stakes, and how a visibility governance maturity model helps SEO teams move from firefighting to prevention.

Governance isn’t bureaucracy – it’s your insurance policy

I know what you’re thinking. “Great, another framework that means more meetings and approval forms.” But hear me out.

The Visibility Governance Maturity Model (VGMM) isn’t about creating red tape. It’s about establishing clear ownership, documented processes, and decision rights that prevent your work from being accidentally destroyed by teams who don’t understand SEO.

Think of it this way: VGMM is the difference between being the person who gets blamed when organic traffic tanks versus being the person who can point to documentation showing exactly where the process broke down – and who approved skipping the SEO review.

This maturity model:

  • Protects your work from being undone by releases you weren’t consulted on.
  • Documents your standards so you’re not explaining canonical tags for the 47th time.
  • Establishes clear ownership so you’re not expected to fix everything across six different teams.
  • Gets you a seat at the table when decisions affecting SEO are being made.
  • Makes your expertise visible to leadership in ways they understand.

The real problem: AI just made everything harder

Remember when SEO was mostly about your website and Google? Those were simpler times.

Now you’re trying to optimize for:

  • AI Overviews that rewrite your content.
  • ChatGPT citations that may or may not link back.
  • Perplexity summaries that pull from competitors.
  • Voice assistants that only cite one source.
  • Knowledge panels that conflict with your site.

And you’re still dealing with:

  • Content teams who write AI-generated fluff.
  • Developers who don’t understand crawl budget.
  • Product managers who launch features that break structured data.
  • Marketing directors who want “just one small change” that tanks rankings.

Without governance, you’re the only person who understands how all these pieces fit together. 

When something breaks, everyone expects you to fix it – usually yesterday. When traffic is up, it’s because marketing ran a great campaign. When it’s down, it’s your fault.

You become the hero the organization depends on, which sounds great until you realize you can never take a real vacation, and you’re working 60-hour weeks.

Dig deeper: Why most SEO failures are organizational, not technical

What VGMM actually measures – in terms you care about

VGMM doesn’t care about your keyword rankings or whether you have perfect schema markup. It evaluates whether your organization is set up to sustain SEO performance without burning you out. Below are the five maturity levels that translate to your daily reality:

Level 1: Unmanaged (your current nightmare)

  • Nobody knows who’s responsible for SEO decisions.
  • Changes happen without SEO review.
  • You discover problems after they’ve tanked traffic.
  • You’re constantly firefighting.
  • Documentation doesn’t exist or is ignored.

Level 2: Aware (slightly better)

  • Leadership admits SEO matters.
  • Some standards exist but aren’t enforced.
  • You have allies but no authority.
  • Improvements happen but get reversed next quarter.
  • You’re still the only one who really gets it.

Level 3: Defined (getting somewhere)

  • SEO ownership is documented.
  • Standards exist, and some teams follow them.
  • You’re consulted before major changes.
  • QA checkpoints include SEO review.
  • You’re working normal hours most weeks.

Level 4: Integrated (the dream)

  • SEO is built into release workflows.
  • Automated checks catch problems before they ship.
  • Cross-functional teams share accountability.
  • You can actually take a vacation without a disaster.
  • Your expertise is respected and resourced.

Level 5: Sustained (unicorn territory)

  • SEO survives leadership changes.
  • Governance adapts to new AI surfaces automatically.
  • Problems are caught before they impact traffic.
  • You’re doing strategic work, not firefighting.
  • The organization values prevention over reaction.

Most organizations sit at Level 1 or 2. That’s not your fault – it’s a structural problem that VGMM helps diagnose and fix.

Dig deeper: SEO’s future isn’t content. It’s governance

How VGMM works: The less boring explanation

VGMM coordinates multiple domain-specific maturity models. Think of it as a health checkup that looks at all your vital signs, not just one metric.

It evaluates maturity across domains like:

  • SEO governance: Your core competency.
  • Content governance: Are writers following standards?
  • Performance governance: Is the site actually fast?
  • Accessibility governance: Is the site inclusive?
  • Workflow governance: Do processes exist and work?

Each domain gets scored independently, then VGMM looks at how they work together. Because excellent SEO maturity doesn’t matter if the performance team deploys code that breaks the site every Tuesday or if the content team publishes AI-generated nonsense that tanks your E-E-A-T signals.

VGMM produces a 0–100% score based on:

  • Domain scores: How mature is each area?
  • Weighting: Which domains matter most for your business?
  • Dependencies: Are weaknesses in one area breaking strengths in another?
  • Coherence: Do decision rights and accountability actually align?

The final score isn’t about effort – it’s about whether governance actually works.
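
The article doesn’t spell out the scoring math, so treat the following as an illustrative sketch of the ingredients listed above, not the actual VGMM formula. It assumes equal domain weighting and a simple dependency rule where the weakest domain caps the overall score:

```python
# Illustrative only: the real VGMM weighting and dependency logic aren't published here.
domain_scores = {  # 0-100 maturity score per domain
    "seo": 80, "content": 55, "performance": 20, "accessibility": 60, "workflow": 50,
}
weights = {domain: 0.2 for domain in domain_scores}  # assume equal weights for the sketch

weighted = sum(score * weights[domain] for domain, score in domain_scores.items())

# Assumed dependency rule: a very weak domain undermines the others,
# so the overall score can't sit far above the weakest area.
weakest = min(domain_scores.values())
overall = min(weighted, weakest + 30)

print(f"Weighted average: {weighted:.0f} / 100")           # 53
print(f"Overall, dependency-capped: {overall:.0f} / 100")  # 50
```

The point of the cap: excellent SEO maturity can’t buy back a performance team that breaks the site every release, which is exactly the coherence problem VGMM is meant to surface.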

What this means for your daily life

Before VGMM-style governance:

  • Product launches a redesign → You find out when traffic drops.
  • Content team uses AI → You discover thin content in Search Console.
  • Dev changes URL structure → You spend a week fixing redirects.
  • Marketing wants “quick changes” → You explain why it’s not quick (again).
  • Site goes down → Everyone asks why you didn’t catch it.

After governance maturity improves:

  • Product can’t launch without SEO sign-off.
  • Content AI usage has review checkpoints.
  • URL changes require documented SEO approval.
  • Marketing requests go through defined workflows.
  • Site monitoring includes automated SEO health checks.

You move from reactive firefighting to proactive prevention. Your weekends become yours again.

The supporting models: What they actually check

VGMM doesn’t score you on technical SEO execution. It checks whether the organization has processes in place to prevent SEO disasters.

SEO Governance Maturity Model (SEOGMM) asks:

  • Are there documented SEO standards?
  • Who can override them, and how?
  • Do templates enforce SEO requirements?
  • Are there QA checkpoints before releases?
  • Can SEO block launches that will cause problems?

Content Governance Maturity Model (CGMM) asks:

  • Are content quality standards documented?
  • Is AI-generated content reviewed?
  • Are writers trained on SEO basics?
  • Is there a process for updating outdated content?

Website Performance Maturity Model (WPMM) asks:

  • Are Core Web Vitals monitored?
  • Can releases be rolled back if they break performance?
  • Is there a performance budget?
  • Are third-party scripts governed?

You get the idea. Each domain has its own checklist, and VGMM shows leadership where gaps create risk.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

How to pitch this to your boss

You don’t need to explain VGMM theory. You need to connect it to problems leadership already knows exist.

  • Frame it as risk reduction: “We’ve had three major traffic drops this year from changes that SEO didn’t review. VGMM helps us identify where our process breaks down so we can prevent this.”
  • Frame it as efficiency: “I’m spending 60% of my time firefighting problems that could have been prevented. VGMM establishes processes so I can focus on growth opportunities instead.”
  • Frame it as a competitive advantage: “Our competitors are getting cited in AI Overviews, and we’re not. VGMM evaluates whether we have the governance structure to compete in AI-mediated search.”
  • Frame it as scalability: “Right now, our SEO capability depends entirely on me. If I get hit by a bus tomorrow, nobody knows how to maintain what we’ve built. VGMM establishes documentation and processes that make our SEO sustainable.”
  • The ask: “I’d like to conduct a VGMM assessment to identify where our processes need strengthening.”

What success actually looks like

Organizations with higher VGMM maturity experience measurably better outcomes:

  • Fewer unexplained traffic drops because changes are reviewed.
  • More stable AI citations because content quality is governed.
  • Less rework after launches because SEO is built into workflows.
  • Clearer accountability because ownership is documented.
  • Better resource allocation because gaps are visible to leadership.

But the real win for you personally: 

  • You stop being the hero who saves the day and become the strategist who prevents disasters. 
  • Your expertise is recognized and properly resourced. 
  • You can take actual vacations. 
  • You work normal hours most of the time.

Your job becomes about building and improving, not constantly fixing.

Getting started: Practical next steps

Step 1: Self-assessment

Look at the five maturity levels. Where is your organization honestly sitting? If you’re at Level 1 or 2, you have evidence for why governance matters.

Step 2: Document current-state pain

Make a list of the last six months of SEO incidents:

  • Changes that weren’t reviewed.
  • Traffic drops from preventable problems.
  • Time spent fixing avoidable issues.
  • Requests that had to be explained multiple times.

This becomes your business case.

Step 3: Start with one domain

You don’t need to implement full VGMM immediately. Start with SEOGMM:

  • Document your standards.
  • Create a review checklist.
  • Establish who can approve exceptions.
  • Get stakeholder sign-off on the process.

Step 4: Show results 

Track prevented problems. When you catch an issue before it ships, document it. When a process prevents a regression, quantify the impact. Build your case for expanding governance.

Step 5: Expand systematically

Once SEOGMM is working, expand to related domains (content, performance, accessibility). Show how integrated governance catches problems that individual domain checks miss.

Why governance determines whether SEO survives

Governance isn’t about making your job harder. It’s about making your organization work better so your job becomes sustainable.

VGMM gives you a framework for diagnosing why SEO keeps getting undermined by other teams and a roadmap for fixing it. It translates your expertise into language that leadership understands. It protects your work from accidental destruction.

Most importantly, it moves you from being the person who’s always fixing emergencies to being the person who builds systems that prevent them.

You didn’t become an SEO professional to spend your career firefighting. VGMM helps you get back to doing the work that actually matters – the strategic, creative, growth-focused work that attracted you to SEO in the first place.

If you’re tired of watching your best work get undone by teams who don’t understand SEO, if you’re exhausted from being the only person who knows how everything works, if you want your expertise to be recognized and protected – start the VGMM conversation with your leadership.

The framework exists. What’s missing is someone in your organization saying, “We need to govern visibility like we govern everything else that matters.”

That someone is you.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Why PPC measurement feels broken (and why it isn’t)

Why PPC measurement works differently in a privacy-first world

If you’ve been managing PPC accounts for any length of time, you don’t need a research report to tell you something has changed. 

You see it in the day-to-day work: 

  • GCLIDs missing from URLs.
  • Conversions arriving later than expected.
  • Reports that take longer to explain while still feeling less definitive than they used to.

When that happens, the reflex is to assume something broke – a tracking update, a platform change, or a misconfiguration buried somewhere in the stack.

But the reality is usually simpler. Many measurement setups still assume identifiers will reliably persist from click to conversion, and that assumption no longer holds consistently.

Measurement hasn’t stopped working. The conditions it depends on have been shifting for years, and what once felt like edge cases now show up often enough to feel like a systemic change.

Why this shift feels so disorienting

I’ve been close to this problem for most of my career. 

Before Google Ads had native conversion tracking, I built my own tracking pixels and URL parameters to optimize affiliate campaigns. 

Later, while working at Google, I was involved in the acquisition of Urchin as the industry moved toward standardized, comprehensive measurement.

That era set expectations that nearly everything could be tracked, joined, and attributed at the click level. Google made advertising feel measurable, controllable, and predictable. 

As the ecosystem now shifts toward more automation, less control, and less data, that contrast can be jarring.

It has been for me. Much of what I once relied on to interpret PPC data no longer applies in the same way. 

Making sense of today’s measurement environment requires rethinking those assumptions, not trying to restore the old ones. This is how I think about it now.

Dig deeper: How to evolve your PPC measurement strategy for a privacy-first future

The old world: click IDs and deterministic matching

For many years, Google Ads measurement followed a predictable pattern. 

  • A user clicked an ad. 
  • A click ID, or gclid, was appended to the URL. 
  • The site stored it in a cookie. 
  • When a conversion fired, that identifier was sent back and matched to the click.

This produced deterministic matches, supported offline conversion imports, and made attribution relatively easy to explain to stakeholders. 

As long as the identifier survived the journey, the system behaved in ways most advertisers could reason about. 

We could literally see what happened with each click and which ones led to individual conversions.
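
Mechanically, that deterministic model was a simple join on the click ID. A stripped-down sketch with invented records, just to show the shape of it:

```python
# Click log: what the ad platform recorded when the ad was clicked.
clicks = [
    {"gclid": "abc123", "campaign": "Brand", "clicked_at": "2025-03-01T10:00:00"},
    {"gclid": "def456", "campaign": "Non-brand", "clicked_at": "2025-03-01T11:30:00"},
]

# Conversion log: what the site recorded when the stored gclid came back with a conversion.
conversions = [
    {"gclid": "abc123", "converted_at": "2025-03-02T09:15:00", "value": 250},
]

clicks_by_id = {c["gclid"]: c for c in clicks}

# Deterministic match: each conversion is credited to exactly one known click.
for conv in conversions:
    click = clicks_by_id.get(conv["gclid"])
    if click:
        print(f"{click['campaign']} drove a ${conv['value']} conversion")
    else:
        print("Conversion with no surviving click ID: unattributable in this model")
```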

That reliability depended on a specific set of conditions.

  • Browsers needed to allow parameters through. 
  • Cookies had to persist long enough to cover the conversion window. 
  • Users had to accept tracking by default. 

Luckily, those conditions were common enough that the model worked really well.

Why that model breaks more often now

Browsers now impose tighter limits on how identifiers are stored and passed.

Apple’s Intelligent Tracking Prevention, enhanced tracking protection, private browsing modes, and consent requirements all reduce how long tracking data persists, or whether it’s stored at all.

URL parameters may be stripped before a page loads. Cookies set via JavaScript may expire quickly. Consent banners may block storage entirely.

Click IDs sometimes never reach the site, or they disappear before a conversion occurs.

This is expected behavior in modern browser environments, not an edge case, so we have to account for it.

Trying to restore deterministic click-level tracking usually means working against the constant push toward more privacy and the resulting browser behaviors.

This is another of the many evolutions of online advertising we simply have to get on board with, and I’ve found that designing systems to function with partial data beats fighting the tide.

The adjustment isn’t just technical

On my own team, GA4 is a frequent source of frustration. Not because it’s incapable, but because it’s built for a world where some data will always be missing. 

We hear the same from other advertisers: the data isn’t necessarily wrong, but it’s harder to reason about.

This is the bigger challenge. Moving from a world where nearly everything was observable to one where some things are inferred requires accepting that measurement now operates under different conditions. 

That mindset shift has been uneven across the industry because measurement lives at the periphery of where many advertisers spend most of their time, working in ad platforms.

A lot of effort goes into optimizing ad platform settings when sometimes the better use of time might’ve been fixing broken data so better decisions could be made.

Dig deeper: Advanced analytics techniques to measure PPC

What still works: Client-side and server-side approaches

So what approaches hold up under current constraints? The answer involves both client-side and server-side measurement.

Pixels still matter, but they have limits

Client-side pixels, like the Google tag, continue to collect useful data.

They fire immediately, capture on-site actions, and provide fast feedback to ad platforms, whose automated bidding systems rely on this data.

But these pixels are constrained by the browser. Scripts can be blocked, execution can fail, and consent settings can prevent storage. A portion of traffic will never be observable at the individual level.

When pixel tracking is the only measurement input, these gaps affect both reporting and optimization. Pixels haven’t stopped working. They just no longer cover every case.

Changing how pixels are delivered

Some responses to declining pixel data focus on the mechanics of how pixels are served rather than measurement logic.

Google Tag Gateway changes where tag requests are routed, sending them through a first-party, same-origin setup instead of directly to third-party domains.

This can reduce failures caused by blocked scripts and simplify deployment for teams using Google Cloud.

What it doesn’t do is define events, decide what data is collected, or correct poor tagging choices. It improves delivery reliability, not measurement logic.

This distinction matters when comparing Tag Gateway and server-side GTM.

  • Tag Gateway focuses on routing and ease of setup.
  • Server-side GTM enables event processing, enrichment, and governance. It requires more maintenance and technical oversight, but it provides more control.

The two address different problems.

Here’s the key point: better infrastructure affects how data moves, not what it means.

Event definitions, conversion logic, and consistency across systems still determine data quality.

A reliable pipeline delivers whatever it’s given: garbage in, garbage out.

Offline conversion imports: Moving measurement off the browser

Offline conversion imports take a different approach, moving measurement away from the browser entirely. Conversions are recorded in backend systems and sent to Google Ads after the fact.

Because this process is server to server, it’s less affected by browser privacy restrictions. It works for longer sales cycles, delayed purchases, and conversions that happen outside the site. 

This is why Google commonly recommends running offline imports alongside pixel-based tracking. The two cover different parts of the journey. One is immediate, the other persists.

Offline imports also align with current privacy constraints. They rely on data users provide directly, such as email addresses during a transaction or signup.

The data is processed server-side and aggregated, reducing reliance on browser identifiers and short-lived cookies.

Offline imports don’t replace pixels. They reduce dependence on them.

Dig deeper: Offline conversion tracking: 7 best practices and testing strategies

How Google fills the gaps

Even with pixels and offline imports working together, some conversions can’t be directly observed.

Matching when click IDs are missing

When click IDs are unavailable, Google Ads can still match conversions using other inputs.

This often begins with deterministic matching through hashed first-party identifiers such as email addresses, when those identifiers can be associated with signed-in Google users.

This is what Enhanced Conversions help achieve.
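
The privacy-preserving part is that the identifier is normalized and hashed before it leaves your systems; only the hash is matched against signed-in users. A minimal sketch of that step, assuming lowercase-and-trim normalization; confirm the exact rules in Google’s current Enhanced Conversions documentation:

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize, then SHA-256 hash, an email address for enhanced conversions.

    Normalization here is lowercase + trimmed whitespace; confirm any
    additional rules against current platform guidance before relying on this.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("  Jane.Doe@Example.com "))
# Only this hash (never the raw address) is sent for matching against signed-in users.
```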

When deterministic matching isn’t possible, the system relies on aggregated and validated signals rather than reconstructing individual click paths.

These can include session-level attributes and limited, privacy-safe IP information, combined with timing and contextual constraints.

This doesn’t recreate the old click-level model, but it allows conversions to be associated with prior ad interactions at an aggregate level.

One thing I’ve noticed: adding these inputs typically improves matching before it affects bidding.

Bidding systems account for conversion lag and validate new signals over time, which means imported or modeled conversions may appear in reporting before they’re fully weighted in optimization.

Matching, attribution, and bidding are related but separate processes. Improvements in one don’t immediately change the others.

Modeled conversions as a standard input

Modeled conversions are now a standard part of Google Ads and GA4 reporting.

They’re used when direct observation isn’t possible, such as when consent is denied or identifiers are unavailable.

These models are constrained by available data and validated through consistency checks and holdback experiments.

When confidence is low, modeling may be limited or not applied. Modeled data should be treated as an expected component of measurement rather than an exception.

Dig deeper: Google Ads pushes richer conversion imports

Boundaries still matter

Tools like Google Tag Gateway or Enhanced Conversions for Leads help recover measurement signal, but they don’t override user intent. 

Routing data through a first-party domain doesn’t imply consent. Ad blockers and restrictive browser settings are explicit signals. 

Overriding them may slightly increase the measured volume, but it doesn’t align with users’ expectations regarding how your organization uses their data.

Legal compliance and user intent aren’t the same thing. Measurement systems can respect both, but doing so requires deliberate choices.

Designing for partial data

Missing signals are normal. Measurement systems that assume full visibility will continue to break under current conditions.

Redundancy helps: pixels paired with hardened delivery, offline imports paired with enhanced identifiers, and multiple incomplete signals instead of a single complete one.

But here’s where things get interesting. Different systems will see different things, and this creates a tension many advertisers now face daily.

Some clients tell us their CRM data points clearly in one direction, while Google Ads automation, operating on less complete inputs, nudges campaigns another way.

In most cases, neither system is wrong. They’re answering different questions with different data, on different timelines. Operating in a world of partial observability means accounting for that tension rather than trying to eliminate it.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Making peace with partial observability

The shift toward privacy-first measurement changes how much of the user journey can be directly observed. That changes our jobs.

The goal is no longer perfect reconstruction of every click, but building measurement systems that remain useful when signals are missing, delayed, or inferred.

Different systems will continue to operate with different views of reality, and alignment comes from understanding those differences rather than trying to eliminate them.

In this environment, durable measurement depends less on recovering lost identifiers and more on thoughtful data design, redundancy, and human judgment.

Measurement is becoming more strategic than ever.

How SEO leaders can explain agentic AI to ecommerce executives

How to communicate agentic AI to ecommerce leadership without the hype

Agentic AI is increasingly appearing in leadership conversations, often accompanied by big claims and unclear expectations. For SEO leaders working with ecommerce brands, this creates a familiar challenge.

Executives hear about autonomous agents, automated purchasing, and AI-led decisions, and they want to know what this really means for growth, risk, and competitiveness.

What they don’t need is more hype. They need clear explanations, grounded thinking, and practical guidance. 

This is where SEO leaders can add real value, not by predicting the future, but by helping leadership understand what is changing, what isn’t, and how to respond without overreacting. Here’s how.

Start by explaining what ‘agentic’ actually means

A useful first step is to remove the mystery from the term itself. Agentic systems don’t replace customers, they act on behalf of customers. The intent, preferences, and constraints still come from a person.

What changes is who does the work.

Discovery, comparison, filtering, and sometimes execution are handled by software that can move faster and process more information than a human can.

When speaking to executive teams, a simple framing works best:

  • “We’re not losing customers, we’re adding a new decision-maker into the journey. That decision-maker is software acting as a proxy for the customer.” 

Once this is clear, the conversation becomes calmer and more practical, and the focus moves away from fear and toward preparation.

Keep expectations realistic and avoid the hype

Another important role for SEO leaders is to slow the conversation down. Agentic behavior will not arrive everywhere at the same time. Its impact will be uneven and gradual.

Some categories will see change earlier because their products are standardized and data is already well structured. Others will move more slowly because trust, complexity, or regulation makes automation harder.

This matters because leadership teams often fall into one of two traps:

  1. Panic, where plans are rewritten too quickly, budgets move too fast, and teams chase futures that may still be some distance away. 
  2. Dismissal, where nothing changes until performance clearly drops, and by then the response is rushed.

SEO leaders can offer a steadier view. Agentic AI accelerates trends that already exist. Personalized discovery, fewer visible clicks, and more pressure on data quality are not new problems. 

Agents simply make them more obvious. Seen this way, agentic AI becomes a reason to improve foundations, not a reason to chase novelty.

Dig deeper: Are we ready for the agentic web?

Change the conversation from rankings to eligibility

One of the most helpful shifts in executive conversations is moving away from rankings as the main outcome of SEO. In an agent-led journey, the key question isn’t “do we rank well?” but “are we eligible to be chosen at all?”

Eligibility depends on clarity, consistency, and trust. An agent needs to understand what you sell, who it is for, how much it costs, whether it is available, and how risky it is to choose you on behalf of a user. This is a strong way to connect SEO to commercial reality.

Questions worth raising include whether product information is consistent across systems, whether pricing and availability are reliable, and whether policies reduce uncertainty or create it. Framed this way, SEO becomes less about chasing traffic and more about making the business easy to select.

Explain why SEO no longer sits only in marketing

Many executives still see SEO as a marketing channel, but agentic behavior challenges that view.

Selection by an agent depends on factors that sit well beyond marketing. Data quality, technical reliability, stock accuracy, delivery performance, and payment confidence all play a role.

SEO leaders should be clear about this. This isn’t about writing more content. It’s about making sure the business is understandable, reliable, and usable by machines.

Positioned correctly, SEO becomes a connecting function that helps leadership see where gaps in systems or data could prevent the brand from being selected. This often resonates because it links SEO to risk and operational health, not just growth.

Dig deeper: How to integrate SEO into your broader marketing strategy

Be clear that discovery will change first

For most ecommerce brands, the earliest impact of agentic systems will be at the top of the funnel. Discovery becomes more conversational and more personal.

Users describe situations, needs, and constraints instead of typing short search phrases, and the agent then turns that context into actions.

This reduces the value of simply owning category head terms. If an agent knows a user’s budget, preferences, delivery expectations, and past behavior, it doesn’t behave like a first-time visitor. It behaves like a well-informed repeat customer.

This creates a reporting challenge. Some SEO work will no longer look like direct demand creation, even though it still influences outcomes. Leadership teams need to be prepared for this shift.

Reframe consideration as filtering, not persuasion

The middle of the funnel also changes shape. Today, consideration often involves reading reviews, comparing options, and seeking reassurance.

In an agent-led journey, consideration becomes a filtering process, where the agent removes options it believes the user would reject and keeps those that fit.

This has clear implications. Generic content becomes less effective as a traffic driver because agents can generate summaries and comparisons instantly. Trust signals become structural, meaning claims need to be backed by consistent and verifiable information.

In many cases, a brand may be chosen without the user being consciously aware of it. That can be positive for conversion, but risky for long-term brand strength if recognition isn’t built elsewhere.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

Set honest expectations about measurement

Executives care about measurement, and agentic AI makes this harder. As more discovery and consideration happen inside AI systems, fewer interactions leave clean attribution trails. Some impact will show up as direct traffic, and some will not be visible at all.

SEO leaders should address this early. This isn’t a failure of optimization. It reflects the limits of today’s analytics in a more mediated world.

The conversation should move toward directional signals and blended performance views, rather than precise channel attribution that no longer reflects how decisions are made.

Promote a proactive, low-risk response

The most important part of the leadership discussion is what to do next. The good news is that most sensible responses to agentic AI are low risk.

Improving product data quality, reducing inconsistencies across platforms, strengthening reliability signals, and fixing technical weaknesses all help today, regardless of how quickly agents mature.

Investing in brand demand outside search also matters. If agents handle more of the comparison work, brands that users already trust by name are more likely to be selected.

This reassures leaders that action doesn’t require dramatic change, only disciplined improvement.

Agentic AI changes the focus, not the fundamentals

For SEO leaders, agentic AI changes the focus of the role. The work shifts from optimizing pages to protecting eligibility, from chasing visibility to reducing ambiguity, and from reporting clicks to explaining influence.

This requires confidence, clear communication, and a willingness to challenge hype. Agentic AI makes SEO more strategic, not any less important.

Agentic AI should not be treated as an immediate threat or a guaranteed advantage. It’s a shift in how decisions are made.

For ecommerce brands, the winners will be those that stay calm, communicate clearly, and adapt their SEO thinking from driving clicks to earning selection.

That is the conversation SEO leaders should be having now.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026

What repeated ChatGPT runs reveal about brand visibility

We know AI responses are probabilistic – if you ask an AI the same question 10 times, you’ll get 10 different responses.

But how different are the responses?

That’s the question Rand Fishkin explored in some interesting research.

And it has big implications for how we should think about tracking AI visibility for brands.

In his research, he tested prompts asking for recommendations across all sorts of products and services, including everything from chef’s knives to cancer care hospitals and Volvo dealerships in Los Angeles.

Basically, he found that:

  • AIs rarely recommend the same list of brands in the same order twice.
  • For a given topic (e.g., running shoes), AIs recommend a certain handful of brands far more frequently than others.

For my research, as always, I’m focusing exclusively on B2B use cases. Plus, I’m building on Fishkin’s work by addressing these additional questions:

  • Does prompt complexity affect the consistency of AI recommendations?
  • Does the competitiveness of the category affect the consistency of recommendations?

Methodology

To explore those questions, I first designed 12 prompts:

  • Competitive vs. niche: Six of the prompts are about highly competitive B2B software categories (e.g., accounting software), and the other six are about less crowded categories (e.g., user entity behavior analytics (UEBA) software). I identified the categories using Contender’s database, which tracks how many brands ChatGPT associates with 1,775 different software categories.
  • Simple vs. nuanced prompts: Within both sets of “competitive” and “niche” prompts, half of the prompts are simple (“What’s the best accounting software?”) and the other half are nuanced, including a persona and use case (“For a Head of Finance focused on ensuring financial reporting accuracy and compliance, what’s the best accounting software?”).

I ran each of the 12 prompts 100 times through the logged-out, free version of ChatGPT at chatgpt.com (i.e., not the API). I used a different IP address for each of the 1,200 interactions to simulate 1,200 different users starting new conversations.

Limitations: This research only covers responses from ChatGPT. But given the patterns in Fishkin’s results and the similar probabilistic nature of LLMs, you can probably generalize the directional (not absolute value) findings below to most/all AIs.


So what happens when 100 different people submit the same prompt to ChatGPT, asking for product recommendations?

How many ‘open slots’ in ChatGPT responses are available to brands?

On average, ChatGPT will mention 44 brands across 100 different responses. But one of the response sets included as many as 95 brands – it really depends on the category.

How many brands does ChatGPT draw from, on average?

Competitive vs. niche categories

On that note, for prompts covering competitive categories, ChatGPT mentions about twice as many brands per 100 responses compared to the responses to prompts covering “niche” categories. (This lines up with the criteria I used to select the categories I studied.)

Simple vs. nuanced prompts

On average, ChatGPT mentioned slightly fewer brands in response to nuanced prompts. But this wasn’t a consistent pattern – for any given software category, sometimes nuanced questions ended up with more brands mentioned, and sometimes simple questions did.

This was a bit surprising, since I expected more specific requests (e.g., “For a SOC analyst needing to triage security alerts from endpoints efficiently, what’s the best EDR software?”) to consistently yield a narrower set of potential solutions from ChatGPT.

I think ChatGPT may not tailor its list of solutions to a specific use case simply because it doesn’t have a deep understanding of most brands. (More on this data in an upcoming note.)

Return of the ’10 blue links’

In each individual response, ChatGPT will, on average, mention only 10 brands.

There’s quite a range, though – a minimum of 6 brands per response and a maximum of 15 when averaging across response sets.

How many brands per response, on average?

But a single response typically names about 10 brands regardless of category or prompt type.

The big difference is in how much the pool of brands rotates across responses – competitive categories draw from a much deeper bench, even though each individual response names a similar count.

Everything old (in SEO) truly is new again (in GEO/AEO). It reminds me of trying to get a placement in one of Google’s “10 blue links”.

Dig deeper: How to measure your AI search brand visibility and prove business impact

How consistent are ChatGPT’s brand recommendations?

When you ask ChatGPT for a B2B software recommendation 100 different times, there are only ~5 brands, on average, that it’ll mention 80%+ of the time.

To put that in context, that’s just 11% of the ~44 brands it’ll mention at all across those 100 responses.

ChatGPT knows ~44 brands in your category

So it’s quite competitive to become one of the brands ChatGPT consistently mentions whenever someone asks for recommendations in your category.

As you’d expect, these “dominant” brands tend to be big, established brands with strong recognition. For example, the dominant brands in the accounting software category are QuickBooks, Xero, Wave, FreshBooks, Zoho, and Sage.

If you’re not a big brand, you’re better off being in a niche category:

It's easier to get good AI visibility in niche categories

When you operate in a niche category, not only are you literally competing with fewer companies, but there are also more “open slots” available to you to become a dominant brand in ChatGPT’s responses.

In niche categories, 21% of all the brands ChatGPT mentions are dominant brands, getting mentioned 80%+ of the time.

Compare this to just 7% of all brands being dominant in competitive categories, where the majority of brands (72%) are languishing in the long tail, getting mentioned less than 20% of the time.

The responses to nuanced prompts are harder to dominate

A nuanced prompt doesn’t dramatically change the long tail of little-seen brands (with <20% visibility), but it does change the “winner’s circle.” Adding persona context to a prompt makes it a bit more difficult to reach the dominant tier – you can see the steeper “cliff” a brand has to climb in the “nuanced prompts” graph above.

This makes intuitive sense: when someone asks “best accounting software for a Head of Finance,” ChatGPT has a more specific answer in mind and commits a bit more strongly to fewer top picks.

Still, it’s worth noting that the overall pool doesn’t shrink much – ChatGPT mentions ~42 brands in 100 responses to nuanced prompts, just a handful fewer than the ~46 mentioned in response to simple prompts. If nuanced prompts make the winner’s circle a bit more exclusive, why don’t they also narrow the total field?

Partly, it could be that the “nuanced” questions we fed it weren’t meaningfully narrower or more specific than what was implied in the simple questions we asked.

But, based on other data I’m seeing, I think this is partly about ChatGPT not knowing enough about most brands to be more selective. I’ll share more on this in an upcoming note.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

What does this mean for B2B marketers?

If you’re not a dominant brand, pick your battles – niche down

It’s never been more important to differentiate. 21% of mentioned brands reach dominant status in niche categories vs. 7% in competitive ones.

Without time and a lot of money for brand marketing, an upstart tech company isn’t going to become a dominant brand in a broad, established category like accounting software.

But the field is less competitive when you lean into your unique, differentiating strengths. ChatGPT is more likely to treat you like a dominant brand if you work to make your product known as “the best accounting software for commercial real estate companies in North America.”

Most AI visibility tracking tools are grossly misleading

Given the inconsistency of ChatGPT’s recommendations, a single spot-check for any given prompt is nearly meaningless. Unfortunately, checking each prompt just once per time period is exactly what most AI visibility tracking tools do.

If you want anything approaching a statistically significant visibility score for any given prompt, you need to run the prompt at least dozens of times, even 100+ times, depending on how precise you need the data to be.

But that’s obviously not practical for most people, so my suggestion is: For the key, bottom-of-funnel prompts you’re tracking, run them each ~5 times whenever you pull data.

That’ll at least give you a reasonable sense of whether your brand tends to show up most of the time, some of the time, or never.

Your goal should be to have a confident sense of whether your brand is in the little-seen long tail, the visible middle, or the dominant top-tier for any given prompt. Whether you use my tiers of ‘under 20%’, ‘20–80%’, and ‘80%+’, or your own thresholds, this is the approach that follows the data and common sense.
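To make the tiering concrete, here’s a minimal Python sketch of that classification. It assumes you’ve already run the prompt several times and extracted the set of brands mentioned in each run (how you run the prompt and parse out brand names is up to you and isn’t shown):

```python
from collections import Counter

def visibility_tiers(runs, low=0.20, high=0.80):
    """Classify brands into visibility tiers from repeated runs of one prompt.

    `runs` is a list of responses, each represented as the set of brands
    mentioned in that response (however you choose to extract them).
    """
    total = len(runs)
    counts = Counter(brand for brands in runs for brand in set(brands))
    tiers = {"dominant": [], "middle": [], "long_tail": []}
    for brand, n in counts.items():
        share = n / total
        if share >= high:
            tiers["dominant"].append((brand, share))
        elif share >= low:
            tiers["middle"].append((brand, share))
        else:
            tiers["long_tail"].append((brand, share))
    return tiers

# Example: 5 runs of the same bottom-of-funnel prompt.
runs = [
    {"QuickBooks", "Xero", "Wave"},
    {"QuickBooks", "Xero", "FreshBooks"},
    {"QuickBooks", "Sage", "Xero"},
    {"QuickBooks", "Zoho"},
    {"QuickBooks", "Xero", "FreshBooks"},
]
print(visibility_tiers(runs))
```

With only ~5 runs the percentages are coarse, which is fine: the goal is a read on “most of the time, some of the time, or never,” not a precise score.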


What’s next?

In future newsletters and LinkedIn posts, I’m going to build on these findings with new research:

  • How does ChatGPT talk about the brands it consistently recommends? Is it indicative of how much ChatGPT “knows” about brands?
  • Do different prompts with the same search intent tend to produce the same set of recommendations?
  • How consistent is “rank” in the responses? Do dominant brands tend to get mentioned first?

This article was originally published on Visible on beehiiv (as Most AI visibility tracking is misleading (here’s my new data)) and is republished with permission.

Reddit says 80 million people now use its search weekly

Reddit search

Eighty million people use Reddit search every week, Reddit said on its Q4 2025 earnings call last week. The increase followed a major change: Reddit merged its core search with its AI-powered Reddit Answers and began positioning the platform as a place where users can start — and finish — their searches.

  • Executives framed the move as a response to changing behavior. People are increasingly researching products and making decisions by asking questions within communities rather than relying solely on traditional search engines.
  • Reddit is betting it can keep more of that intent on-platform, rather than acting mainly as a source of links for elsewhere.

Why we care. Reddit is becoming a place where people start — and complete — their searches without ever touching Google. For brands, that means visibility on Reddit now matters as much as ranking in traditional and AI search for many buying decisions.

Reddit’s search ambitions. CEO Steve Huffman said Reddit made “significant progress” in Q4 by unifying keyword search with Reddit Answers, its AI-driven Q&A experience. Users can now move between standard search results and AI answers in a single interface, with Answers also appearing directly inside search results.

  • “Reddit is already where people go to find things,” Huffman said, adding the company wants to become an “end-to-end search destination.”
  • More than 80 million people searched Reddit weekly in Q4, up from 60 million a year earlier, as users increasingly come to the platform to research topics — not just scroll feeds or click through from Google.

Reddit Answers is growing. Reddit Answers is driving much of that growth. Huffman said Answers queries jumped from about 1 million a year ago to 15 million in Q4, while overall search usage rose sharply in parallel.

  • He said Answers performs best for open-ended questions—what to buy, watch, or try—where people want multiple perspectives instead of a single factual answer. Those queries align naturally with Reddit’s community-driven discussions.
  • Reddit is also expanding Answers beyond text. Huffman said the company is piloting “dynamic agentic search results” that include media formats, signaling a more interactive and immersive search experience ahead.

Search is a ‘big one’ for Reddit. Huffman said the company is testing new app layouts that give search prominent placement, including versions with a large, always-visible search bar at the top of the home screen.

  • COO Jennifer Wong said search and Answers represent a major opportunity, even though monetization remains early on some surfaces.
  • Wong described Reddit search behavior as “incremental and additive” to existing engagement and often tied to high-intent moments, such as researching purchases or comparing options.

AI answers make Reddit more important. Huffman also linked Reddit’s search push to its partnerships with Google and OpenAI. He said Reddit content is now the most-cited source in AI-generated answers, highlighting the platform’s growing influence on how people find information.

  • Reddit sees AI summaries as an opportunity — to move users from AI answers into Reddit communities, where they can read discussions, ask follow-up questions, and participate.
  • If someone asks “what the best speaker is,” he said, Reddit wants users to discover not just a summary, but the community where real people are actively debating the topic.

Reddit earnings. Reddit Reports Fourth Quarter and Full Year 2025 Results; Announces $1 Billion Share Repurchase Program

OpenAI starts testing ChatGPT ads

OpenAI confirmed today that it’s rolling out its first live test of ads in ChatGPT, showing sponsored messages directly inside the app for select users.

The details. The ads will appear in a clearly labeled section beneath the chat interface, not inside responses, keeping them visually separate from ChatGPT’s answers.

  • OpenAI will show ads to logged-in users on the free tier and its lower-cost Go subscription.
  • Advertisers won’t see user conversations or influence ChatGPT’s responses, even though ads will be tailored based on what OpenAI believes will be helpful to each user, the company said.

How ads are selected. During the test, OpenAI matches ads to conversation topics, past chats, and prior ad interactions.

  • For example: A user researching recipes might see ads for meal kits or grocery delivery. If multiple advertisers qualify, OpenAI shows the most relevant option first.

User controls. Users get granular controls over the experience. They can dismiss ads, view and delete separate ad history and interest data, and toggle personalization on or off.

  • Turning personalization off limits ads to the current chat.
  • Free users can also opt out of ads in exchange for fewer daily messages or upgrade to a paid plan.

Why we care. ChatGPT is one of the world’s largest consumer AI platforms. Even a limited ad rollout could mark a major shift in how conversational AI gets monetized — and how brands reach users.

Bottom line. OpenAI is officially moving into ads inside ChatGPT, testing how sponsored content can coexist with conversational AI at massive scale.

OpenAI’s announcement. Testing ads in ChatGPT (OpenAI)

Google AI Mode doesn’t favor above-the-fold content: Study

AI Mode depth doesn't matter

Google’s AI Mode isn’t more likely to cite content that appears “above the fold,” according to a study from SALT.agency, a technical SEO and content agency.

  • After analyzing more than 2,000 URLs cited in AI Mode responses, researchers found no correlation between how high text appears on a page and whether Google’s AI selects it for citation.

Pixel depth doesn’t matter. AI Mode cited text from across entire pages, including content buried thousands of pixels down.

  • Citation depth showed no meaningful relationship to visibility.
  • Average depth varied by vertical, from about 2,400 pixels in travel to 4,600 pixels in SaaS, with many citations far below the traditional “above the fold” area.

Page layout affects depth, not visibility. Templates and design choices influenced how far down the cited text appeared, but not whether it was cited.

  • Pages with large hero images or narrative layouts pushed cited text deeper, while simpler blog or FAQ-style pages surfaced citations earlier.
  • No layout type showed a visibility advantage in AI Mode.

Descriptive subheadings matter. One consistent pattern emerged: AI Mode frequently highlighted a subheading and the sentence that followed it.

  • This suggests Google uses heading structures to navigate content, then samples opening lines to assess relevance, a behavior consistent with long-standing search practices, according to SALT.

What Google is likely doing. SALT believes AI Mode relies on the same fragment indexing technology Google has used for years. Pages are broken into sections, and the most relevant fragment is retrieved regardless of where it appears on the page.

What they’re saying. While the study examined only one structural factor and one AI model, the takeaway is clear: there’s no magic formula for AI Mode visibility. Dan Taylor, partner and head of innovation (organic and AI) at SALT.agency, said:

  • “Our study confirms that there is no magic template or formula for increased visibility in AI Mode responses – and that AI Mode is not more likely to cite text from ‘above the fold.’ Instead, the best approach mirrors what’s worked in search for years: create well-structured, authoritative content that genuinely addresses the needs of your ideal customers.
  • “…the data clearly debunks the idea that where the information sits within a page has an impact on whether it will be cited.”

Why we care. The findings challenge the idea that AI-specific templates or rigid page structures drive better AI Mode visibility. Chasing “AI-optimized” layouts may distract from work that actually matters.

About the research. SALT analyzed 2,318 unique URLs cited in AI Mode responses for high-value queries across travel, ecommerce, and SaaS. Using a Chrome bookmarklet and a 1920×1080 viewport, researchers recorded the vertical pixel position of the first highlighted character in each AI-cited fragment. They also cataloged layouts and elements, such as hero sections, FAQs, accordions, and tables of contents.

The study. Research: Does Structuring Your Content Improve the Chances of AI Mode Surfacing?

A preview of ChatGPT’s ad controls just surfaced

OpenAI ChatGPT ad platform

A newly discovered settings panel offers a first detailed look at how ads could work inside ChatGPT, including how personalization and privacy controls are designed.

Driving the news. Entrepreneur Juozas Kaziukėnas found a way to trigger ChatGPT’s upcoming ad settings interface. The panel repeatedly stresses that advertisers won’t see user chats, history, memories, personal details, or IP addresses.

What the settings reveal. The interface lays out a structured ad system with dedicated controls:

  • A history tab logs ads users have seen in ChatGPT.
  • An interests tab stores inferred preferences based on ad interactions and feedback.
  • Each ad includes options to hide or report it.
  • Users can delete ad history and interests separately from their general ChatGPT data.

Personalization options. Users can turn ad personalization on or off. When it’s on, ChatGPT uses saved ad history and interest signals to tailor ads. When it’s off, ads still appear but rely only on the current conversation for context.

  • There’s also an option to personalize ads using past conversations and memory features, though the interface stresses that chat content isn’t shared with advertisers. Accounts with memory disabled won’t see this option active.

Why we care. This settings panel offers the clearest view yet of how ad personalization and privacy controls could work with ChatGPT ads. It points to a system built on strict privacy boundaries. The controls suggest ads will rely on contextual signals and opt-in personalization, not deep user tracking. That shift makes creative relevance and in-conversation intent more important than traditional audience profiling for brands preparing for conversational ad environments.

The bigger picture. The discovery suggests OpenAI is building an ad system that mirrors familiar controls from major ad platforms while prioritizing clear privacy boundaries and user choice.

Bottom line. ChatGPT ads aren’t live yet, but the framework is coming into focus — pointing to a future where conversational ads come with granular privacy and personalization controls.

First seen. Kaziukėnas shared the preview of the platform on LinkedIn.

What Google and Microsoft patents teach us about GEO


Generative engine optimization (GEO) represents a shift from optimizing for keyword-based ranking systems to optimizing for how generative search engines interpret and assemble information. 

While the inner workings of generative AI are famously complex, patents and research papers filed by major tech companies such as Google and Microsoft provide concrete insight into the technical mechanisms underlying generative search. By analyzing these primary sources, we can move beyond speculation and into strategic action.

This article analyzes the most insightful patents to provide actionable lessons for three core pillars of GEO: query fan-out, large language model (LLM) readability, and brand context.

Why researching patents is so important for learning GEO

Patents and research papers are primary, evidence-based sources that reveal how AI search systems actually work. The knowledge gained from these sources can be used to draw concrete conclusions about how to optimize these systems. This is essential in the early stages of a new discipline such as GEO.

Patents and research papers reveal technical mechanisms and design intent. They often describe retrieval architectures, such as: 

  • Passage retrieval and ranking.
  • Retrieval-augmented generation (RAG) workflows.
  • Query processing, including query fan-out, grounding, and other components that determine which content passages LLM-based systems retrieve and cite. 

Knowing these mechanisms explains why LLM readability, chunk relevance, and brand and context signals matter.

Primary sources reduce reliance on hype and checklists. Secondary sources, such as blogs and lists, can be misleading. Patents and research papers let you verify claims and separate evidence-based tactics from marketing-driven advice.

Patents enable hypothesis-driven optimization. Understanding the technical details helps you form testable hypotheses, such as how content structure, chunking, or metadata might affect retrieval, ranking, and citation, and design small-scale experiments to validate them.

In short, patents and research papers provide the technical grounding needed to:

  • Understand why specific GEO tactics might work.
  • Test and systematize those tactics.
  • Avoid wasting effort on unproven advice.

This makes them a central resource for learning and practicing generative engine optimization and SEO. 

That’s why I’ve been researching patents for more than 10 years and founded the SEO Research Suite, the first database for GEO- and SEO-related patents and research papers.

How do you learn GEO

Why we need to differentiate when talking about GEO

In many discussions about generative engine optimization, too little distinction is made between the different goals that GEO can pursue.

One goal is improving the citability of LLMs so your content is cited more often as the source. I refer to this as LLM readability optimization.

Another goal is brand positioning for LLMs, so a brand is mentioned more often by name. I refer to this as brand context optimization.

Each of these goals relies on different optimization strategies. That’s why they must be considered separately.

Differentiating GEO

The three foundational pillars of GEO

Understanding the following three concepts is strategically critical. 

These pillars represent fundamental shifts in how machines interpret queries, process content, and understand brands, forming the foundation for advanced GEO strategies. 

They are the new rules of digital information retrieval.

LLM readability: Crafting content for AI consumption

LLM readability is the practice of optimizing content so it can be effectively processed, deconstructed, and synthesized by LLMs. 

It goes beyond human readability and includes technical factors such as: 

  • Natural language quality.
  • Logical document structure.
  • A clear information hierarchy.
  • The relevance of individual text passages, often referred to as chunks or nuggets.

Brand context: Building a cohesive digital identity

Brand context optimization moves beyond page-level optimization to focus on how AI systems synthesize information across an entire web domain. 

The goal is to build a holistic, unified characterization of a brand. This involves ensuring your overall digital presence tells a consistent and coherent story that an AI system can easily interpret.

Query fan-out: Deconstructing user intent

Query fan-out is the process by which a generative engine deconstructs a user’s initial, often ambiguous query into multiple specific subqueries, themes, or intents. 

This allows the system to gather a more comprehensive and relevant set of information from its index before synthesizing a final generated answer.

These three pillars are not theoretical. They are actively being built into the architecture of modern search, as the following patents and research papers reveal.

Patent deep dive: How generative engines understand user queries (query fan-out)

Before a generative engine can answer a question, it must first develop a clear understanding of the user’s true intent. 

The patents below describe a multi-step process designed to deconstruct ambiguity, explore topics comprehensively, and ensure the final answer aligns with a confirmed user goal rather than the initial keywords alone.

Microsoft’s ‘Deep search using large language models’: From ambiguous query to primary intent

Microsoft’s “Deep search using large language models” patent (US20250321968A1) outlines a system that prioritizes intent by confirming a user’s true goal before delivering highly relevant results. 

Instead of treating an ambiguous query as a single event, the system transforms it into a structured investigation.

The process unfolds across several key stages:

  • Initial query and grounding: The system performs a standard web search using the original query to gather context and a set of grounding results.
  • Intent generation: A first LLM analyzes the query and the grounding results to generate multiple likely intents. For a query such as “how do points systems work in Japan,” the system might generate distinct intents like “immigration points system,” “loyalty points system,” or “traffic points system.”
  • Primary intent selection: The system selects the most probable intent. This can happen automatically, by presenting options to the user for disambiguation, or by using personalization signals such as search history.
  • Alternative query generation: Once a primary intent is confirmed, a second LLM generates more specific alternative queries to explore the topic in depth. For an academic grading intent, this might include queries like “German university grading scale explained.”
  • LLM-based scoring: A final LLM scores each new search result for relevance against the primary intent rather than the original ambiguous query. This ensures only results that precisely match the confirmed goal are ranked highly.

The key insight from this patent is that search is evolving into a system that resolves ambiguity first. 

Final results are tailored to a user’s specific, confirmed goal, representing a fundamental departure from traditional keyword-based ranking.
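For illustration only, here’s a rough sketch of that flow in Python. The web-search and LLM steps are stubbed out with placeholder functions, and the intent examples are the ones from the patent discussion above; this is not Microsoft’s implementation.

```python
def web_search(query):
    # Placeholder: return grounding results for a query.
    return [f"result for '{query}' #{i}" for i in range(3)]

def generate_intents(query, grounding):
    # Placeholder for the first LLM call: propose likely intents.
    return ["immigration points system", "loyalty points system", "traffic points system"]

def select_primary_intent(intents, user_history=None):
    # Placeholder: pick via personalization signals, or ask the user to disambiguate.
    return intents[0]

def generate_alternative_queries(intent):
    # Placeholder for the second LLM call: expand the confirmed intent.
    return [f"{intent} explained", f"{intent} requirements 2025"]

def score_results(results, intent):
    # Placeholder for the final LLM call: score relevance against the
    # confirmed intent, not the original ambiguous query.
    return sorted(results, key=lambda r: intent.split()[0] in r, reverse=True)

def deep_search(query):
    grounding = web_search(query)
    intents = generate_intents(query, grounding)
    primary = select_primary_intent(intents)
    results = [r for q in generate_alternative_queries(primary) for r in web_search(q)]
    return score_results(results, primary)

print(deep_search("how do points systems work in Japan"))
```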

Google’s ‘thematic search’: Auto-clustering topics from top results

Google’s “thematic search” patent (US12158907B1) provides the architectural blueprint for features such as AI Overviews. The system is designed to automatically identify and organize the most important subtopics related to a query. 

It analyzes top-ranked documents, uses an LLM to generate short summary descriptions of individual passages, and then clusters those summaries to identify common themes.

The direct implication is a shift from a simple list of links to a guided exploration of a topic’s most important facets. 

This process organizes information for users and allows the engine to identify which themes consistently appear across top-ranking documents, forming a foundational layer for establishing topical consensus.
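As a rough illustration of the clustering step, here’s a sketch that swaps the patent’s LLM-generated summaries and clustering method for TF-IDF vectors and k-means (scikit-learn). The summaries are made-up inputs; this is not Google’s implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Short summary descriptions of passages from top-ranked documents (assumed inputs).
summaries = [
    "quickbooks pricing tiers for small businesses",
    "xero pricing plans and monthly cost",
    "how to migrate accounting data to xero",
    "importing quickbooks data into a new tool",
    "best accounting software for freelancers",
    "top accounting tools for sole traders",
]

vectors = TfidfVectorizer().fit_transform(summaries)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Group summaries by cluster label to reveal the recurring themes.
themes = {}
for summary, label in zip(summaries, labels):
    themes.setdefault(label, []).append(summary)

for label, members in themes.items():
    print(f"Theme {label}: {members}")
```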

Google’s ‘thematic search’: Auto-clustering topics from top results

Google’s ‘stateful chat’: Generating queries from conversation history

The concept of synthetic queries in Google’s “Search with stateful chat” patent (US20240289407A1) reveals another layer of intent understanding. 

The system generates new, relevant queries based on a user’s entire session history rather than just the most recent input. 

By maintaining a stateful memory of the conversation, the engine can predict logical next steps and suggest follow-up queries that build on previous interactions.

The key takeaway is that queries are no longer isolated events. Instead, they’re becoming part of a continuous, context-aware dialogue. 

This evolution requires content to do more than answer a single question. It must also fit logically within a broader user journey.

Google’s ‘stateful chat’: Generating queries from conversation history

Patent deep dive: Crafting content for AI processing (LLM readability)

Once a generative engine has disambiguated user intent and fanned out the query, its next challenge is to find and evaluate content chunks that can precisely answer those subqueries. This is where machine readability becomes critical. 

The following patents and research papers show how engines evaluate content at a granular, passage-by-passage level, rewarding clarity, structure, and factual density.

The ‘nugget’ philosophy: Deconstructing content into atomic facts

The GINGER research paper introduces a methodology for improving the factual accuracy of AI-generated responses. Its core concept involves breaking retrieved text passages into minimal, verifiable information units, referred to as nuggets.

By deconstructing complex information into atomic facts, the system can more easily trace each statement back to its source, ensuring every component of the final answer is grounded and verifiable.

The lesson from this approach is clear: Content should be structured as a collection of self-contained, fact-dense nuggets. 

Each paragraph or statement should focus on a single, provable idea, making it easier for an AI system to extract, verify, and accurately attribute that information.
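A minimal sketch of what “nuggetizing” a passage could look like, using a naive sentence split as a stand-in for the paper’s actual method:

```python
import re

def to_nuggets(passage):
    """Split a passage into candidate atomic facts: one sentence, one claim.

    Real systems use an LLM or trained model for this step; a sentence split
    is a crude stand-in that still illustrates the idea.
    """
    sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
    return [s for s in sentences if len(s.split()) > 3]  # drop fragments

passage = (
    "Rayleigh scattering is the scattering of light by particles much smaller "
    "than its wavelength. It affects shorter wavelengths more strongly. "
    "That is why the daytime sky appears blue."
)

for i, nugget in enumerate(to_nuggets(passage), 1):
    print(i, nugget)
```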

The ‘nugget’ philosophy: Deconstructing content into atomic facts

Google’s span selection: Pinpointing the exact answer

Google’s “Selecting answer spans” patent (US11481646B2) describes a system that uses a multilevel neural network to identify and score specific text spans, or chunks, within a document that best answer a given question. 

The system evaluates candidate spans, computes numeric representations based on their relationship to the query, and assigns a final score to select the single most relevant passage.

The key insight is that the relevance of individual paragraphs is evaluated with intense scrutiny. This underscores the importance of content structure, particularly placing a direct, concise answer immediately after a question-style heading. 

The patent provides the technical justification for the answer-first model, a core principle of modern GEO strategy.
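As an intuition pump only, here’s a toy version of span selection that swaps the patent’s multilevel neural network for a simple term-overlap score; the query and candidate spans are made up:

```python
def overlap_score(query, span):
    """Toy relevance score: fraction of query terms that appear in the span."""
    q_terms = set(query.lower().split())
    s_terms = set(span.lower().split())
    return len(q_terms & s_terms) / len(q_terms)

def best_answer_span(query, spans):
    """Return the candidate span with the highest relevance to the query."""
    return max(spans, key=lambda span: overlap_score(query, span))

spans = [
    "Founded in 2012, our team works with clients across Europe.",
    "Query fan-out is the process of splitting one query into multiple subqueries.",
    "Generative engines synthesize answers from retrieved passages.",
]
print(best_answer_span("what is query fan-out", spans))
```

The direct, concise span that answers the question wins, which is exactly why the answer-first paragraph under a question-style heading tends to be the one extracted.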

Google's span selection: Pinpointing the exact answer

The consensus engine: Validating answers with weighted terms

Google’s “Weighted answer terms” patent (US10019513B1) explains how search engines establish a consensus around what constitutes a correct answer.

This patent is closely associated with featured snippets, but the technology Google developed for featured snippets is one of the foundational methodologies behind passage-based retrieval used today by AI search systems to select passages for answers.

The system identifies common question phrases across the web, analyzes the text passages that follow them, and creates a weighted term vector based on terms that appear most frequently in high-quality responses. 

For a query such as “Why is the sky blue?” terms like “Rayleigh scattering” and “atmosphere” receive high weights.

The key lesson is that to be considered an accurate and authoritative source, content must incorporate the consensus terminology used by other expert sources on the topic. 

Deviating too far from this established vocabulary can cause content to be scored poorly for accuracy, even when it is factually correct.
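Here’s a simplified sketch of the weighted-term idea: build a term-weight vector from passages that answer the same question, then score a candidate answer by how much of that consensus vocabulary it uses. The math is my own simplification, not the patent’s.

```python
from collections import Counter

STOPWORDS = {"the", "is", "a", "of", "and", "by", "because", "in", "to", "than", "more"}

def term_weights(passages):
    """Weight terms by how often they appear across known-good answer passages."""
    counts = Counter(
        word
        for passage in passages
        for word in passage.lower().split()
        if word not in STOPWORDS
    )
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

def consensus_score(candidate, weights):
    """Sum the weights of consensus terms the candidate answer actually uses."""
    words = set(candidate.lower().split())
    return sum(w for term, w in weights.items() if term in words)

answers = [
    "the sky is blue because rayleigh scattering affects short wavelengths in the atmosphere",
    "rayleigh scattering by gases in the atmosphere scatters blue light more than red",
    "blue light is scattered most by the atmosphere due to rayleigh scattering",
]
weights = term_weights(answers)

print(consensus_score("rayleigh scattering in the atmosphere makes the sky look blue", weights))
print(consensus_score("the sky reflects the color of the ocean", weights))
```

The first candidate uses the consensus vocabulary and scores well; the second is penalized even though it sounds plausible.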

Patent deep dive: Building your brand’s digital DNA (brand context)

While earlier patents focus on the micro level of queries and content chunks, this final piece operates at the macro level. The engine must understand not only what is being said but also who is saying it. 

This is the essence of brand context, representing a shift from optimizing individual pages to projecting a coherent brand identity across an entire domain. 

The following patent shows how AI systems are designed to interpret an entity by synthesizing information from across its full digital presence.

Google’s entity characterization: The website as a single prompt

The methodology described in Google’s “Data extraction using LLMs” patent (WO2025063948A1) outlines a system that treats an entire website as a single input to an LLM. The system scans and interprets content from multiple pages across a domain to generate a single, synthesized characterization of the entity. 

This is not a copy-and-paste summary but a new interpretation of the collected information that is better suited to an intended purpose, such as an ad or summary, while still passing quality checks that verbatim text might fail.

The patent also explains that this characterization is organized into a hierarchical graph structure with parent and leaf nodes, which has direct implications for site architecture:

Each patent concept maps to a corresponding GEO strategy:

  • Parent nodes (broad attributes like “Services”): Create broad, high-level “hub” pages for core business categories (e.g., /services/).
  • Leaf nodes (specific details like “Pricing”): Develop specific, granular “spoke” pages for detailed offerings (e.g., /services/emergency-plumbing/).

The key implication is that every page on a website contributes to a single brand narrative.

Inconsistent messaging, conflicting terminology, or unclear value propositions can cause an AI system to generate a fragmented and weak entity characterization, reducing a brand’s authority in the system’s interpretation.
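One way to sanity-check this on your own site is to map URLs into a parent/leaf (hub/spoke) structure. The sketch below assumes a /hub/spoke/ URL convention, which is my assumption for illustration, not something the patent prescribes:

```python
from collections import defaultdict
from urllib.parse import urlparse

def hub_spoke_map(urls):
    """Group URLs into parent (hub) and leaf (spoke) nodes based on path depth."""
    graph = defaultdict(list)
    for url in urls:
        parts = [p for p in urlparse(url).path.split("/") if p]
        if len(parts) >= 2:
            graph["/" + parts[0] + "/"].append(url)    # leaf page under its hub
        elif len(parts) == 1:
            graph.setdefault("/" + parts[0] + "/", []) # the hub page itself
    return dict(graph)

urls = [
    "https://example.com/services/",
    "https://example.com/services/emergency-plumbing/",
    "https://example.com/services/boiler-installation/",
    "https://example.com/pricing/",
]
for hub, leaves in hub_spoke_map(urls).items():
    print(hub, "->", leaves)
```

The point isn’t the code; it’s verifying that every detailed “leaf” page actually sits under a hub that describes it.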

Google’s entity characterization: The website as a single prompt

The GEO playbook: Actionable lessons derived from the patents

These technical documents aren’t merely theoretical. They provide a clear, actionable playbook for aligning content and digital strategy with the core mechanics of generative search. The principles revealed in these patents form a direct guide for implementation.

Principle 1: Optimize for disambiguated intent, not just keywords

Based on the “Deep Search” and “Thematic Search” patents, the focus must shift from targeting single keywords to comprehensively answering the specific, disambiguated intents a user may have.

Actionable advice 

  • For a target query, brainstorm the different possible user intents. 
  • Create distinct, highly detailed content sections or separate pages for each one, using clear, question-based headings to signal the specific intent being addressed.

Principle 2: Structure for machine readability and extraction

Synthesizing lessons from the GINGER paper, the “answer spans” patent, and LLM readability guidance, it’s clear that structure is critical for AI processing.

Actionable advice

Apply the following structural rules to your content:

  • Use the answer-first model: Structure content so the direct answer appears immediately after a question-style heading. Follow with explanation, evidence, and context.
  • Write in nuggets: Compose short, self-contained paragraphs, each focused on a single, verifiable idea. This makes each fact easier to extract and attribute.
  • Leverage structured formats: Use lists and tables whenever possible. These formats make data points and comparisons explicit and easily parsable for an LLM.
  • Employ a logical heading hierarchy: Use H1, H2, and H3 tags to create a clear topical map of the document. This hierarchy helps an AI system understand the context and scope of each section.

Principle 3: Build a unified and consistent entity narrative

Drawing directly from the “Data extraction using LLMs” patent, domainwide consistency is no longer a nice-to-have. It’s a technical requirement for building a strong brand context.

Actionable advice

  • Conduct a comprehensive content audit. 
  • Ensure mission statements, service descriptions, value propositions, and key terminology are used consistently across every page, from the homepage to blog posts to the site footer.

Principle 4: Speak the language of authoritative consensus

The “Weighted answer terms” patent shows that AI systems validate answers by comparing them against an established consensus vocabulary.

Actionable advice

  • Before writing, analyze current featured snippets, AI Overviews, and top-ranking documents for a given query. 
  • Identify recurring technical terms, specific nouns, and phrases they use. 
  • Incorporate this consensus vocabulary to signal accuracy and authority.

Principle 5: Mirror the machine’s hierarchy in your architecture

The parent-leaf node structure described in the entity characterization patent provides a direct blueprint for effective site architecture.

Actionable advice

  • Design site architecture and internal linking to reflect a logical hierarchy. Broad parent category pages should link to specific leaf detail pages. 
  • This structure makes it easier for an LLM to map brand expertise and build an accurate hierarchical graph.

These five principles aren’t isolated tactics. 

They form a single, integrated strategy in which site architecture reinforces the brand narrative, content structure enables machine extraction, and both align to answer a user’s true, disambiguated intent.


Aligning with the future of information retrieval

Patents and research papers from the world’s leading technology companies offer a clear view of the future of search. 

Generative engine optimization is fundamentally about making information machine-interpretable at two critical levels: 

  • The micro level of the individual fact, or chunk.
  • The macro level of the cohesive brand entity. 

By studying these documents, you can shift from a reactive approach of chasing algorithm updates to a proactive one of building digital assets aligned with the core principles of how generative AI understands, structures, and presents information.

Why GA4 alone can’t measure the real impact of AI SEO

Why GA4 alone can’t measure the real impact of AI SEO

If you’re relying on GA4 alone to measure the impact of AI SEO, you’re navigating with a broken compass.

Don’t misunderstand me. It’s a reasonable launch pad. But to understand how audiences discover, evaluate, and ultimately choose brands, measurement must move beyond the bounds of Google’s tooling.

SEO is a journey, not a destination. If you optimize only for attributable visits, large parts of that journey disappear from view.

Sessions are an outcome. They can’t contextualize consideration sets increasingly shaped by algorithms and AI well before a visit ever happens.

Don’t lose potential customers in the Bermuda Triangle of traditional SEO measurement. Harness the power of share of voice to steer user intent. Guide them to you by mapping your brand visibility in AI analytics.

Measuring AI visits with GA4

Links are becoming more prevalent in AI systems. Traffic is climbing. GA4 makes it easy to set up a custom report to track these sessions.

Create an exploration with “session source / medium” as the dimension and “sessions” as the metric. Then apply this regex filter on the referrer:

.*(chatgpt|openai|claude|gemini|bard|copilot|perplexity|you\.com|meta\.ai|grok|huggingface|deepseek|mistral|manus|alexaplus|edgeservices|poe).*
Measuring AI visits with GA4

Don’t be concerned if the output report is messy. That’s normal. Many AI systems send multiple sets of partial referral information. Some send none at all, so sessions appear as dark traffic.
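If you export the report (or work from raw referrer data), you can apply the same regex from the article outside GA4 as well. A minimal sketch, with made-up rows and the caveats noted as comments:

```python
import re

# The same AI-referrer pattern used in the GA4 exploration filter above.
AI_REFERRER = re.compile(
    r".*(chatgpt|openai|claude|gemini|bard|copilot|perplexity|you\.com|meta\.ai|"
    r"grok|huggingface|deepseek|mistral|manus|alexaplus|edgeservices|poe).*",
    re.IGNORECASE,
)

def classify(source_medium):
    """Label a session source/medium string as an AI referral or other traffic."""
    return "ai_referral" if AI_REFERRER.match(source_medium) else "other"

rows = [
    ("chatgpt.com / referral", 120),
    ("perplexity.ai / referral", 45),
    ("google / organic", 5400),    # AI Overviews and AI Mode clicks hide here
    ("(direct) / (none)", 3100),   # some AI traffic arrives as dark traffic
]
totals = {}
for source_medium, sessions in rows:
    bucket = classify(source_medium)
    totals[bucket] = totals.get(bucket, 0) + sessions
print(totals)
```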

This report is an easy first step. But don’t be fooled into thinking it can measure the impact of AI on your brand on its own.

The most viewed AI outputs – Google’s AI Overviews and AI Mode – can’t be seen here. They are attributed to either “google / organic” or “(direct) / (none),” depending on how the user accessed Google.

With these limitations, looking only at GA4 traffic from generative AI is not a holistic enough data source to understand the reality of usage by your target audience and the impact on your brand.

Other data sources are needed.

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery


Google Search Console and Bing Webmaster Tools don’t separate AI queries

Google Search Console and Bing Webmaster Tools don’t separate AI queries

Bing Webmaster Tools technically reports Copilot data. But in the most Microsoftesque fashion, chat is combined with web metrics, obscuring the chat data and making the report ineffective for understanding the impact of generative AI.

This approach laid the foundation for Google Search Console to do the same. AI Overviews and AI Mode impressions and clicks are lumped in with Search, and the Gemini app is not included at all.

What you can do is look for more conversational-style queries using a Google Search Console regex, such as:

^(who|what|whats|when|where|wheres|why|how|which|should)\b|.*\b(benefits of|difference between|advantages|disadvantages|examples of|meaning of|guide to|vs|versus|compare|comparison|alternative|alternatives|types of|ways to|tips|pros|cons|worth it|best|top)\b.*

But this is becoming less valuable as query fan-out becomes the standard, making synthetic queries indistinguishable from human queries while inflating impression numbers.

Worse, both GSC and BWT will become increasingly myopic as websites are bypassed by MCP connections or accessed directly by AI agents.

Again, other data sources are needed.

AI agent analytics with log files

Both Google and ChatGPT offer AI agents that can browse and, with permission, convert on a human’s behalf.

When an AI agent uses a text-based browser, it can’t be tracked by cookie-based analytics.

If the agent switches to a visual browser, it often accepts cookies (78% of the time in my testing). But this creates problems in GA4:

  • Odd engagement metrics. These are agent behaviors, not human ones.
  • An unnatural resurgence of desktop traffic. Agents use desktop browsers exclusively.
  • An uptick in Chrome. Agents run on Chromium.

On the plus side, agentic conversions are recorded, but they are attributed to direct traffic.

As a result, many SEOs are turning to bot logs, where AI agent requests can be identified. But those requests are not a headcount of humans sending agents to complete tasks.

AI agents - bot logs

When an agent renders a page in a visual browser, it fires multiple requests for every asset. CSS. JS. Images. Fonts. A bloated front end equals inflated request counts, making raw volume a vanity metric.

The insight lies not in totals, but in paths.

Most popular paths by crawler

Follow the request flow through the site to the conversion success page. If there are plenty of requests but none reach the conversion path, you know the journey is broken.
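A minimal sketch of that path analysis over a raw access log. The log format, the user-agent markers, and the conversion path are all assumptions; adapt them to your own stack and the agents that actually appear in your logs:

```python
import re
from collections import defaultdict

# Common-log-style line: request in quotes, user agent in the final quoted field (assumed layout).
LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[^"]*".*"(?P<ua>[^"]*)"$')

# Example substrings only; check your own logs for the agents that matter to you.
AGENT_MARKERS = ("ChatGPT-User", "OAI-SearchBot", "PerplexityBot")
CONVERSION_PATH = "/thank-you/"

def agent_paths(lines):
    """Group requested paths by AI agent, ignoring static assets that inflate counts."""
    paths = defaultdict(list)
    for line in lines:
        match = LOG_LINE.search(line)
        if not match:
            continue
        path, ua = match.group("path"), match.group("ua")
        agent = next((m for m in AGENT_MARKERS if m in ua), None)
        if agent and not path.endswith((".css", ".js", ".png", ".jpg", ".woff2")):
            paths[agent].append(path)
    return paths

def reached_conversion(paths):
    """Did each agent's request flow ever reach the conversion success page?"""
    return {agent: CONVERSION_PATH in visited for agent, visited in paths.items()}

sample = [
    '1.2.3.4 - - [10/Feb/2026:10:00:00] "GET /pricing/ HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; OAI-SearchBot/1.0)"',
    '1.2.3.4 - - [10/Feb/2026:10:00:02] "GET /thank-you/ HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (compatible; OAI-SearchBot/1.0)"',
    '5.6.7.8 - - [10/Feb/2026:11:00:00] "GET /app.css HTTP/1.1" 200 800 "-" "Mozilla/5.0 (compatible; PerplexityBot/1.0)"',
]
print(reached_conversion(agent_paths(sample)))
```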


Dig deeper: How to segment traffic from LLMs in GA4

Traditional SEO reporting isn’t up to the task of tracking AI

To track the impact of AI SEO, you need to reassess your reporting. 

AI SEO’s benefits extend beyond the bounds of GA4, Google Search Console, and log file analysis, all of which assume the user reached your website directly from an AI surface. That’s not required for brand value.

Many SEO tools are now adding AI tracking, or are built entirely around it. The methodology is imperfect, with chat outcomes that are probabilistic, not deterministic. It’s similar to running focus groups.

With an unbiased sample, unbiased prompts, and regular testing, the resulting trends are valuable, even if any individual response is not. They reveal the set of brands an AI system associates with a given intent, forming a consensus view of a credible consideration set.

But AI search analytics tools are not all created equal.

Make sure your tool tracks not only website citations, but also in-chat brand mentions and citations of brand assets such as social media profiles, videos, map listings, and apps.

These are no less valuable than a website link. Recognizing this reflects SEO’s growth.

As an industry, we are returning to meaningful marketing KPIs like share of voice. Understanding brand visibility for relevant intents is what ultimately drives market share.

It’s not SEO’s job to optimize a website. It’s to build a well-known, top-rated, and trusted digital brand. That is the foundation of visibility across every organic surface.

How to diagnose and fix the biggest blocker to PPC growth

Why PPC optimization fails to scale and how to find the real constraint

We’ve all been there. A client wants to scale their Google Ads account from €10,000 per month to €100,000. So, you do what any good PPC manager would do:

  • Refine your bidding strategy.
  • Test new ad copy variations.
  • Expand your keyword portfolio.
  • Optimize landing pages.
  • Improve Quality Scores.
  • Launch Performance Max campaigns.

Three months later, you’ve increased ad spend by 15%. The client is… fine with it. But you know you should be doing better.

Here’s the uncomfortable truth: Most pay-per-click (PPC) optimization work is sophisticated procrastination.

What the theory of constraints teaches us about PPC

The theory of constraints, developed by Eliyahu Goldratt for manufacturing systems, reveals something counterintuitive. Every system is limited by exactly one bottleneck at any given time.

Making your marketing team twice as efficient won’t help if production capacity is the constraint. Similarly, improving your ad copy click-through rate (CTR) by 20% won’t move the needle if your real constraint is budget approval or landing page conversion rate.

The theory demands radical focus. Identify the single weakest link and treat everything else as less important.

Applied to PPC, this means: Stop optimizing everything. Find your number one constraint. Fix only that and then move on.




7 constraints that prevent PPC scaling

In my years of managing PPC accounts, I’ve found that almost every scaling challenge falls into one of seven categories:

1. Budget

Signal: You could profitably spend more, but you’re capped by client approval.

Example: Your campaigns are profitable at €10,000 per month with room to spend €50,000, but your client won’t approve the additional budget. Sometimes it’s risk aversion, but other times it’s a cash flow issue. 

The fix: Build a business case demonstrating profitability with a higher spend. Show historical return on ad spend (ROAS), competitive benchmarks, and projected returns.

What to ignore: Avoid ad copy testing, keyword expansion, bidding optimization, and new campaigns. None of this matters if you can’t spend more money anyway.

Dig deeper: PPC campaign budgeting and bidding strategies

2. Impression share

Signal: You’re already capturing 90%+ impression share and can’t buy more traffic.

Example: You’re targeting a niche B2B market with only 1,000 relevant searches per month.

The fix: Expand to related keywords or use broader match types. Alternatively, enter new geographic markets or add complementary platforms like Microsoft Ads or LinkedIn Ads.

What to ignore: Don’t worry about bidding optimization, since you’re already buying almost all available impressions.

3. Creative

Signal: You have high impression share but low CTRs, resulting in a premium cost per click (CPC).

Example: You’re showing ads on 80% of searches, but CTR is 2% when the industry average is 5%.

The fix: Aggressively test ad copy to find better message-market fit and more compelling creative.

What to ignore: Avoid keyword expansion. Your ads are already visible, they just aren’t getting clicks.

4. Conversion rate

Signal: You’re generating strong traffic volume and acceptable CPC, but terrible conversion rates.

Example: You’re getting 10,000 clicks per month. But you have a 1% conversion rate when you should be getting 5%+.

The fix: Optimize landing pages, improve offers, and refine sales funnels.

What to ignore: Don’t launch more traffic campaigns. You’re already wasting the traffic you have.

5. Fulfillment

Signal: Your campaigns could generate more leads. But the client’s sales or operations team can’t handle more.

Example: You’re generating 500 leads per month, but sales can only process 100.

The fix: This is a client operations issue, not a PPC issue. Help them identify it, but know that the solution lies outside your control. Do more business consulting for your client while maintaining the current PPC level.

What to ignore: Pause all PPC optimization, as the system can’t absorb more volume.

6. Profitability

Signal: You can scale volume, but cost per acquisition (CPA) is too high to be profitable.

Example: You need €50 CPA to break even, but you’re currently at €80 CPA.

The fix: Improve unit economics through better targeting or creative optimization. Alternatively, help the client rethink their pricing or improve customer lifetime value (LTV).

What to ignore: Set aside volume tactics until the economics work at the current scale.

7. Tracking or attribution

Signal: Attribution is broken, so you can’t confidently scale the campaign.

Example: You’re seeing complex multi-touch customer journeys where you can’t definitively prove PPC’s contribution.

The fix: Implement better tracking and test different tracking stacks (e.g., server-side, fingerprinting, or cookie-based). You can also update your attribution modeling or develop first-party data capabilities.

What to ignore: Avoid scaling any channel until you know what actually drives results.

Dig deeper: How to track and measure PPC campaigns

The diagnostic framework

Identifying your constraint requires methodical analysis rather than gut feeling. Here’s how to uncover what’s holding your account back.

Run an audit

Start by benchmarking critical metrics:

  • Impression share: If you’re capturing less than 50% of available impressions, your constraint is likely budget or bids preventing you from competing effectively.
  • CTR: Performance below industry benchmarks signals a creative constraint where your messaging isn’t resonating with searchers.
  • CPC: Unusually high CPCs often indicate a Quality Score constraint, which reflects poor ad relevance or landing page experience.
  • Conversion rate: If this metric lags compared to historical performance or industry standards, your constraint is the landing page.
  • Search volume: If you’ve already captured the majority of relevant searches, your constraint is inventory exhaustion.

Don’t overlook operational metrics either. Check fulfillment capacity by determining how many leads your client’s team can handle per month.

Finally, document your approved budget against what you could profitably spend. If there’s a sizable difference, budget approval is your primary constraint.
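To make the diagnosis repeatable, you can encode the checks above as a simple decision function. The threshold values and metric names below are illustrative placeholders, not universal benchmarks:

```python
def diagnose_constraint(m):
    """Return the most likely primary constraint from audit metrics.

    `m` is a dict of audit metrics; checks run roughly in the order a scaling
    journey tends to hit them. Thresholds are illustrative, not universal.
    """
    if m["approved_budget"] < 0.7 * m["profitable_spend_ceiling"]:
        return "budget"
    if m["impression_share"] >= 0.90:
        return "impression share"
    if m["ctr"] < m["benchmark_ctr"]:
        return "creative"
    if m["conversion_rate"] < m["benchmark_cvr"]:
        return "conversion rate"
    if m["leads_per_month"] > m["sales_capacity_per_month"]:
        return "fulfillment"
    if m["cpa"] > m["breakeven_cpa"]:
        return "profitability"
    return "tracking or attribution (or no obvious constraint)"

metrics = {
    "approved_budget": 10_000, "profitable_spend_ceiling": 50_000,
    "impression_share": 0.45, "ctr": 0.02, "benchmark_ctr": 0.05,
    "conversion_rate": 0.01, "benchmark_cvr": 0.05,
    "leads_per_month": 120, "sales_capacity_per_month": 300,
    "cpa": 80, "breakeven_cpa": 50,
}
print(diagnose_constraint(metrics))  # -> "budget"
```

Run it per account, then spend the week’s hours only on whatever it returns.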

Ask the critical question

With your audit complete, resist the temptation to create a prioritized list. Instead, force yourself to answer one question: “If I could only fix one of these metrics, which would unlock 10x growth?”

That single metric is your constraint. Everything else, regardless of how suboptimal it appears, is secondary until you’ve broken through this bottleneck.

Apply radical focus

Once you’ve identified your primary constraint, it’s time to change your entire approach. This is where marketers tend to fail. They acknowledge the constraint but continue hedging their efforts across multiple fronts.

Why constraints are dynamic (and why that’s good)

Understanding constraint theory means recognizing that bottlenecks shift as you scale.

Consider a typical scaling journey. In month one, you’re stuck at a €10,000 monthly budget despite profitable performance metrics.

Your constraint is budget, so leadership won’t approve more ad spend. You build the business case, secure approval, and immediately scale to €30,000 monthly spend.

Success, right? Not quite. You’ve just revealed the next constraint.

By month two, you’re capturing 95% of core keyword inventory. Your new constraint is impression share, as you’ve exhausted available traffic in your primary audience.

The fix is to expand to related terms and broader match types to bring new searchers into your funnel. This expansion takes you to €50,000 per month.

Month three presents a new challenge. Your expanded traffic converts at 2% while your original core traffic maintains 5% conversion rates. Your constraint has shifted to conversion rate.

The broader audience needs different messaging or a modified landing page experience. So, you focus exclusively on improving the post-click experience until conversion rate recovers to 4%. This lets you scale to €80,000 per month.

By month four, your sales team is drowning in 500 leads per month, which is more than they can effectively manage. Your constraint shifts from the PPC account to fulfillment capacity. The client hires additional sales staff to handle volume, and you scale to €120,000 per month.

Each new constraint is proof you’ve graduated to the next level. Many accounts never experience the problem of fulfillment constraints because they never break through the earlier barriers of budget and inventory.

Common traps to avoid when scaling PPC

The ‘optimize everything’ approach

When you try to optimize everything, you might spend:

  • 10 hours optimizing ad copy (+0.2% CTR)
  • 10 hours improving landing page (+0.5% CVR)
  • 10 hours refining bid strategy (+3% efficiency)

After investing 30 hours, you only achieve 5% account growth.

Instead, identify the primary constraint (e.g., conversion rate). Then, invest all 30 hours in landing page optimization. Continue to monitor your conversion rate.

Shiny object syndrome

Say your budget is capped by the client at €10,000. But you spend 20 hours testing Performance Max because it’s new and interesting.

After running those tests, you achieve zero scale. And your budget is still capped at €10,000.

Instead, recognize that your primary constraint is budget approval. Build a business case, secure approval, and start scaling immediately.

Analysis paralysis

If you wait for perfect Google Analytics 4 tracking before scaling, competitors may move forward with good-enough attribution.

This can mean losing six months with no scale.

Aim for 80% accurate tracking. Perfect attribution is rarely the actual constraint.

How to implement the theory of constraints in your agency or in-house team

For your next client strategy call

Don’t say: “We’ll optimize your campaigns across multiple dimensions (bidding, creative, targeting) and see what drives the best results.”

Instead, say this: “Before we optimize anything, I need to diagnose your constraint. Once I identify it, I’ll focus exclusively on fixing that bottleneck while maintaining everything else. When it’s resolved, we’ll tackle the next constraint. This is how we’ll reach your goals.”

For your team

Implement a Constraint Monday ritual. Every Monday, each account manager identifies the primary constraint for their top three accounts. The team focuses the week’s efforts on moving those specific constraints.

On Friday, review the results. Did the constraint move?

  • If yes, what’s the new constraint?
  • If not, you had the wrong diagnosis. Try again.

From tactical to strategic PPC scaling

The difference between a good PPC manager and a great one isn’t technical skill. Instead, it’s the ability to identify constraints.

Good PPC managers optimize everything and achieve incremental gains. Great PPC managers identify the one thing preventing scale and fix only that, achieving exponential gains.

When you master the theory of constraints, you stop being seen as a tactical campaign manager and start being recognized as a strategic growth partner.

You’re no longer reporting on CTR improvements and Quality Score gains. You’re diagnosing business constraints and unlocking growth that seemed impossible.

That’s the shift that transforms PPC careers and accounts.

Amanda Farley talks broken pixels and calm leadership

On episode 340 of PPC Live The Podcast, I speak to Amanda Farley, CMO of Aimclear and a multi-award-winning marketing leader who brings a mix of honesty and expertise to the PPC Live conversation. A self-described T-shaped marketer, she combines deep PPC knowledge with broad experience across social, programmatic, PR, and integrated strategy. Her journey — from owning a gallery and tattoo studio to leading award-winning global campaigns — reflects a career built on curiosity, resilience, and continuous learning.

Overcoming limiting beliefs and embracing creativity

Amanda once ran a gallery and tattoo parlor while believing she wasn’t an artist herself. Surrounded by creatives, she eventually realized her only barrier was a limiting belief. After embracing painting, she created hundreds of artworks and discovered a powerful outlet for expression.

This mindset shift mirrors marketing growth. Success isn’t just technical — it’s mental. By challenging internal doubts, marketers can unlock new skills and opportunities.

When campaign infrastructure breaks: A high-stakes lesson

Amanda recalls a global campaign where tracking infrastructure failed across every channel mid-flight. Pixels broke, data vanished, and campaigns were running blind. Multiple siloed teams and a third-party vendor slowed resolution while budgets continued to spend.

Instead of assigning blame, Amanda focused on collaboration. Her team helped rebuild tracking and uncovered deeper data architecture issues. The crisis led to stronger onboarding processes, earlier validation checks, and clearer expectations around data hygiene. In modern PPC, clean infrastructure is essential for machine learning success.

The hidden importance of PPC hygiene

Many account audits reveal the same problem: neglected fundamentals. Basic settings errors and poorly maintained audience data often hurt performance before strategy even begins.

Outdated lists and disconnected data systems weaken automation. In a machine-learning environment, strong data hygiene ensures campaigns have the quality signals they need to perform.

Why integrated marketing is no longer optional

Amanda’s background in psychology and SEO shaped her integrated approach. PPC touches landing pages, user experience, and sales processes. When conversions drop, the issue may lie outside the ad account.

Understanding the full customer journey allows marketers to diagnose problems holistically. For Amanda, integration is a practical necessity, not a buzzword.

AI, automation, and the human factor

While AI dominates industry conversations, Amanda stresses balance. Some tools are promising, but not all are ready for full deployment. Testing is essential, but human oversight remains critical.

Machines optimize patterns, but humans judge emotion, messaging, and brand fit. Marketers who study changing customer journeys can also find new opportunities to intercept audiences across channels.

Building a culture that welcomes mistakes

Amanda believes leaders act as emotional barometers. Calm investigation beats reactive blame when issues arise. Many PPC problems stem from external changes, not individual failure.

By acknowledging stress and focusing on solutions, leaders create psychological safety. This environment encourages experimentation and turns mistakes into learning opportunities.

Testing without fear in a changing landscape

Marketing is entering another experimental era with no clear rulebook. Amanda encourages teams to dedicate budget to testing and lean on professional communities for insight.

Not every experiment will succeed, but each provides data that informs smarter future decisions.

The Tasmanian Devil who practices yoga

Amanda describes her career as If the Tasmanian Devil Could Do Yoga — a blend of fast-paced chaos and intentional calm. It reflects modern marketing: demanding, unpredictable, and balanced by thoughtful leadership.


Amanda Farley shares lessons on overcoming setbacks and balancing AI with human insight in modern marketing leadership.

The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • About Us At Ideal Living, we believe everyone has a right to pure water, clean air, and a solid foundation for wellness. As the parent company of leading wellness brands AirDoctor and AquaTru, we help bring this mission to life daily through our award-winning, innovative, science-backed products. For over 25 years, Los Angeles-based Ideal Living […]
  • About US: Abacus Business Computer (abcPOS) is a New York City-based technology company specializing in comprehensive point-of-sale (POS) systems and integrated payment solutions. With over 30 years of industry expertise, abcPOS offers an all-in-one platform that combines POS systems, merchant services, and growth-focused marketing tools. Serving more than 6,000 businesses and supporting over 40,000 devices, […]
  • Responsibilities: Execute full on-page SEO optimization (titles, meta, internal linking, structure) Deliver Local SEO improvements (Google Business Profile optimization, citations) Perform technical SEO audits and implement clear action plans Conduct keyword research for competitive local markets Build and manage SEO content plans focused on ranking and leads Provide monthly reporting with measurable ranking + traffic […]
  • Job/Role Overview: We’re hiring a modern digital marketer who understands that today’s marketing is AI-assisted, data-driven, and constantly evolving. This role is ideal for a recent college graduate or early-career professional trained in today’s digital and AI-focused programs – not outdated marketing playbooks. If you actively use AI tools, enjoy testing ideas, and think in […]
  • Job Description Job Title: Graphic Design & Digital Marketing Specialist Location: Hybrid / Remote (Huntersville, NC preferred) Employment Type: Full Time About Everblue Everblue is a mission-driven company dedicated to transforming careers and improving organizational efficiency. We provide training, certifications, and technology-driven solutions for contractors, government agencies, and nonprofits. Our work modernizes outdated processes, enhances […]
  • 📌 Job Title: On-Page SEO Specialist 📅 Experience: 5+ Years ⏰ Schedule: 8 AM – 5 PM CST 💰 Compensation: $10-$15/hour (based on experience) 🏡 Fully Remote | Full-time Contract Position 🌟 Job Overview We’re looking for a seasoned On-Page SEO Specialist to optimize and enhance our website’s on-page SEO performance while driving multi-location performance […]
  • Job Description MID AMERICA GOLF AND MID AMERICA SPORTS CONSTRUCTION is a leading provider of Golf and Sports construction services and synthetic turf installations, specializing in high-quality residential and commercial projects. We pride ourselves on transforming spaces with durable, eco-friendly solutions that enhance aesthetics and functionality. We’re seeking a dynamic marketing professional to elevate our […]
  • About Us Would you like to be part of a fast-growing team that believes no one should have to succumb to viral-mediated cancers? Naveris, a commercial stage, precision oncology diagnostics company with facilities in Boston, MA and Durham, NC, is looking for a Senior Digital Marketing Associate team member to help us advance our mission […]
  • About the Role We’re looking for a data-driven Marketing Strategist to support leadership and assist with optimizing our paid and organic growth efforts. This role sits at the intersection of PPC strategy, SEO execution, and performance analysis—ideal for someone who loves turning insights into measurable results. You’ll be responsible for documenting, executing, and optimizing campaigns […]
  • Job Description Salary: $75,000-$90,000 Hanson is seeking a data-driven strategist to join our team as a Digital Marketing Strategist. This role bridges the gap between marketing strategy, analytics and technology to help ensure our clients' websites and digital tools perform at their highest potential. You'll work closely with cross-functional teams to optimize digital experiences, drive […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Summary If you are a person with a strong work ethic who wants to help grow a company, and your own personal and financial growth along with it, this may be your company. We are seeking a dynamic and creative Social Media and Marketing Specialist to lead our digital marketing efforts. This role involves developing and executing innovative social media strategies, managing […]
  • About Rock Salt Marketing Rock Salt Marketing was founded in 2023 by digital marketing experts that wanted to break from the industry norms by treating people right and providing the quality services that clients expect for honest fees. At Rock Salt Marketing, we prioritize our relationships with both clients and team members, and are committed […]
  • Type: Remote (Full-Time) Salary: Up to $1,500/month (MAX) Start: Immediate Responsibilities Launch and manage Meta Ads campaigns (Facebook/Instagram) Launch and manage Google Ads Search campaigns Build retargeting + conversion tracking systems Daily optimization focused on ROI and lead quality Manage multiple client accounts under performance expectations Weekly reporting with clear actions and next steps Requirements […]
  • Job Description At Reltio®, we believe data should fuel business success. Reltio’s AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain Data Unification, and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it’s needed, empowering data and analytics leaders with unparalleled […]
  • Job Description Paid Media Manager Location: Dallas, TX (In-Office) Compensation: $60,000–$65,000 base salary (commensurate with experience) About the Opportunity Symbiotic Services is partnering with a growing digital marketing agency to identify a Paid Media Manager for an in-office role in Dallas. This position is hands-on and execution-focused, supporting multiple client accounts while collaborating closely with […]

Other roles you may be interested in

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

Paid Search Director, Grey Matter Recruitment (Remote)

  • Salary: $130,000 – $150,000
  • Own the activation and execution of Paid Search & Shopping activity across the Google Suite
  • Support wider eCommerce, Search and Digital team on strategy and plans

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience Managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

SEM (Search Engine Marketing) Manager, Tribute Technology (Remote)

  • Salary: $85,000 – $90,000
  • PPC Campaign Management: Execute and optimize multiple Google Ad campaigns and accounts simultaneously.
  • SEO Strategy Management: Develop and manage on-page SEO strategies for client websites using tools like Ahrefs.

Search Engine Optimization Manager, Robert Half (Hybrid, Boston MA)

  • Salary: $150,000 – $160,000
  • Strategic Leadership: Define and lead the strategy for SEO, AEO, and LLMs, ensuring alignment with overall business and product goals.
  • Roadmap Execution: Develop and implement the SEO/AEO/LLM roadmap, prioritizing performance-based initiatives and driving authoritative content at scale.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Performance Max built-in A/B testing for creative assets spotted

Google is rolling out a beta feature that lets advertisers run structured A/B tests on creative assets within a single Performance Max asset group. Advertisers can split traffic between two asset sets and measure performance in a controlled experiment.

Why we care. Creative testing inside Performance Max has mostly relied on guesswork. Google’s new native A/B asset experiments bring controlled testing directly into PMax — without spinning up separate campaigns.

How it works. Advertisers choose one Performance Max campaign and asset group, then define a control asset set (existing creatives) and a treatment set (new alternatives). Shared assets can run across both versions. After setting a traffic split — such as 50/50 — the experiment runs for several weeks before advertisers apply the winning assets.

Why this helps. Running tests inside the same asset group isolates creative impact and reduces noise from structural campaign changes. The controlled split gives clearer reporting and helps teams make rollout decisions based on performance data rather than assumptions.

Early lessons. Initial testing suggests short experiments — especially under three weeks — often produce unstable results, particularly in lower-volume accounts. Longer runs and avoiding simultaneous campaign changes improve reliability.

Bottom line. Performance Max is becoming more testable. Advertisers can now validate creative decisions with built-in experiments instead of relying on trial and error.

First seen. A Google Ads expert spotted the update and shared it on LinkedIn.

Google Ads adds a diagnostics hub for data connections

Google Ads rolled out a new data source diagnostics feature in Data Manager that lets advertisers track the health of their data connections. The tool flags problems with offline conversions, CRM imports, and tagging mismatches.

How it works. A centralized dashboard assigns clear connection status labels — Excellent, Good, Needs attention, or Urgent — and surfaces actionable alerts. Advertisers can spot issues like refused credentials, formatting errors, and failed imports, alongside a run history that shows recent sync attempts and error counts.

Why we care. When conversion data breaks, campaign optimization breaks with it. Even small connection failures can quietly skew conversion tracking and weaken automated bidding. This diagnostic tool helps teams catch and fix issues early, protecting performance and reporting accuracy. If you rely on CRM imports or offline conversions, this provides a much-needed safety net.

Who benefits most. The feature is especially useful for advertisers running complex conversion pipelines, including Salesforce integrations and offline attribution setups, where small disruptions can quickly cascade into bidding and reporting issues.

The bigger picture. As automated bidding leans more heavily on accurate first-party data, visibility into data pipelines is becoming just as critical as campaign settings themselves.

Bottom line. Google Ads is giving advertisers an early warning system for data failures, helping teams fix broken connections before performance takes a hit.

First seen. The update was first spotted by digital marketer Georgi Zayakov, who shared the new option on LinkedIn.

Performance Max reporting for ecommerce: What Google is and isn’t showing you

Performance Max has come a long way since its rocky launch. Many advertisers once dismissed it as a half-baked product, but Google has spent the past 18 months fixing real issues around transparency and control. If you wrote Performance Max off before, it’s time to take another look.

Mike Ryan, head of ecommerce insights at Smarter Ecommerce, explained why at the latest SMX Next.

Taking a fresh look at Performance Max

Performance Max traces its roots to Smart Shopping campaigns, which Google rolled out with red carpet fanfare at Google Marketing Live in 2019.

Even then, industry experts warned that transparency and control would become serious issues. They were right — and only now has Google begun to address those concerns openly.

Smart Shopping marked the low point of black-box advertising in Google Ads, at least for ecommerce. It stripped away nearly every control advertisers relied on in Standard Shopping:

  • Promotional controls.
  • Modifiers.
  • Negative keywords.
  • Search terms reporting.
  • Placement reporting.
  • Channel visibility.

Over the past 18 months, Performance Max has brought most of that functionality back, either partially or in full.

Understanding Performance Max search terms

Search terms are a core signal for understanding the traffic you’re actually buying. In Performance Max, most spend typically flows to the search network, which makes search term reporting essential for meaningful optimization.

Google even introduced a Performance Max match type — something few of us ever expected to see. That’s a big deal. It delivers properly reportable data that works with the API, should be scriptable, and finally includes cost and time dimensions that were completely missing before.

Search term insights vs. campaign search term view

Google’s first move to crack open the black box was search term insights. These insights group queries into search categories — essentially prebuilt n-grams — that roll up data at a mid-level and automatically account for typos, misspellings, and variants.

The problem? The metrics are thin. There’s no cost data, which means no CPC, no ROAS, and no real way to evaluate performance.

The real breakthrough is the new campaign-level search term view, now available in both the API and the UI.

Historically, search term reporting lived at the ad group level. Since Performance Max doesn’t use ad groups, that data had nowhere to go.

Google fixed this by anchoring search terms at the campaign level instead. The result is access to far more segments and metrics — and, finally, proper reporting we can actually use.

The main limitation: this data is available only at the search network level, without separating search from shopping. That means a single search term may reflect blended performance from both formats, rather than a clean view of how each one performed.

Search theme reporting

Search themes act as a form of positive targeting in Performance Max. You can evaluate how they’re performing through the search term insights report, which includes a Source column showing whether traffic came from your URLs, your assets, or the search themes you provided.

By totaling conversion value and conversions, you can see whether your search themes are actually driving results — or just sitting idle.

There’s more good news ahead. Google appears to be working on bringing Dynamic Search Ads and AI Max reports into Performance Max. That would unlock visibility into headlines, landing pages, and the search terms triggering ads.

Search term controls and optimization

Negative keywords

Negative keywords are now fully supported in Performance Max. At launch, Google capped campaigns at 100 negatives, offered no API access, and blocked negative keyword lists—clearly positioning the feature for brand safety, not performance.

That’s changed. Negative keywords now work with the API, support shared lists, and give advertisers real control over performance.

These negatives apply across the entire search network, including both search and shopping. Brand exclusions are the exception — you can choose to apply those only to search campaigns if needed.

Brand exclusions

Performance Max doesn’t separate brand from generic traffic, and it often favors brand queries because they’re high intent and tend to perform well. Brand exclusions exist, but they can be leaky, with some brand traffic still slipping through. If you need strict control, negative keywords are the more reliable option.

Also, Performance Max — and AI Max — may aggressively bid on competitor terms. That makes brand and competitor exclusions important tools for protecting spend and shaping intent.

Optimization strategy

Here’s a simple heuristic for spotting search terms that need attention:

  • Calculate the average number of clicks it takes to generate a conversion.
  • Identify search terms with more clicks than that average but zero conversions.

Those terms have had a fair chance to perform and didn’t. They’re strong candidates for negative keywords.
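
As a rough sketch, here's what that heuristic looks like in Python, assuming you've exported the search terms report to a CSV; the file name and column names below are placeholders rather than a standard export format:

    import pandas as pd

    # Load an exported Performance Max search terms report (hypothetical file and columns).
    terms = pd.read_csv("pmax_search_terms.csv")  # columns: search_term, clicks, conversions

    # Average number of clicks it takes the account to generate one conversion.
    clicks_per_conversion = terms["clicks"].sum() / max(terms["conversions"].sum(), 1)

    # Terms with more clicks than that average but zero conversions are negative-keyword candidates.
    candidates = terms[
        (terms["clicks"] > clicks_per_conversion) & (terms["conversions"] == 0)
    ].sort_values("clicks", ascending=False)

    print(f"Average clicks per conversion: {clicks_per_conversion:.1f}")
    print(candidates[["search_term", "clicks"]].to_string(index=False))

Review the output manually before excluding anything; the caveats that follow still apply.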

That said, don’t overcorrect.

Long-tail dynamics mean a search term that doesn’t convert this month may matter next month. You’re also working with a finite set of negative keywords, so use them deliberately and prioritize the highest-impact exclusions.

Modern optimization approaches

It’s not 2018 anymore — you shouldn’t spend hours manually reviewing search terms. Automate the work instead.

Use the API for high-volume accounts, scripts for medium volume, and automated reports from the Report Editor for smaller accounts (though the Report Editor still doesn't support Performance Max search terms).

Layer in AI for semantic review to flag irrelevant terms based on meaning and intent, then step in only for final approval. Search term reporting can be tedious, but with Google’s prebuilt n-grams and modern AI tools, there’s a smarter way to handle it.

Channels and placements reporting

Channel performance report

The channel performance report — not just for Performance Max — breaks performance out by network, including Discover, Display, Gmail, and more. It’s useful for channel visibility and understanding view-through versus click-through conversions, as well as how feed-based delivery compares to asset-driven performance.

The report includes a Sankey diagram, but it isn’t especially intuitive. The labeling is confusing and takes some decoding:

  • Search Network: Feed-based equals Shopping ads; asset-based equals RSAs and DSAs.
  • Display Network: Feed-based equals dynamic remarketing; asset-based equals responsive display ads.

Google also announced that Search Partner Network data is coming, which should add another layer of useful performance visibility.

Channel and placement controls

Unlike Demand Gen, where you can choose exactly which channels to run on, Performance Max doesn’t give you that control. You can try to influence the channel mix through your ROAS target and budget, but it’s a blunt instrument — and a slippery one at best.

Placement exclusions

The strongest control you have is excluding specific placements. Placement data is now available through the API — limited to impressions and date segments — and can also be reviewed in the Report Editor. Use this data alongside the content suitability view to spot questionable domains and spammy placements.

For YouTube, pay close attention to political and children’s content. If a placement feels irrelevant or unsafe for your brand, there’s a good chance it isn’t driving meaningful performance either.

Tools for placement review

If you run into YouTube videos in languages you don’t speak, use Google Sheets’ built-in GOOGLETRANSLATE function. It’s faster and more reliable than AI for quick translation.
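
For example, assuming a video title sits in cell A2, a formula like the following returns an English translation (the cell reference is only an illustration):

    =GOOGLETRANSLATE(A2, "auto", "en")

Drag it down a column of placement titles to triage an entire export at once.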

You can also use AI-powered formulas in Sheets to do semantic triage on placements, not just search terms. These tools are just formulas, which means this kind of analysis is accessible to anyone.

Search Partner Network

Unfortunately, there’s no way to opt out of the Search Partner Network in Performance Max. You can exclude individual search partners, but there are limits.

Prioritize exclusions based on how questionable the placement looks and how much volume it’s receiving. Also note that Google-owned properties like YouTube and Gmail can’t be excluded.

Based on Standard Shopping data, the Search Partner Network consistently performs meaningfully worse than the Google Search Network. Excluding poor performers is recommended.

Device reporting and targeting

Creating a device report is easy — just add device as a segment in the “when and where ads showed” view. The tricky part is making decisions.

Device analysis

For deeper insight, dig into item-level performance in the Report Editor. Add device as a segment alongside item ID and product titles to see how individual products behave across devices. Also, compare competitor performance by device — you may spot meaningful differences that inform your strategy.

For example, you may perform far better on desktop than on mobile compared to competitors like Amazon, signaling either an opportunity or a risk.

Device targeting considerations

Device targeting is available in Performance Max and is easy to use, much like channel targeting in Demand Gen. But when you split campaigns by device, you also split your conversion data and volume—and that can hurt results.

Before you separate campaigns by device, consider:

  • How competition differs by device
  • Performance at the item and retail category level
  • The impact on overall data volume

Performance Max performs best with more data. Campaigns with low monthly conversion volume often miss their targets and rarely stay on pace. As more data flows through a campaign, Performance Max gets better at hitting goals and less likely to fall short.

Any gains from splitting by device can disappear if the algorithm doesn’t have enough data to learn. Only split when both resulting campaigns have enough volume to support effective machine learning.

Conclusion

Performance Max has changed dramatically since launch. With search term reporting, negative keywords, channel visibility, placement controls, and device targeting now available, advertisers have far more transparency and control than ever before.

It’s still not perfect — channel targeting limits and data fragmentation remain — but Performance Max is fundamentally different and far more manageable.

Success comes down to knowing what data you have, how to access it efficiently using modern tools like AI and automation, and when to apply controls based on performance insights and data volume needs.

Watch: PMax reporting for ecommerce: What Google is (and isn’t) showing you

Explore how to make smarter use of search terms, channel and placement reports, and device-level performance to improve campaign control.

Why content that ranks can still fail AI retrieval

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations. 

In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.

This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals. 

AI systems don’t. 

They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.

When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings. 

At that point, visibility in search and visibility in AI diverge. The page exists in the index, but its meaning doesn't survive retrieval.

The visibility gap: Ranking vs. retrieval

Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.

AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.

This difference is where the visibility gap forms. 

A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.

Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.

Dig deeper: What is GEO (generative engine optimization)?

Structural failure 1: When content never reaches AI

One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.

This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.

In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.

How to tell if your content is returned in the initial HTML

The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.

Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.

To do this, open a terminal (or Command Prompt on Windows) and enter the following command:
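
Here's a minimal example; the URL is a placeholder for the page you want to test, and GPTBot stands in for an AI user agent:

    curl -A "GPTBot" https://www.example.com/your-page/

The -A flag sets the user agent, so the response reflects what that crawler would be served. If your primary content isn't in the HTML this returns, a crawler that doesn't execute JavaScript won't see it.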

Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.

From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.

This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.

If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.

Why heavy code still hurts retrieval, even when content is present

Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.

AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.

This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.

What actually fixes retrieval failures

The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time. 

In practice, this can usually be achieved in one of two ways: 

  • Pre-rendering the page.
  • Ensuring clean and complete content delivery in the initial HTML response.

Pre-rendered HTML

Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.

This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.

AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.

The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion. 

This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.

From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
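
As an illustration only, here's a minimal origin-level sketch in Python (Flask) of that user-agent split; the article recommends handling this at the edge layer, and the route, file paths, and crawler list below are assumptions rather than a reference implementation:

    from flask import Flask, request, send_file

    app = Flask(__name__)

    # Hypothetical list of AI crawler user agents to serve pre-rendered HTML to.
    AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot")

    @app.route("/products/<slug>")
    def product_page(slug):
        user_agent = request.headers.get("User-Agent", "")
        if any(bot in user_agent for bot in AI_CRAWLERS):
            # Crawlers get the pre-rendered snapshot generated ahead of time.
            return send_file(f"prerendered/products/{slug}.html")
        # Humans get the regular app shell, rendered client-side as usual.
        return send_file("static/index.html")

The key point is that both versions represent the same content; the snapshot simply removes the JavaScript dependency for clients that won't execute it.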

Clean initial content delivery

Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.

Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML. 

Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.

From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether. 

These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.

Structural failure 2: When content is optimized for keywords, not entities

Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.

While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.

AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.

When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse.

The content may rank for a query, but its meaning remains ambiguous at the vector level.

This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.

Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.

Without explicit definition, entity signals weaken and associations fragment.

Structural failure 3: When structure can’t carry meaning

AI systems don’t consume content as complete pages.

Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.

Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.

Detailed header tags

Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.

Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. Weak headers produce weak signals, even when the underlying content is solid.
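
A quick hypothetical contrast, with the brand, service, and location invented for illustration:

    <!-- Vague: carries little meaning once the section is extracted on its own -->
    <h2>Why choose us?</h2>

    <!-- Entity-rich: states the who, what, and where before the body text is read -->
    <h2>Why Acme Plumbing handles 24/7 emergency pipe repair in Denver</h2>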

Dig deeper: The most important HTML tags to use for SEO success

Single-purpose sections

Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.

Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.

Structural failure 4: When conflicting signals dilute meaning

Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.

Common sources include:

Conflicting canonicals

When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions. 

The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.

Inconsistent metadata

Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.

Duplicated or lightly repeated sections

Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.

Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.

Complete visibility requires ranking and retrieval

SEO has always been about visibility, but visibility is no longer a single condition.

Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.

Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.

The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.

Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.

Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

How PR teams can measure real impact with SEO, PPC, and GEO

How to incorporate SEO and GEO into PR measurement

PR measurement often breaks down in practice.

Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.

That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.

Working together, these teams can help PR do three things that are hard to accomplish alone:

  • Show the connection between media outreach and customer action.
  • Incorporate SEO – and now generative engine optimization (GEO) – into measurement programs.
  • Select tools that match the metrics that actually matter.

This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

Digital communication isn’t linear – and measurement shouldn’t be either

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.

In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.

That’s why measurement must start by defining the response sought, not by counting outputs.

SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.

PR measurement becomes dramatically more actionable when it adopts the same mindset.

Step 1: Show the connection between media outreach and customer action

PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”

The answer usually exists in the data. It’s just spread across systems owned by different teams.

SEO and paid media teams already track:

  • Branded and non-branded search demand.
  • Landing-page behavior.
  • Conversion paths.
  • Assisted conversions across channels.

By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.

Practical examples

  • Spikes in branded search following major media placements.
  • Referral traffic from earned links and how those visitors behave compared to other sources.
  • Increases in conversions or sign-ups after coverage appears in authoritative publications.
  • Assisted conversions where media exposure precedes search or paid clicks.

Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.

This reframes PR from a cost center to a demand-creation channel.

Matt Bailey, a digital marketing author, professor, and instructor, said:

  • “The value of PR has been well-known by SEO’s for some time. A great article pickup can influence rankings almost immediately. This was the golden link – high domain popularity, ranking impact, and incoming visitors – of which PR activities were the predominate influence.”

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Step 2: Incorporate SEO into PR measurement – then go one step further with GEO

Most communications professionals now accept that SEO matters. 

What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.

Traditional PR metrics focus on:

  • Volume of coverage.
  • Share of voice.
  • Sentiment.

SEO-informed PR adds new outcome-level indicators:

  • Authority of linking domains, not just link counts.
  • Visibility for priority topics, not just brand mentions.
  • Search demand growth tied to campaigns or announcements.

These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”

Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.

Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.

For PR and communications teams, this is a natural extension of credibility building:

  • Is your organization cited by AI systems as an authoritative source?
  • Do AI-generated summaries reflect your key messages accurately?
  • Are competitors shaping the narrative instead?

Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.

The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.

David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:

  • “Real-time content creation has always been an effective way of communicating online. But now, in the age of AI-powered search, it has become even more important. The organizations that monitor continually, act decisively, and publish quickly will become the ones people turn to for clarity. And because AI tools increasingly mediate how people experience the world, those same organizations will also become the voices that artificial intelligence amplifies.”

Dig deeper: A 90-day SEO playbook for AI-driven search visibility

Step 3: Select tools based on the response sought – not on what’s fashionable

One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.

A useful framework is to work backward from the action you want audiences to take.

If the response sought is awareness or understanding:

  • Brand lift studies (from Google, Meta, and Nielsen) measure changes in awareness, favorability, and message association.
  • These tools help PR teams demonstrate impact beyond raw reach.

If the response sought is engagement or behavior:

  • Web and campaign analytics track key events such as downloads, sign-ups, or visits to priority pages.
  • User behavior tools like heatmaps and session recordings reveal whether content actually helps users accomplish tasks.

If the response sought is long-term influence:

  • SEO visibility metrics show whether coverage improves authority and topic ownership.
  • GEO tools reveal whether AI systems recognize and reuse your content.

The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.

Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said: 

  • “If PR professionals want prove their impact, they need to go beyond tracking SEO to also understand their visibility in GEO as well. Search is where today’s purchasing and other decision making starts, and we’ve known for a while that good (or bad) press coverage drives searches for a brand. Which is why we’ve been advising PR professionals who want to prove their impact on the brand to ‘bake cookies and befriend’ the SEO folks within their companies. Today as more and more people rely on AI search for their answers, the value of traditional blue SEO links is declining faster than the value of a Tesla. As a result, understanding and ultimately quantifying how and where your brand is showing up in AI search (aka GEO) is critical.”

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Why collaboration beats reinvention

PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.

What’s required is shared ownership of outcomes.

When these groups collaborate:

  • PR informs SEO about narrative priorities and upcoming campaigns.
  • SEO provides PR with data on audience demand and search behavior.
  • PPC teams validate messaging by testing what actually drives action.
  • Measurement becomes cumulative, not competitive.

This reduces duplication, saves budget, and produces insights that no single team could generate alone.

Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.

Today, tools are cheaper – or free – but the rule still holds.

The most valuable asset isn’t software. It’s professionals who can:

  • Ask the right questions.
  • Interpret data responsibly.
  • Translate insights into decisions.

Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.

Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.” 

I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Measurement isn’t about proving value – it’s about improving it

The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.

By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.

The theory hasn’t changed.

The opportunity to measure what matters is finally catching up.

Why most B2B buying decisions happen on Day 1 – and what video has to do with it

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:

  • The “viral” top-of-funnel asset that gets views but no leads.
  • The dry bottom-of-funnel product demo that gets leads but no views.

This binary thinking is breaking your pipeline.

In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.

When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving as many as 1.4x more leads.

Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.

The reality: The ‘first impression rose’

The window to influence a deal closes much earlier than most marketers realize.

LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.

Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.

If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.

That requires a three-play strategy.

Play 1: Reach and prime the ‘hidden’ buying committee

The goal: Reach the people who can say ‘no’

Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.

Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign. 

But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”

In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be bought when the entire buying group – not just the user – knows you on Day 1.

The strategic shift: Cut-through creative

To reach that broader group, you can’t just be present. You have to be memorable. You need reach and recall, both.

LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:

  • Be bold: Video ads featuring bold, distinctive colors see a 15% increase in engagement.
  • Be process-oriented: Messaging broken down into clear, visual steps drives 13% higher dwell times.
  • The “Goldilocks” length: Short videos between 7-15 seconds are the sweet spot for driving brand lift – outperforming both very short (under 6 seconds) and long-form ads.
  • The “Silent Movie” rule: Design for the eye, not the ear. 79% of LinkedIn’s audience scrolls with sound off. If your video relies on a talking head to explain the value prop in the first 5 seconds, you’ve lost 80% of the room. Use visual hooks and hard-coded captions to earn attention instantly.

Dig deeper: 5 tips to make your B2B content more human

Play 2: Educate and nudge by selling ‘buyability’

The goal: Mitigate personal and professional risk

This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).

When a B2B buyer is shortlisting vendors, they’re navigating career risk. 

Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

LinkedIn, Bain & Company - Mitigate personal and professional risk

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”

The strategic shift: Market the safety net

To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?

Momentum is safety (the “buzz” effect)

Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”

You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement. 

When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.

Authority builds trust (the “expert” effect)

If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.

Video ads featuring executive experts see 53% higher engagement.

When those experts are filmed on a conference stage, engagement lifts by 70%.

Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”

Consistency is credibility

You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.

Dig deeper: The future of B2B authority building in the AI search era

Play 3: Convert and capture by removing friction

The goal: Stop convincing, start helping

By this stage, the buyer knows you (Play 1) and trusts you (Play 2). 

Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.

Buyers at this stage feel three specific types of risk:

  • Execution risk: “Will this actually work for us?”
  • Decision risk: “What if I’m choosing wrong?”
  • Effort risk: “How much work is implementation?”

That’s why recommendations, relationships, and being relatable help close deals.

LinkedIn, Bain & Company - Number of buyability drivers influenced

The strategic shift: Answer the anxiety

Your creative should directly answer those anxieties.

Scale social proof – kill execution risk

90% of buyers say social proof is influential information. But don’t just post a logo. 

Use video to show the peer. When a buyer sees someone with their exact job title succeeding, decision risk evaporates.

Activate your employees – kill decision risk

People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.

The stat that surprises most leaders. Just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data. 

Show the humans who’ll answer the phone when things break.

The conversion combo – kill effort risk

Don’t leave them hanging with a generic “Learn More” button.

We see 3x higher lead gen open rates when video ads are combined directly with lead gen forms. 

The video explains the value, the form captures the intent instantly.

  • Short sales cycle (under 30 days): Use video and lead gen forms for speed.
  • Long sales cycle: Retarget video viewers with message ads from a thought leader. Don’t ask for a sale; start a conversation.

Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing

It’s a flywheel, not a funnel

If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.

In most organizations, “brand” teams and “demand” teams operate in silos. 

  • Brand owns the top of the funnel (Play 1). 
  • Demand owns the bottom (Play 3). 

They fight over budget and rarely coordinate creative.

This fragmentation kills the multiplier effect.

When you break down those silos and run these plays as a single system, the data changes.

Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.

It creates a flywheel:

  • Your broad reach (Play 1) builds the retargeting pools.
  • Your educational content (Play 2) warms up those audiences, lifting CTRs.
  • Your conversion offers (Play 3) capture demand from buyers who are already sold, lowering your CPL.

The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.

And the ones on that list are the ones that win the revenue.

Google & Bing don’t recommend separate markdown pages for LLMs

Representatives from both the Google Search and Bing Search teams are recommending against creating separate markdown (.md) pages for LLM purposes. The practice serves one piece of content to the LLM and another piece of content to your users, which technically may be considered a form of cloaking and a violation of Google's policies.

The question. Lily Ray asked on Bluesky:

  • “Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots.”

Google’s response. John Mueller from Google responded saying:

  • “I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

Recently, John Mueller also called the idea stupid, saying:

  • “Converting pages to markdown is such a stupid idea. Did you know LLMs can read images? WHY NOT TURN YOUR WHOLE SITE INTO AN IMAGE?” That was, of course, a jab at converting your whole site to an MD file, which is a bit extreme, to say the least.

I collected a number of John Mueller's comments on this topic over here.

Bing’s response. Fabrice Canel from Microsoft Bing responded saying:

  • “Lily: really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”

Why we care. Some of us look for shortcuts to perform well on search engines, and now on the new AI search engines and LLMs. Generally, shortcuts only work for a limited time, if they work at all, and they can have unexpected negative effects.

As Lily Ray wrote on LinkedIn:

  • “I’ve had concerns the entire time about managing duplicate content and serving different content to crawlers than to humans, which I understand might be useful for AI search but directly violates search engines’ longstanding policies about this (basically cloaking).”

Your local rankings look fine. So why are calls disappearing?

Local SEO Alligator

For many local businesses, performance looks healthier than it is.

Rank trackers still show top-three positions. Visibility reports appear steady. Yet calls and website visits from Google Business Profiles are falling — sometimes fast.

This gap is becoming a defining feature of local search today.

Rankings are holding. Visibility and performance aren’t.

The alligator has arrived in local SEO.

The visibility crisis behind stable rankings

Across multiple U.S. industries, traditional local 3-packs are being replaced — or at least supplemented — by AI-powered local packs. These layouts behave differently from the map results we’ve optimized in the past.

Analysis from Sterling Sky, based on 179 Google Business Profiles, reveals a pattern that’s hard to ignore. Clicks-to-call are dropping sharply for Jepto-managed law firms.

When AI-powered packs replace traditional listings, the landscape shifts in four critical ways:

  • Shrinking real estate: AI packs often surface only two businesses instead of three.
  • Missing call buttons: Many AI-generated summaries remove instant click-to-call options, adding friction to the customer journey.
  • Different businesses appear: The businesses shown in AI packs often don’t match those in the traditional 3-pack.
  • Accelerated monetization of local search: When paid ads are present, traditional 3-packs increasingly lose direct call and website buttons, reducing organic conversion opportunities.

A fifth issue compounds the problem:

  • Measurement blind spots: Most rank trackers don’t yet report on AI local packs. A business may rank first in a 3-pack that many users never see.

AI local packs surfaced only 32% as many unique businesses as traditional map packs in 2026, according to Sterling Sky. In 88% of the 322 markets analyzed, the total number of visible businesses declined.

At the same time, paid ads continue to take over space once reserved for organic results, signaling a clear shift toward a pay-to-play local landscape.

What Google Business Profile data shows

The same pattern appears in GMBapi.com data, especially in the U.S., where Google is aggressively testing new local formats. Traditional local 3-pack impressions are increasingly displaced by:

  • AI-powered local packs.
  • Paid placements inside traditional map packs: Sponsored listings now appear alongside or within the map pack, pushing organic results lower and stripping listings of call and website buttons. This breaks organic customer journeys.
  • Expanded Google Ads units: Including Local Services Ads that consume space once reserved for organic visibility.

Impression trends still fluctuate due to seasonality, market differences, and occasional API anomalies. But a much clearer signal emerges when you look at GBP actions rather than impressions.

Mentions inside AI-generated results are still counted as impressions — even when they no longer drive calls, clicks, or visits.

Some fluctuations are driven by external factors. For example, the June drop ties back to a known Google API issue. Mobile Maps impressions also appear heavily influenced by large advertisers ramping up Google Ads later in the year.

There’s no way to segment these impressions by Google Ads, organic results, or AI Mode.

Even where impressions hold up, however, user behavior is changing. Interaction rates are declining, with fewer direct actions taken from local listings.

Year-on-year comparisons in the U.S. suggest that while impression losses remain moderate and partially seasonal, GBP actions are disproportionately impacted.
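If you want to see this impressions-versus-actions gap in your own data, the Business Profile Performance API exposes daily metrics for both. Here is a rough Python sketch that pulls impressions and call/website clicks for one location and compares the totals. The access token and location ID are placeholders, and the exact parameter and response field names should be verified against the API documentation before relying on this.

    import requests

    # Assumptions: you already have an OAuth 2.0 access token with the
    # business.manage scope and the numeric location ID for one profile.
    # Endpoint, parameter, and field names follow the Business Profile
    # Performance API docs as I understand them; verify before relying on this.
    ACCESS_TOKEN = "ya29.replace-me"
    LOCATION_ID = "1234567890"

    URL = (
        "https://businessprofileperformance.googleapis.com/v1/"
        f"locations/{LOCATION_ID}:fetchMultiDailyMetricsTimeSeries"
    )

    params = [
        ("dailyMetrics", "BUSINESS_IMPRESSIONS_MOBILE_MAPS"),
        ("dailyMetrics", "BUSINESS_IMPRESSIONS_MOBILE_SEARCH"),
        ("dailyMetrics", "CALL_CLICKS"),
        ("dailyMetrics", "WEBSITE_CLICKS"),
        ("dailyRange.startDate.year", "2025"),
        ("dailyRange.startDate.month", "1"),
        ("dailyRange.startDate.day", "1"),
        ("dailyRange.endDate.year", "2025"),
        ("dailyRange.endDate.month", "12"),
        ("dailyRange.endDate.day", "31"),
    ]

    resp = requests.get(
        URL, params=params, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
    )
    resp.raise_for_status()

    # Sum each metric over the whole range so impressions and actions can be
    # compared side by side.
    totals = {}
    for series in resp.json().get("multiDailyMetricTimeSeries", []):
        for metric_series in series.get("dailyMetricTimeSeries", []):
            metric = metric_series["dailyMetric"]
            values = metric_series.get("timeSeries", {}).get("datedValues", [])
            totals[metric] = sum(int(v.get("value", 0)) for v in values)

    impressions = sum(v for k, v in totals.items() if k.startswith("BUSINESS_IMPRESSIONS"))
    actions = totals.get("CALL_CLICKS", 0) + totals.get("WEBSITE_CLICKS", 0)
    print(f"Impressions: {impressions}  Actions: {actions}")
    print(f"Actions per 1,000 impressions: {1000 * actions / max(impressions, 1):.1f}")

Running the same request for matching date ranges in consecutive years gives the year-on-year comparison of impressions and actions described above.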

For comparison, data from the Dutch market, where SERP experimentation remains limited, shows far more stable action trends.

The pattern is clear. AI-driven SERP changes, expanding Google Ads, and the removal of call and website buttons from the Map Pack are shrinking organic real estate. Even when visibility looks intact, businesses have fewer chances to earn real user actions.

Local SEO is becoming an eligibility problem

Historically, local optimization centered on familiar ranking factors: proximity, relevance, prominence, reviews, citations, and engagement.

Today, another layer sits above all of them: eligibility.

Many businesses fail to appear in AI-powered local results not because they lack authority, but because Google’s systems decide they aren’t an appropriate match for the specific query context. Research from Yext and insights from practitioners like Claudia Tomina highlight the importance of alignment across three core signals:

  • Business name
  • Primary category
  • Real-world services and positioning

When these fundamentals are misaligned, businesses can be excluded from entire result types — no matter how well optimized the Google Business Profile itself may be.
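As a rough illustration of what an alignment audit can look like, here is a toy Python check that compares business name and primary category across a few listings. Every listing shown is hypothetical, and a real audit would also cover services and positioning, pulled from each platform’s API or a manual review.

    # A toy consistency check for two of the three eligibility signals above.
    # All listings are hypothetical; in practice you'd pull them from the GBP
    # API, Apple Business Connect, Yelp, and other sources.
    listings = {
        "google": {"name": "Smith & Co Injury Lawyers", "category": "Personal injury attorney"},
        "apple": {"name": "Smith and Company Law", "category": "Lawyer"},
        "yelp": {"name": "Smith & Co Injury Lawyers", "category": "Personal Injury Law"},
    }

    def normalize(text: str) -> str:
        """Lowercase and strip punctuation so trivial differences don't flag."""
        return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

    def consistency(field: str) -> float:
        """Share of listings whose value for `field` matches the Google listing."""
        reference = normalize(listings["google"][field])
        matches = sum(normalize(listing[field]) == reference for listing in listings.values())
        return matches / len(listings)

    for field in ("name", "category"):
        print(f"{field}: {consistency(field):.0%} consistent with the GBP listing")

Low consistency on either signal is the kind of misalignment that can keep a business out of AI-powered local results, regardless of how polished the profile itself is.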

How to future-proof local visibility

Surviving today’s zero-click reality means moving beyond reliance on a single, perfectly optimized Google Business Profile. Here’s your new local SEO playbook.

The eligibility gatekeeper

Failure to appear in local packs is now driven more by perceived relevance and classification than by links or review volume.

Hyper-local entity authority

AI systems cross-reference Reddit, social platforms, forums, and local directories to judge whether a business is legitimate and active. Inconsistent signals across these ecosystems quietly erode visibility.

Visual trust signals

High-quality, frequently updated photos (and, increasingly, video) are no longer optional. Google’s AI analyzes visual content to infer services, intent, and categorization.

Embrace the pay-to-play reality

It’s a hard truth, but Google Ads — especially Local Services Ads — are now critical to retaining prominent call buttons that organic listings are losing. A hybrid strategy that blends local SEO with paid search isn’t optional. It’s the baseline.

What this means for local search now

Local SEO is no longer a static directory exercise. Google Business Profiles still anchor local discoverability, but they now operate inside a much broader ecosystem shaped by AI validation, constant SERP experimentation, and Google’s accelerating push to monetize local search.

Discovery no longer hinges on where your GBP ranks against nearby competitors. Search systems — including Google’s AI-driven SERP features and large language models like ChatGPT and Gemini — are increasingly trying to understand what a business actually does, not just where it’s listed.

Success is no longer about being the most “optimized” profile. It’s about being widely verified, consistently active, and contextually relevant across the AI-visible ecosystem.

Our observations show little correlation between businesses that rank well in the traditional Map Pack and those favored by Google’s AI-generated local answers that are beginning to replace it. That gap creates a real opportunity for businesses willing to adapt.

In practice, this means pairing local input with central oversight.

Authentic engagement across multiple platforms, locally differentiated content, and real community signals must coexist with brand governance, data consistency, and operational scale. For single-location businesses with deep community roots, this is an advantage. Being genuinely discussed, recommended, and referenced in your local area — online and offline — gets you halfway there.

For agencies and multi-location brands, the challenge is to balance control with local nuance and ensure trusted signals extend beyond Google (e.g., Apple Maps, Tripadvisor, Yelp, Reddit, and other relevant review ecosystems). The real test is producing locally relevant content and citations at scale without losing authenticity.

Rankings may look stable. But performance increasingly lives somewhere else.

The full data. Local SEO in 2026: Why Your Rankings are Steady but Your Calls are Vanishing
