
Yesterday — 7 February 2026 — Search Engine Land

Amanda Farley talks broken pixels and calm leadership

7 February 2026 at 06:27

On episode 340 of PPC Live The Podcast, I speak to Amanda Farley, CMO of Aimclear and a multi-award-winning marketing leader who brings a mix of honesty and expertise to the PPC Live conversation. A self-described T-shaped marketer, she combines deep PPC knowledge with broad experience across social, programmatic, PR, and integrated strategy. Her journey — from owning a gallery and tattoo studio to leading award-winning global campaigns — reflects a career built on curiosity, resilience, and continuous learning.

Overcoming limiting beliefs and embracing creativity

Amanda once ran a gallery and tattoo parlor while believing she wasn’t an artist herself. Surrounded by creatives, she eventually realized her only barrier was a limiting belief. After embracing painting, she created hundreds of artworks and discovered a powerful outlet for expression.

This mindset shift mirrors marketing growth. Success isn’t just technical — it’s mental. By challenging internal doubts, marketers can unlock new skills and opportunities.

When campaign infrastructure breaks: A high-stakes lesson

Amanda recalls a global campaign where tracking infrastructure failed across every channel mid-flight. Pixels broke, data vanished, and campaigns were running blind. Multiple siloed teams and a third-party vendor slowed resolution while budgets continued to spend.

Instead of assigning blame, Amanda focused on collaboration. Her team helped rebuild tracking and uncovered deeper data architecture issues. The crisis led to stronger onboarding processes, earlier validation checks, and clearer expectations around data hygiene. In modern PPC, clean infrastructure is essential for machine learning success.

The hidden importance of PPC hygiene

Many account audits reveal the same problem: neglected fundamentals. Basic settings errors and poorly maintained audience data often hurt performance before strategy even begins.

Outdated lists and disconnected data systems weaken automation. In a machine-learning environment, strong data hygiene ensures campaigns have the quality signals they need to perform.

Why integrated marketing is no longer optional

Amanda’s background in psychology and SEO shaped her integrated approach. PPC touches landing pages, user experience, and sales processes. When conversions drop, the issue may lie outside the ad account.

Understanding the full customer journey allows marketers to diagnose problems holistically. For Amanda, integration is a practical necessity, not a buzzword.

AI, automation, and the human factor

While AI dominates industry conversations, Amanda stresses balance. Some tools are promising, but not all are ready for full deployment. Testing is essential, but human oversight remains critical.

Machines optimize patterns, but humans judge emotion, messaging, and brand fit. Marketers who study changing customer journeys can also find new opportunities to intercept audiences across channels.

Building a culture that welcomes mistakes

Amanda believes leaders act as emotional barometers. Calm investigation beats reactive blame when issues arise. Many PPC problems stem from external changes, not individual failure.

By acknowledging stress and focusing on solutions, leaders create psychological safety. This environment encourages experimentation and turns mistakes into learning opportunities.

Testing without fear in a changing landscape

Marketing is entering another experimental era with no clear rulebook. Amanda encourages teams to dedicate budget to testing and lean on professional communities for insight.

Not every experiment will succeed, but each provides data that informs smarter future decisions.

The Tasmanian devil who practices yoga

Amanda describes her career as “If the Tasmanian Devil Could Do Yoga” — a blend of fast-paced chaos and intentional calm. It reflects modern marketing: demanding, unpredictable, and balanced by thoughtful leadership.

Amanda Farley shares lessons on overcoming setbacks and balancing AI with human insight in modern marketing leadership.

The latest jobs in search marketing

7 February 2026 at 00:02

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO jobs

(Provided to Search Engine Land by SEOjobs.com)

  • About the Role We’re looking for a data-driven Marketing Strategist to support leadership and assist with optimizing our paid and organic growth efforts. This role sits at the intersection of PPC strategy, SEO execution, and performance analysis—ideal for someone who loves turning insights into measurable results. You’ll be responsible for documenting, executing, and optimizing campaigns […]
  • Job Description Salary: $75,000-$90,000 Hanson is seeking a data-driven strategist to join our team as a Digital Marketing Strategist. This role bridges the gap between marketing strategy, analytics and technology to help ensure our clients’ websites and digital tools perform at their highest potential. You’ll work closely with cross-functional teams to optimize digital experiences, drive […]
  • Join Aya Healthcare, winner of multiple Top Workplace awards! We’re seeking a motivated SEO Strategist to join our fast-paced marketing team and help drive organic growth across multiple healthcare brands and websites under the Aya Healthcare umbrella. This role offers an exceptional opportunity to gain comprehensive corporate SEO experience while working alongside industry-leading professionals. Reporting […]
  • Who We Are With a legacy spanning four decades, Action Property Management has become the premier choice for homeowner’s association management. Founded in 1984, Action began with a single client and a vision to elevate ethical and professional standards in the HOA industry. Our unwavering commitment to integrity, and professionalism coupled with our core values […]
  • Job Description PLUS Incentive & Rich Benefit Plan Position Summary The Digital Marketing Manager is a key role responsible for the strategy, execution, and optimization of Olympic Hot Tub’s digital marketing efforts. You will work closely with the Company President and external partners to develop and manage cohesive digital campaigns that drive qualified traffic, generate […]
  • Job Description At VAL-CO we work together as a global leader in providing innovative, value-focused products and services to the poultry, livestock and horticultural industries. We believe in all that we do by valuing people, integrity, quality, profitability, and stewardship. VAL-CO recognizes the importance and value of our employees and their families, and our customers […]
  • POSITION DESCRIPTION Position: Website Content Manager Department: Office of Communications and Public Relations Reports To: Executive Director of Communications and Public Relations Classification: Exempt General Description The Website Content Manager develops, maintains, and optimizes archdiocesan websites and content to shape our online presence and ensure they align with and support the mission and priorities of […]
  • JobType: Full-Time (Exempt) Salary: $62,000 – $67,000 The Performance Marketing Specialist is responsible for optimizing QuaverEd’s website experiences to drive lead generation, trial conversion, and overall marketing performance. This role combines analytical insight, SEO strategy, and conversion rate optimization to improve how users discover, engage with, and move through QuaverEd’s digital funnel. Working closely with […]
  • Join our Team – Come for a Job Stay for a Career! Wearwell is a global industry leader in the anti-fatigue matting market. Our team members are more than just another number – they are family. As our business grows, so must we. We are seeking a Digital Marketing and E-Commerce Specialist to join our […]
  • We are looking for an experienced Senior SEO Specialist to lead advanced SEO strategy development, oversee multiple client projects, and drive measurable results in organic performance. This is a leadership-oriented position for a professional who combines deep technical expertise, strong analytical thinking, and strategic vision. As a Senior SEO Specialist, you’ll take ownership of SEO processes from comprehensive audits to keyword strategy, content architecture, and reporting while mentoring […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

Other roles you may be interested in

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

Paid Search Director, Grey Matter Recruitment (Remote)

  • Salary: $130,000 – $150,000
  • Own the activation and execution of Paid Search & Shopping activity across the Google Suite
  • Support wider eCommerce, Search and Digital team on strategy and plans

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

SEM (Search Engine Marketing) Manager, Tribute Technology (Remote)

  • Salary: $85,000 – $90,000
  • PPC Campaign Management: Execute and optimize multiple Google Ad campaigns and accounts simultaneously.
  • SEO Strategy Management: Develop and manage on-page SEO strategies for client websites using tools like Ahrefs.

Search Engine Optimization Manager, Robert Half (Hybrid, Boston, MA)

  • Salary: $150,000 – $160,000
  • Strategic Leadership: Define and lead the strategy for SEO, AEO, and LLMs, ensuring alignment with overall business and product goals.
  • Roadmap Execution: Develop and implement the SEO/AEO/LLM roadmap, prioritizing performance-based initiatives and driving authoritative content at scale.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Performance Max built-in A/B testing for creative assets spotted

6 February 2026 at 23:29

Google is rolling out a beta feature that lets advertisers run structured A/B tests on creative assets within a single Performance Max asset group. Advertisers can split traffic between two asset sets and measure performance in a controlled experiment.

Why we care. Creative testing inside Performance Max has mostly relied on guesswork. Google’s new native A/B asset experiments bring controlled testing directly into PMax — without spinning up separate campaigns.

How it works. Advertisers choose one Performance Max campaign and asset group, then define a control asset set (existing creatives) and a treatment set (new alternatives). Shared assets can run across both versions. After setting a traffic split — such as 50/50 — the experiment runs for several weeks before advertisers apply the winning assets.

Why this helps. Running tests inside the same asset group isolates creative impact and reduces noise from structural campaign changes. The controlled split gives clearer reporting and helps teams make rollout decisions based on performance data rather than assumptions.

Early lessons. Initial testing suggests short experiments — especially under three weeks — often produce unstable results, particularly in lower-volume accounts. Longer runs and avoiding simultaneous campaign changes improve reliability.
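
To see why volume matters, here’s a back-of-the-envelope significance check in Python. All numbers are invented for illustration, and Google’s experiment reporting does its own statistics; treat this as a sanity-check sketch, not the product’s method.

    # Two-proportion z-test on invented A/B asset experiment data.
    from math import sqrt

    control_clicks, control_convs = 4800, 96        # assumed control arm
    treatment_clicks, treatment_convs = 4700, 122   # assumed treatment arm

    p1 = control_convs / control_clicks
    p2 = treatment_convs / treatment_clicks
    pooled = (control_convs + treatment_convs) / (control_clicks + treatment_clicks)
    z = (p2 - p1) / sqrt(pooled * (1 - pooled) * (1 / control_clicks + 1 / treatment_clicks))
    print(f"control CVR {p1:.2%}, treatment CVR {p2:.2%}, z = {z:.2f}")
    # |z| > 1.96 is roughly the 95% confidence bar. Even with ~9,500 clicks,
    # this example lands just under it (z of about 1.9), which is why short
    # tests in lower-volume accounts often look unstable.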

Bottom line. Performance Max is becoming more testable. Advertisers can now validate creative decisions with built-in experiments instead of relying on trial and error.

First seen. A Google Ads expert spotted the update and shared his view on LinkedIn.

Before yesterday — Search Engine Land

Google Ads adds a diagnostics hub for data connections

6 February 2026 at 22:52

Google Ads rolled out a new data source diagnostics feature in Data Manager that lets advertisers track the health of their data connections. The tool flags problems with offline conversions, CRM imports, and tagging mismatches.

How it works. A centralized dashboard assigns clear connection status labels — Excellent, Good, Needs attention, or Urgent — and surfaces actionable alerts. Advertisers can spot issues like refused credentials, formatting errors, and failed imports, alongside a run history that shows recent sync attempts and error counts.

Why we care. When conversion data breaks, campaign optimization breaks with it. Even small connection failures can quietly skew conversion tracking and weaken automated bidding. This diagnostic tool helps teams catch and fix issues early, protecting performance and reporting accuracy. If you rely on CRM imports or offline conversions, this provides a much-needed safety net.

Who benefits most. The feature is especially useful for advertisers running complex conversion pipelines, including Salesforce integrations and offline attribution setups, where small disruptions can quickly cascade into bidding and reporting issues.

The bigger picture. As automated bidding leans more heavily on accurate first-party data, visibility into data pipelines is becoming just as critical as campaign settings themselves.

Bottom line. Google Ads is giving advertisers an early warning system for data failures, helping teams fix broken connections before performance takes a hit.

First seen. The update was first spotted by digital marketer Georgi Zayakov, who shared the new option on LinkedIn.

Performance Max reporting for ecommerce: What Google is and isn’t showing you

6 February 2026 at 22:13

Performance Max has come a long way since its rocky launch. Many advertisers once dismissed it as a half-baked product, but Google has spent the past 18 months fixing real issues around transparency and control. If you wrote Performance Max off before, it’s time to take another look.

Mike Ryan, head of ecommerce insights at Smarter Ecommerce, explained why at the latest SMX Next.

Taking a fresh look at Performance Max

Performance Max traces its roots to Smart Shopping campaigns, which Google rolled out with red carpet fanfare at Google Marketing Live in 2019.

Even then, industry experts warned that transparency and control would become serious issues. They were right — and only now has Google begun to address those concerns openly.

Smart Shopping marked the low point of black-box advertising in Google Ads, at least for ecommerce. It stripped away nearly every control advertisers relied on in Standard Shopping:

  • Promotional controls.
  • Modifiers.
  • Negative keywords.
  • Search terms reporting.
  • Placement reporting.
  • Channel visibility.

Over the past 18 months, Performance Max has brought most of that functionality back, either partially or in full.

Understanding Performance Max search terms

Search terms are a core signal for understanding the traffic you’re actually buying. In Performance Max, most spend typically flows to the search network, which makes search term reporting essential for meaningful optimization.

Google even introduced a Performance Max match type — something few of us ever expected to see. That’s a big deal. It delivers properly reportable data that works with the API, should be scriptable, and finally includes cost and time dimensions that were completely missing before.

Search term insights vs. campaign search term view

Google’s first move to crack open the black box was search term insights. These insights group queries into search categories — essentially prebuilt n-grams — that roll up data at a mid-level and automatically account for typos, misspellings, and variants.

The problem? The metrics are thin. There’s no cost data, which means no CPC, no ROAS, and no real way to evaluate performance.

The real breakthrough is the new campaign-level search term view, now available in both the API and the UI.

Historically, search term reporting lived at the ad group level. Since Performance Max doesn’t use ad groups, that data had nowhere to go.

Google fixed this by anchoring search terms at the campaign level instead. The result is access to far more segments and metrics — and, finally, proper reporting we can actually use.

The main limitation: this data is available only at the search network level, without separating search from shopping. That means a single search term may reflect blended performance from both formats, rather than a clean view of how each one performed.

Search theme reporting

Search themes act as a form of positive targeting in Performance Max. You can evaluate how they’re performing through the search term insights report, which includes a Source column showing whether traffic came from your URLs, your assets, or the search themes you provided.

By totaling conversion value and conversions, you can see whether your search themes are actually driving results — or just sitting idle.

There’s more good news ahead. Google appears to be working on bringing Dynamic Search Ads and AI Max reports into Performance Max. That would unlock visibility into headlines, landing pages, and the search terms triggering ads.

Search term controls and optimization

Negative keywords

Negative keywords are now fully supported in Performance Max. At launch, Google capped campaigns at 100 negatives, offered no API access, and blocked negative keyword lists—clearly positioning the feature for brand safety, not performance.

That’s changed. Negative keywords now work with the API, support shared lists, and give advertisers real control over performance.

These negatives apply across the entire search network, including both search and shopping. Brand exclusions are the exception — you can choose to apply those only to search campaigns if needed.

Brand exclusions

Performance Max doesn’t separate brand from generic traffic, and it often favors brand queries because they’re high intent and tend to perform well. Brand exclusions exist, but they can be leaky, with some brand traffic still slipping through. If you need strict control, negative keywords are the more reliable option.

Also, Performance Max — and AI Max — may aggressively bid on competitor terms. That makes brand and competitor exclusions important tools for protecting spend and shaping intent.

Optimization strategy

Here’s a simple heuristic for spotting search terms that need attention:

  • Calculate the average number of clicks it takes to generate a conversion.
  • Identify search terms with more clicks than that average but zero conversions.

Those terms have had a fair chance to perform and didn’t. They’re strong candidates for negative keywords.
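
The heuristic takes only a few lines of code. Here’s a minimal sketch in Python, with invented rows standing in for an export from the campaign-level search term view:

    # Flag zero-conversion search terms that have received more clicks than
    # the account's average clicks-per-conversion.
    terms = [
        {"term": "red running shoes", "clicks": 120, "conversions": 4},
        {"term": "free shoes", "clicks": 45, "conversions": 0},
        {"term": "shoe repair near me", "clicks": 8, "conversions": 0},
    ]

    total_clicks = sum(t["clicks"] for t in terms)
    total_convs = sum(t["conversions"] for t in terms)
    avg_clicks_per_conv = total_clicks / max(total_convs, 1)

    candidates = [
        t["term"]
        for t in terms
        if t["conversions"] == 0 and t["clicks"] > avg_clicks_per_conv
    ]
    print(candidates)  # ['free shoes'] -- review before adding as negatives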

That said, don’t overcorrect.

Long-tail dynamics mean a search term that doesn’t convert this month may matter next month. You’re also working with a finite set of negative keywords, so use them deliberately and prioritize the highest-impact exclusions.

Modern optimization approaches

It’s not 2018 anymore — you shouldn’t spend hours manually reviewing search terms. Automate the work instead.

Use the API for high-volume accounts, scripts for medium volume, and automated reports from the Report Editor for smaller accounts (though it still doesn’t support Performance Max).

Layer in AI for semantic review to flag irrelevant terms based on meaning and intent, then step in only for final approval. Search term reporting can be tedious, but with Google’s prebuilt n-grams and modern AI tools, there’s a smarter way to handle it.
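
One possible shape for that AI-assisted triage step, sketched in Python with the OpenAI SDK. The model name, prompt, and offer description are all assumptions here; any LLM API would work, and a human should still approve the output:

    # Ask a model to flag low-intent search terms; review before acting.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    terms = ["free shoes", "running shoe store", "how to draw shoes"]
    prompt = (
        "We sell running shoes online. For each search term below, reply "
        "'keep' or 'exclude' based on purchase intent, one per line:\n"
        + "\n".join(terms)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)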

Channels and placements reporting

Channel performance report

The channel performance report — not just for Performance Max — breaks performance out by network, including Discover, Display, Gmail, and more. It’s useful for channel visibility and understanding view-through versus click-through conversions, as well as how feed-based delivery compares to asset-driven performance.

The report includes a Sankey diagram, but it isn’t especially intuitive. The labeling is confusing and takes some decoding:

  • Search Network: Feed-based equals Shopping ads; asset-based equals RSAs and DSAs.
  • Display Network: Feed-based equals dynamic remarketing; asset-based equals responsive display ads.

Google also announced that Search Partner Network data is coming, which should add another layer of useful performance visibility.

Channel and placement controls

Unlike Demand Gen, where you can choose exactly which channels to run on, Performance Max doesn’t give you that control. You can try to influence the channel mix through your ROAS target and budget, but it’s a blunt instrument — and a slippery one at best.

Placement exclusions

The strongest control you have is excluding specific placements. Placement data is now available through the API — limited to impressions and date segments — and can also be reviewed in the Report Editor. Use this data alongside the content suitability view to spot questionable domains and spammy placements.

For YouTube, pay close attention to political and children’s content. If a placement feels irrelevant or unsafe for your brand, there’s a good chance it isn’t driving meaningful performance either.

Tools for placement review

If you run into YouTube videos in languages you don’t speak, use Google Sheets’ built-in GOOGLETRANSLATE function. It’s faster and more reliable than AI for quick translation.
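
For example, assuming the video or placement names sit in column A of your sheet, a formula like this translates them to English ("auto" lets Sheets detect the source language):

    =GOOGLETRANSLATE(A2, "auto", "en")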

You can also use AI-powered formulas in Sheets to do semantic triage on placements, not just search terms. These tools are just formulas, which means this kind of analysis is accessible to anyone.

Search Partner Network

Unfortunately, there’s no way to opt out of the Search Partner Network in Performance Max. You can exclude individual search partners, but there are limits.

Prioritize exclusions based on how questionable the placement looks and how much volume it’s receiving. Also note that Google-owned properties like YouTube and Gmail can’t be excluded.

Based on Standard Shopping data, the Search Partner Network consistently performs meaningfully worse than the Google Search Network. Excluding poor performers is recommended.

Device reporting and targeting

Creating a device report is easy — just add device as a segment in the “when and where ads showed” view. The tricky part is making decisions.

Device analysis

For deeper insight, dig into item-level performance in the Report Editor. Add device as a segment alongside item ID and product titles to see how individual products behave across devices. Also, compare competitor performance by device — you may spot meaningful differences that inform your strategy.

For example, you may perform far better on desktop than on mobile compared to competitors like Amazon, signaling either an opportunity or a risk.

Device targeting considerations

Device targeting is available in Performance Max and is easy to use, much like channel targeting in Demand Gen. But when you split campaigns by device, you also split your conversion data and volume—and that can hurt results.

Before you separate campaigns by device, consider:

  • How competition differs by device
  • Performance at the item and retail category level
  • The impact on overall data volume

Performance Max performs best with more data. Campaigns with low monthly conversion volume often miss their targets and rarely stay on pace. As more data flows through a campaign, Performance Max gets better at hitting goals and less likely to fall short.

Any gains from splitting by device can disappear if the algorithm doesn’t have enough data to learn. Only split when both resulting campaigns have enough volume to support effective machine learning.

Conclusion

Performance Max has changed dramatically since launch. With search term reporting, negative keywords, channel visibility, placement controls, and device targeting now available, advertisers have far more transparency and control than ever before.

It’s still not perfect — channel targeting limits and data fragmentation remain — but Performance Max is fundamentally different and far more manageable.

Success comes down to knowing what data you have, how to access it efficiently using modern tools like AI and automation, and when to apply controls based on performance insights and data volume needs.

Watch: PMax reporting for ecommerce: What Google is (and isn’t) showing you

Explore how to make smarter use of search terms, channel and placement reports, and device-level performance to improve campaign control.

Why content that ranks can still fail AI retrieval

6 February 2026 at 19:00

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations. 

In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.

This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals. 

AI systems don’t. 

They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.

When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings. 

At that point, visibility in search and visibility in AI diverges. The page exists in the index, but its meaning doesn’t survive retrieval.

The visibility gap: Ranking vs. retrieval

Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.

AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.

This difference is where the visibility gap forms. 

A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.

Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.

Dig deeper: What is GEO (generative engine optimization)?

Structural failure 1: When content never reaches AI

One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.

This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.

In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.

How to tell if your content is returned in the initial HTML

The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.

Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.

To do this, open a terminal (or Command Prompt on Windows) and run a curl request against the page.
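
A minimal version looks like this, using example.com as a stand-in for your own URL; the -A flag sets the User-Agent header so you can fetch as a specific crawler:

    curl https://www.example.com/

    curl -A "GPTBot" https://www.example.com/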

Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.

From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.

This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.

If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.

Why heavy code still hurts retrieval, even when content is present

Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.

AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.

This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.

What actually fixes retrieval failures

The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time. 

In practice, this can usually be achieved in one of two ways: 

  • Pre-rendering the page.
  • Ensuring clean and complete content delivery in the initial HTML response.

Pre-rendered HTML

Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.

This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.

AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.

The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion. 

This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.

From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
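
As an illustration, here’s a minimal sketch of that routing logic in Python. The user-agent tokens are assumptions, and the two helpers are stand-ins for a real snapshot store and dynamic renderer; actual edge platforms each have their own APIs:

    # Serve pre-rendered HTML to known AI crawlers, the dynamic app to humans.
    AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")  # assumed list

    def serve_snapshot(path: str) -> str:
        # Stand-in for fetching the snapshot generated at build/deploy time.
        return f"<html><body>Pre-rendered content for {path}</body></html>"

    def serve_dynamic(path: str) -> str:
        # Stand-in for the JavaScript-driven shell served to human visitors.
        return f"<html><div id='app'></div><script src='/app.js'></script></html>"

    def handle_request(path: str, user_agent: str) -> str:
        if any(token in user_agent for token in AI_CRAWLER_TOKENS):
            return serve_snapshot(path)  # complete HTML, no JS execution needed
        return serve_dynamic(path)

    print(handle_request("/pricing", "Mozilla/5.0 (compatible; GPTBot/1.1)"))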

Clean initial content delivery

Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.

Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML. 

Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.

From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether. 

These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.

Structural failure 2: When content is optimized for keywords, not entities

Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.

While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.

AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.

When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse. The content may rank for a query, but its meaning remains ambiguous at the vector level.

This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.

Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.

Without explicit definition, entity signals weaken and associations fragment.
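
To make that concrete, here’s a toy illustration in Python. The bag-of-words “embedding” is a crude stand-in for the learned vector models real systems use, and the sections and query are invented, but the mechanics are the same: fragments are matched independently, and vague language barely registers:

    # Toy fragment-level retrieval using bag-of-words cosine similarity.
    from collections import Counter
    from math import sqrt

    def embed(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    sections = {
        "hero": "We help teams move faster",                       # vague
        "features": "Acme CRM syncs Salesforce contacts nightly",  # explicit entities
    }
    query = embed("does acme crm integrate with salesforce")
    for name, text in sections.items():
        print(name, round(cosine(query, embed(text)), 2))
    # hero 0.0, features 0.5 -- the entity-rich fragment wins retrieval.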

Structural failure 3: When structure can’t carry meaning

AI systems don’t consume content as complete pages.

Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.

Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.

Detailed header tags

Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.

Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. Weak headers produce weak signals, even when the underlying content is solid.

Dig deeper: The most important HTML tags to use for SEO success

Single-purpose sections

Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.

Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.

Structural failure 4: When conflicting signals dilute meaning

Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.

Common sources include:

Conflicting canonicals

When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions. 

The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.

Inconsistent metadata

Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.

Duplicated or lightly repeated sections

Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.

Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.

Complete visibility requires ranking and retrieval

SEO has always been about visibility, but visibility is no longer a single condition.

Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.

Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.

The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.

Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.

Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

How PR teams can measure real impact with SEO, PPC, and GEO

6 February 2026 at 18:00

PR measurement often breaks down in practice.

Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.

That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.

Working together, these teams can help PR do three things that are hard to accomplish alone:

  • Show the connection between media outreach and customer action.
  • Incorporate SEO – and now generative engine optimization (GEO) – into measurement programs.
  • Select tools that match the metrics that actually matter.

This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

Digital communication isn’t linear – and measurement shouldn’t be either

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.

In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.

That’s why measurement must start by defining the response sought, not by counting outputs.

SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.

PR measurement becomes dramatically more actionable when it adopts the same mindset.

Step 1: Show the connection between media outreach and customer action

PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”

The answer usually exists in the data. It’s just spread across systems owned by different teams.

SEO and paid media teams already track:

  • Branded and non-branded search demand.
  • Landing-page behavior.
  • Conversion paths.
  • Assisted conversions across channels.

By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.

Practical examples

  • Spikes in branded search following major media placements.
  • Referral traffic from earned links and how those visitors behave compared to other sources.
  • Increases in conversions or sign-ups after coverage appears in authoritative publications.
  • Assisted conversions where media exposure precedes search or paid clicks.

Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.

This reframes PR from a cost center to a demand-creation channel.

Matt Bailey, a digital marketing author, professor, and instructor, said:

  • “The value of PR has been well-known by SEOs for some time. A great article pickup can influence rankings almost immediately. This was the golden link – high domain popularity, ranking impact, and incoming visitors – of which PR activities were the predominant influence.”

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Step 2: Incorporate SEO into PR measurement – then go one step further with GEO

Most communications professionals now accept that SEO matters. 

What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.

Traditional PR metrics focus on:

  • Volume of coverage.
  • Share of voice.
  • Sentiment.

SEO-informed PR adds new outcome-level indicators:

  • Authority of linking domains, not just link counts.
  • Visibility for priority topics, not just brand mentions.
  • Search demand growth tied to campaigns or announcements.

These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”

Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.

Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.

For PR and communications teams, this is a natural extension of credibility building:

  • Is your organization cited by AI systems as an authoritative source?
  • Do AI-generated summaries reflect your key messages accurately?
  • Are competitors shaping the narrative instead?

Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.

The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.

David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:

  • “Real-time content creation has always been an effective way of communicating online. But now, in the age of AI-powered search, it has become even more important. The organizations that monitor continually, act decisively, and publish quickly will become the ones people turn to for clarity. And because AI tools increasingly mediate how people experience the world, those same organizations will also become the voices that artificial intelligence amplifies.”

Dig deeper: A 90-day SEO playbook for AI-driven search visibility

Step 3: Select tools based on the response sought – not on what’s fashionable

One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.

A useful framework is to work backward from the action you want audiences to take.

If the response sought is awareness or understanding:

  • Brand lift studies (from Google, Meta, and Nielsen) measure changes in awareness, favorability, and message association.
  • These tools help PR teams demonstrate impact beyond raw reach.

If the response sought is engagement or behavior:

  • Web and campaign analytics track key events such as downloads, sign-ups, or visits to priority pages.
  • User behavior tools like heatmaps and session recordings reveal whether content actually helps users accomplish tasks.

If the response sought is long-term influence:

  • SEO visibility metrics show whether coverage improves authority and topic ownership.
  • GEO tools reveal whether AI systems recognize and reuse your content.

The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.

Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said: 

  • “If PR professionals want to prove their impact, they need to go beyond tracking SEO to also understand their visibility in GEO as well. Search is where today’s purchasing and other decision making starts, and we’ve known for a while that good (or bad) press coverage drives searches for a brand. Which is why we’ve been advising PR professionals who want to prove their impact on the brand to ‘bake cookies and befriend’ the SEO folks within their companies. Today as more and more people rely on AI search for their answers, the value of traditional blue SEO links is declining faster than the value of a Tesla. As a result, understanding and ultimately quantifying how and where your brand is showing up in AI search (aka GEO) is critical.”

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Why collaboration beats reinvention

PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.

What’s required is shared ownership of outcomes.

When these groups collaborate:

  • PR informs SEO about narrative priorities and upcoming campaigns.
  • SEO provides PR with data on audience demand and search behavior.
  • PPC teams validate messaging by testing what actually drives action.
  • Measurement becomes cumulative, not competitive.

This reduces duplication, saves budget, and produces insights that no single team could generate alone.

Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.

Today, tools are cheaper – or free – but the rule still holds.

The most valuable asset isn’t software. It’s professionals who can:

  • Ask the right questions.
  • Interpret data responsibly.
  • Translate insights into decisions.

Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.

Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.” 

I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Measurement isn’t about proving value – it’s about improving it

The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.

By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.

The theory hasn’t changed.

The opportunity to measure what matters is finally catching up.

Why most B2B buying decisions happen on Day 1 – and what video has to do with it

6 February 2026 at 17:00

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:

  • The “viral” top-of-funnel asset that gets views but no leads.
  • The dry bottom-of-funnel product demo that gets leads but no views.

This binary thinking is breaking your pipeline.

In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.

When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving as many as 1.4x more leads.

Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.

The reality: The ‘first impression rose’

The window to influence a deal closes much earlier than most marketers realize.

LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.

Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.

If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.

That requires a three-play strategy.

Play 1: Reach and prime the ‘hidden’ buying committee

The goal: Reach the people who can say ‘no’

Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.

Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign. 

But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”

In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be bought when the entire buying group – not just the user – knows you on Day 1.

The strategic shift: Cut-through creative

To reach that broader group, you can’t just be present. You have to be memorable. You need reach and recall, both.

LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:

  • Be bold: Video ads featuring bold, distinctive colors see a 15% increase in engagement.
  • Be process-oriented: Messaging broken down into clear, visual steps drives 13% higher dwell times.
  • The “Goldilocks” length: Short videos between 7-15 seconds are the sweet spot for driving brand lift – outperforming both very short (under 6 seconds) and long-form ads.
  • The “Silent Movie” rule: Design for the eye, not the ear. 79% of LinkedIn’s audience scrolls with sound off. If your video relies on a talking head to explain the value prop in the first 5 seconds, you’ve lost 80% of the room. Use visual hooks and hard-coded captions to earn attention instantly.

Dig deeper: 5 tips to make your B2B content more human

Play 2: Educate and nudge by selling ‘buyability’

The goal: Mitigate personal and professional risk

This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).

When a B2B buyer is shortlisting vendors, they’re navigating career risk. 

Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

(Chart: LinkedIn and Bain & Company – mitigating personal and professional risk)

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”

The strategic shift: Market the safety net

To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?

Momentum is safety (the “buzz” effect)

Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”

You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement. 

When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.

Authority builds trust (the “expert” effect)

If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.

Video ads featuring executive experts see 53% higher engagement.

When those experts are filmed on a conference stage, engagement lifts by 70%.

Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”

Consistency is credibility

You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.

Dig deeper: The future of B2B authority building in the AI search era

Play 3: Convert and capture by removing friction

The goal: Stop convincing, start helping

By this stage, the buyer knows you (Play 1) and trusts you (Play 2). 

Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.

Buyers at this stage feel three specific types of risk:

  • Execution risk: “Will this actually work for us?”
  • Decision risk: “What if I’m choosing wrong?”
  • Effort risk: “How much work is implementation?”

That’s why recommendations, relationships, and being relatable help close deals.

(Chart: LinkedIn and Bain & Company – number of buyability drivers influenced)

The strategic shift: Answer the anxiety

Your creative should directly answer those anxieties.

Scale social proof – kill execution risk

90% of buyers say social proof is influential information. But don’t just post a logo. 

Use video to show the peer. When a buyer sees someone with their exact job title succeeding, decision risk evaporates.

Activate your employees – kill decision risk

People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.

The stat that surprises most leaders. Just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data. 

Show the humans who’ll answer the phone when things break.

The conversion combo – kill effort risk

Don’t leave them hanging with a generic “Learn More” button.

We see 3x higher lead gen form open rates when video ads are combined directly with lead gen forms.

The video explains the value; the form captures the intent instantly.

  • Short sales cycle (under 30 days): Use video and lead gen forms for speed.
  • Long sales cycle: Retarget video viewers with message ads from a thought leader. Don’t ask for a sale; start a conversation.

Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing

It’s a flywheel, not a funnel

If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.

In most organizations, “brand” teams and “demand” teams operate in silos. 

  • Brand owns the top of the funnel (Play 1). 
  • Demand owns the bottom (Play 3). 

They fight over budget and rarely coordinate creative.

This fragmentation kills the multiplier effect.

When you break down those silos and run these plays as a single system, the data changes.

Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.

It creates a flywheel:

  • Your broad reach (Play 1) builds the retargeting pools.
  • Your educational content (Play 2) warms up those audiences, lifting CTRs.
  • Your conversion offers (Play 3) capture demand from buyers who are already sold, lowering your CPL.

The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.

And the ones on that list are the ones that win the revenue.

Google & Bing don’t recommend separate markdown pages for LLMs

6 February 2026 at 16:24

Representatives from both the Google Search and Bing Search teams are recommending against creating separate markdown (.md) pages for LLMs. The idea behind such pages is to serve one piece of content to LLMs and another to your users, which may technically be considered a form of cloaking and a violation of Google’s policies.

The question. Lily Ray asked on Bluesky:

  • “Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots.”

Google’s response. John Mueller from Google responded saying:

  • “I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

Recently, John Mueller also called the idea stupid, saying:

  • “Converting pages to markdown is such a stupid idea. Did you know LLMs can read images? WHY NOT TURN YOUR WHOLE SITE INTO AN IMAGE?” Converting your whole site to a markdown file is, of course, a bit extreme, to say the least, which was Mueller’s point.

I collected many of John Mueller’s comments on this topic over here.

Bing’s response. Fabrice Canel from Microsoft Bing responded saying:

  • “Lily: really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”

Why we care. Some of us like to look for shortcuts to perform well in search engines, and now in the new AI search engines and LLMs. Generally, shortcuts, if they work at all, only work for a limited time, and they can have unexpected negative effects.

As Lily Ray wrote on LinkedIn:

  • “I’ve had concerns the entire time about managing duplicate content and serving different content to crawlers than to humans, which I understand might be useful for AI search but directly violates search engines’ longstanding policies about this (basically cloaking).”
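One practical way to check where you stand: fetch the same URL as a regular browser and as a crawler, and compare what each receives. Below is a minimal parity-check sketch, assuming the `requests` library; the URL is a placeholder, and serious verification should also validate crawler IP ranges rather than trusting User-Agent strings alone.

```python
import requests

URL = "https://www.example.com/some-page"  # placeholder URL

# User-Agent strings for a browser and for Googlebot (abbreviated).
AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

bodies = {}
for name, ua in AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    bodies[name] = resp.text

# A crude parity signal: large size differences suggest different content
# is being served to bots and to users, which is worth investigating.
browser_len = len(bodies["browser"])
bot_len = len(bodies["googlebot"])
ratio = bot_len / max(browser_len, 1)
print(f"browser: {browser_len} bytes, googlebot: {bot_len} bytes, ratio: {ratio:.2f}")
if not 0.8 <= ratio <= 1.2:
    print("Warning: bot and user responses differ substantially - possible cloaking.")
```

Size parity is only a first pass (personalization and ads cause natural variation), but it is enough to flag a setup that serves bots an entirely different document.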

Your local rankings look fine. So why are calls disappearing?

5 February 2026 at 22:49

For many local businesses, performance looks healthier than it is.

Rank trackers still show top-three positions. Visibility reports appear steady. Yet calls and website visits from Google Business Profiles are falling — sometimes fast.

This gap is becoming a defining feature of local search today.

Rankings are holding. Visibility and performance aren’t.

The “alligator” has arrived in local SEO: rankings and performance are pulling apart like a pair of opening jaws.

The visibility crisis behind stable rankings

Across multiple U.S. industries, traditional local 3-packs are being replaced — or at least supplemented — by AI-powered local packs. These layouts behave differently from the map results we’ve optimized in the past.

Analysis from Sterling Sky, based on 179 Google Business Profiles, reveals a pattern that’s hard to ignore. Clicks-to-call are dropping sharply for Jepto-managed law firms.

When AI-powered packs replace traditional listings, the landscape shifts in four critical ways:

  • Shrinking real estate: AI packs often surface only two businesses instead of three.
  • Missing call buttons: Many AI-generated summaries remove instant click-to-call options, adding friction to the customer journey.
  • Different businesses appear: The businesses shown in AI packs often don’t match those in the traditional 3-pack.
  • Accelerated monetization of local search: When paid ads are present, traditional 3-packs increasingly lose direct call and website buttons, reducing organic conversion opportunities.

A fifth issue compounds the problem:

  • Measurement blind spots: Most rank trackers don’t yet report on AI local packs. A business may rank first in a 3-pack that many users never see.

AI local packs surfaced only 32% as many unique businesses as traditional map packs in 2026, according to Sterling Sky. In 88% of the 322 markets analyzed, the total number of visible businesses declined.

At the same time, paid ads continue to take over space once reserved for organic results, signaling a clear shift toward a pay-to-play local landscape.

What Google Business Profile data shows

The same pattern appears, especially in the U.S., where Google is aggressively testing new local formats, according to GMBapi.com data. Traditional local 3-pack impressions are increasingly displaced by:

  • AI-powered local packs.
  • Paid placements inside traditional map packs: Sponsored listings now appear alongside or within the map pack, pushing organic results lower and stripping listings of call and website buttons. This breaks organic customer journeys.
  • Expanded Google Ads units: Including Local Services Ads that consume space once reserved for organic visibility.

Impression trends still fluctuate due to seasonality, market differences, and occasional API anomalies. But a much clearer signal emerges when you look at GBP actions rather than impressions.

Mentions inside AI-generated results are still counted as impressions — even when they no longer drive calls, clicks, or visits.

Some fluctuations are driven by external factors. For example, the June drop ties back to a known Google API issue. Mobile Maps impressions also appear heavily influenced by large advertisers ramping up Google Ads later in the year.

There’s no way to segment these impressions by Google Ads, organic results, or AI Mode.

Even so, user behavior is changing. Interaction rates are declining, with fewer direct actions taken from local listings.

Year-on-year comparisons in the US suggest that while impression losses remain moderate and partially seasonal, GBP actions are disproportionately impacted.

As a point of comparison, data from the Dutch market — where SERP experimentation remains limited — shows far more stable action trends.

The pattern is clear. AI-driven SERP changes, expanding Google Ads, and the removal of call and website buttons from the Map Pack are shrinking organic real estate. Even when visibility looks intact, businesses have fewer chances to earn real user actions.

Local SEO is becoming an eligibility problem

Historically, local optimization centered on familiar ranking factors: proximity, relevance, prominence, reviews, citations, and engagement.

Today, another layer sits above all of them: eligibility.

Many businesses fail to appear in AI-powered local results not because they lack authority, but because Google’s systems decide they aren’t an appropriate match for the specific query context. Research from Yext and insights from practitioners like Claudia Tomina highlight the importance of alignment across three core signals:

  • Business name
  • Primary category
  • Real-world services and positioning

When these fundamentals are misaligned, businesses can be excluded from entire result types — no matter how well optimized the Google Business Profile itself may be.

How to future-proof local visibility

Surviving today’s zero-click reality means moving beyond reliance on a single, perfectly optimized Google Business Profile. Here’s your new local SEO playbook.

The eligibility gatekeeper

Failure to appear in local packs is now driven more by perceived relevance and classification than by links or review volume.

Hyper-local entity authority

AI systems cross-reference Reddit, social platforms, forums, and local directories to judge whether a business is legitimate and active. Inconsistent signals across these ecosystems quietly erode visibility.

Visual trust signals

High-quality, frequently updated photos, and increasingly video, are no longer optional. Google’s AI analyzes visual content to infer services, intent, and categorization.

Embrace the pay-to-play reality

It’s a hard truth, but Google Ads — especially Local Services Ads — are now critical to retaining prominent call buttons that organic listings are losing. A hybrid strategy that blends local SEO with paid search isn’t optional. It’s the baseline.

What this means for local search now

Local SEO is no longer a static directory exercise. Google Business Profiles still anchor local discoverability, but they now operate inside a much broader ecosystem shaped by AI validation, constant SERP experimentation, and Google’s accelerating push to monetize local search.

Discovery no longer hinges on where your GBP ranks against nearby competitors. Search systems — including Google’s AI-driven SERP features and large language models like ChatGPT and Gemini — are increasingly trying to understand what a business actually does, not just where it’s listed.

Success is no longer about being the most “optimized” profile. It’s about being widely verified, consistently active, and contextually relevant across the AI-visible ecosystem.

Our observations show little correlation between businesses that rank well in the traditional Map Pack and those favored by Google’s AI-generated local answers that are beginning to replace it. That gap creates a real opportunity for businesses willing to adapt.

In practice, this means pairing local input with central oversight.

Authentic engagement across multiple platforms, locally differentiated content, and real community signals must coexist with brand governance, data consistency, and operational scale. For single-location businesses with deep community roots, this is an advantage. Being genuinely discussed, recommended, and referenced in your local area — online and offline — gets you halfway there.

For agencies and multi-location brands, the challenge is to balance control with local nuance and ensure trusted signals extend beyond Google (e.g., Apple Maps, Tripadvisor, Yelp, Reddit, and other relevant review ecosystems). The real test is producing locally relevant content and citations at scale without losing authenticity.

Rankings may look stable. But performance increasingly lives somewhere else.

The full data. Local SEO in 2026: Why Your Rankings are Steady but Your Calls are Vanishing

Google releases February 2026 Discover core update

5 February 2026 at 21:00

Google has released the February 2026 Discover core update, which focuses specifically on how content is surfaced in Google Discover.

  • “This is a broad update to our systems that surface articles in Discover,” Google wrote.

Google said the update is rolling out first to English-language users in the U.S. and will expand to all countries and languages in the coming months. The rollout may take up to two weeks to complete, Google added.

What is expected. Google said the Discover core update will improve the “experience in a few key ways,” including:

  • Showing users more locally relevant content from websites based in their country.
  • Reducing sensational content and clickbait.
  • Highlighting more in-depth, original, and timely content from sites with demonstrated expertise in a given area, based on Google’s understanding of a site’s content.

Because the update prioritizes locally relevant content, it may reduce traffic for non-U.S. websites that publish news for a U.S. audience. That impact may lessen or disappear as the update expands globally.

More details. Google added that many sites demonstrate deep knowledge across a wide range of subjects, and its systems are built to identify expertise on a topic-by-topic basis. As a result, any site can appear in Discover, whether it covers multiple areas or focuses deeply on a single topic. Google shared an example:

  • “A local news site with a dedicated gardening section could have established expertise in gardening, even though it covers other topics. In contrast, a movie review site that wrote a single article about gardening would likely not.”

Google said it will continue to “show content that’s personalized based on people’s creator and source preferences.”

During testing, Google found that “people find the Discover experience more useful and worthwhile with this update.”

Expect fluctuations. With this Discover core update, expect fluctuations in traffic from Google Discover.

  • “Some sites might see increases or decreases; many sites may see no change at all,” Google said.

Rollout. Google said it is “releasing this update to English language users in the US, and will expand it to all countries and languages in the months ahead.”

Why we care. If you get traffic from Google Discover, you may notice changes in that traffic in the coming days. For guidance, Google said its “general guidance about core updates applies, as does our Get on Discover help page.”

Google Ads no longer runs on keywords. It runs on intent.

5 February 2026 at 20:00

Most PPC teams still build campaigns the same way: pull a keyword list, set match types, and organize ad groups around search terms. It’s muscle memory.

But Google’s auction no longer works that way.

Search now behaves more like a conversation than a lookup. In AI Mode, users ask follow-up questions and refine what they’re trying to solve. AI Overviews reason through an answer first, then determine which ads support that answer.

In Google Ads, the auction isn’t triggered by a keyword anymore – it’s triggered by inferred intent.

If you’re still structuring campaigns around exact and phrase match, you’re planning for a system that no longer exists. The new foundation is intent: not the words people type, but the goals behind them.

An intent-first approach gives you a more durable way to design campaigns, creative, and measurement as Google introduces new AI-driven formats.

Keywords aren’t dead, but they’re no longer the blueprint.

The mechanics under the hood have changed

Here’s what’s actually happening when someone searches now.

Google’s AI uses a technique called “query fan out,” splitting a complex question into subtopics and running multiple concurrent searches to build a comprehensive response.

The auction happens before the user even finishes typing.

And crucially, the AI infers commercial intent from purely informational queries.

For instance, someone asks, “Why is my pool green?” They’re not shopping. They’re troubleshooting.

But Google’s reasoning layer detects a problem that products can solve and serves ads for pool-cleaning supplies alongside the explanation. The user didn’t search for a product, but the AI inferred they would need one.

This auction logic is fundamentally different from what we’re accustomed to. It’s not matching your keyword to the query. It’s matching your offering to the user’s inferred need state, based on conversational context. 

If your campaign structure still assumes people search in isolated, transactional moments, you’re missing the journey entirely.
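To make the fan-out concept above concrete, here is a toy sketch: decompose one complex question into subqueries, run them concurrently, and collect the results. The hard-coded subtopics and the `search()` stub are hypothetical stand-ins for illustration, not Google’s implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(question: str) -> list[str]:
    """Hypothetical decomposition of a complex question into subtopics."""
    # A real system would use a model to do this; here we hard-code an example.
    return [
        "why does pool water turn green",
        "how to treat algae in a pool",
        "pool cleaning supplies for green water",  # inferred commercial intent
    ]

def search(subquery: str) -> str:
    """Stand-in for an index lookup; returns a fake snippet."""
    return f"top result for: {subquery}"

def fan_out(question: str) -> list[str]:
    subqueries = decompose(question)
    # Run all subqueries concurrently, mirroring the "multiple
    # concurrent searches" behavior described above.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(search, subqueries))

for snippet in fan_out("Why is my pool green?"):
    print(snippet)
```

Note how the third subquery carries commercial intent even though the original question is purely informational; that is the gap ads now slot into.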


Dig deeper: How to build a modern Google Ads targeting strategy like a pro

What ‘intent-first’ actually means

An intent-first strategy doesn’t mean you stop doing keyword research. It means you stop treating keywords as the organizing principle.

Instead, you map campaigns to the why behind the search.

  • What problem is the user trying to solve?
  • What stage of decision-making are they in?
  • What job are they hiring your product to do?

The same intent can surface through dozens of different queries, and the same query can reflect multiple intents depending on context.

“Best CRM” could mean either “I need feature comparisons” or “I’m ready to buy and want validation.” Google’s AI now reads that difference, and your campaign structure should, too.

This is more of a mental model shift than a tactical one.

You’re still building keyword lists, but you’re grouping them by intent state rather than match type.

You’re still writing ad copy, but you’re speaking to user goals instead of echoing search terms back at them.

What changes in practice

Once campaigns are organized around intent instead of keywords, the downstream implications show up quickly – in eligibility, landing pages, and how the system learns.

Campaign eligibility

If you want to show up inside AI Overviews or AI Mode, you need broad match keywords, Performance Max, or the newer AI Max for Search campaigns.

Exact and phrase match still work for brand defense and high-visibility placements above the AI summaries, but they won’t get you into the conversational layer where exploration happens.

Landing page evolution

It’s not enough to list product features anymore. If your page explains why and how someone should use your product (not just what it is), you’re more likely to win the auction.

Google’s reasoning layer rewards contextual alignment. If the AI built an answer about solving a problem, and your page directly addresses that problem, you’re in.

Asset volume and training data

The algorithm prioritizes rich metadata, multiple high-quality images, and optimized shopping feeds with every relevant attribute filled in.

Using Customer Match lists to feed the system first-party data teaches the AI which user segments represent the highest value.

That training affects how aggressively it bids for similar users.
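For context on what feeding Customer Match actually involves: lists are uploaded as hashed identifiers, and Google’s documentation calls for normalizing email addresses (trim whitespace, lowercase) before hashing them with SHA-256. A minimal sketch of that preprocessing follows; the sample addresses are invented, and the upload itself (via the Google Ads UI or API) is omitted.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Normalize an email per Customer Match requirements, then SHA-256 it."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical CRM export.
crm_emails = ["  Jane.Doe@example.com ", "buyer@example.org"]
for h in (normalize_and_hash(e) for e in crm_emails):
    print(h)
```

The hashing protects the raw addresses in transit; Google matches the hashes against its own hashed identifiers on its side.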

Dig deeper: In Google Ads automation, everything is a signal in 2026

The gaps worth knowing about

Even as intent-first campaigns unlock new reach, there are still blind spots in reporting, budget constraints, and performance expectations you need to plan around.

No reporting segmentation

Google doesn’t provide visibility into how ads perform specifically in AI Mode versus traditional search.

You’re monitoring overall cost-per-conversion and hoping high-funnel clicks convert downstream, but you can’t isolate which placements are actually driving results.

The budget barrier

AI-powered campaigns like Performance Max and AI Max need meaningful conversion volume to scale effectively, often 30 conversions in 30 days at a minimum.

Smaller advertisers with limited budgets or longer sales cycles face what some call a “scissors gap,” in which they lack the data needed to train algorithms and compete in automated auctions.

Funnel position matters

AI Mode attracts exploratory, high-funnel behavior. Conversion rates won’t match bottom-of-the-funnel branded searches. That’s fine, as long as you plan for it.

It becomes a problem when you’re chasing immediate ROAS without adjusting how you define success for these placements.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

Where to start

You don’t need to rebuild everything overnight.

Pick one campaign where you suspect intent is more complex than the keywords suggest. Map it to user goal states instead of search term buckets.

Test broad match in a limited way. Rewrite one landing page to answer the “why” instead of just listing specs.

The shift to intent-first is not a tactic – it’s a lens. And it’s the most durable way to plan as Google keeps introducing new AI-driven formats.

Google says AI search is driving an ‘expansionary moment’

5 February 2026 at 19:02

Google Search is entering an “expansionary moment,” fueled by longer queries, more follow-up questions, and rising use of voice and images. That’s according to Alphabet’s executives who spoke on last night’s Q4 earnings call.

  • In other words: Google Search is shifting toward AI-driven experiences, with more conversations happening inside Google’s own interfaces.

Why we care. AI in Google Search is no longer an experiment. It’s a structural shift that’s changing how people search and reshaping discovery, visibility, and traffic across the web.

By the numbers. Alphabet’s Q4 advertising revenue totaled $82.284 billion, up 13.5% from $72.461 billion in Q4 2024:

  • Google Search & other: $63.073 billion (up 16.7%)
  • YouTube: $11.383 billion (up 8.7%)
  • Google Network: $7.828 billion (down 1.5%)

Alphabet’s 2025 fiscal year advertising revenue totaled $294.691 billion, up 11.4% from $264.590 billion in 2024:

  • Google Search & other: $224.532 billion (up 13.4%)
  • YouTube: $40.367 billion (up 11.7%)
  • Google Network: $29.792 billion (down 1.9%)

AI Overviews and AI Mode are now core to Search. Alphabet CEO Sundar Pichai said Google pushed aggressively on AI-powered search features in Q4, highlighting how central they’ve become to the product.

  • “We shipped over 250 product launches, within AI mode and AI overviews just last quarter,” Pichai said.

This includes Google upgrading AI Overviews to its Gemini 3 model. He said the company has tightly linked AI Overviews with conversational search.

  • “We have also made the search experience more cohesive, ensuring the transition from an AI Overview to a conversation in AI Mode is completely seamless,” Pichai said.

AI is driving more Google Search usage. Executives repeatedly described AI-driven search as additive, saying it boosts overall usage rather than replacing traditional queries.

  • “Search saw more usage in Q4 than ever before, as AI continues to drive an expansionary moment,” Pichai said.

Engagement rises once users interact with AI-powered features, Google said.

  • “Once people start using these new experiences, they use them more,” Pichai said.

Changing search behavior. Google shared new data points showing how AI Mode is changing search behavior — making queries longer, more conversational, and increasingly multimodal.

  • “Queries in AI Mode are three times longer than traditional searches,” Pichai said.

Sessions are also becoming more conversational.

  • “We are also seeing sessions become more conversational, with a significant portion of queries in AI Mode, now leading to a follow-up question,” he said.

AI Mode is also expanding beyond text.

  • “Nearly one in six AI mode queries are now non-text using voice or images,” Pichai said.

Google also highlighted the continued distribution of visual search capabilities:

  • “Circle to Search is now available on over 580 million Android devices,” Pichai said.

Gemini isn’t cannibalizing Search. As the Gemini app continues to grow, Google says it hasn’t seen signs that users are abandoning Search.

  • “We haven’t seen any evidence of cannibalization,” Pichai said.

Instead, Google said users move fluidly between Search, AI Overviews, AI Mode, and the Gemini app.

  • “The combination of all of that, I think, creates an expansionary moment,” Pichai said.

How AI is reshaping local search and what enterprises must do now

5 February 2026 at 19:00

AI is no longer an experimental layer in search. It’s actively mediating how customers discover, evaluate, and choose local businesses, increasingly without a traditional search interaction. 

The real risk is data stagnation. As AI systems act on local data for users, brands that fail to adapt risk declining visibility, data inconsistencies, and loss of control over how locations are represented across AI surfaces.

Learn how AI is changing local search and what you can do to stay visible in this new landscape. 

How AI search is different from traditional search


We are experiencing a platform shift where machine inference, not database retrieval, drives decisions. At the same time, AI is moving beyond screens into real-world execution.

AI now powers navigation systems, in-car assistants, logistics platforms, and autonomous decision-making.

In this environment, incorrect or fragmented location data does not just degrade search.

It leads to missed turns, failed deliveries, inaccurate recommendations, and lost revenue. Brands don’t simply lose visibility. They get bypassed.

Business implications in an AI-first, zero-click decision layer 

Local search has become an AI-first, zero-click decision layer.

Multi-location brands now win or lose based on whether AI systems can confidently recommend a location as the safest, most relevant answer.

That confidence is driven by structured data quality, Google Business Profile excellence, reviews, engagement, and real-world signals such as availability and proximity.

For 2026, the enterprise risk is not experimentation. It’s inertia.

Brands that fail to industrialize and centralize local data, content, and reputation operations will see declining AI visibility, fragmented brand representation, and lost conversion opportunities without knowing why.

Paradigm shifts to understand 

Here are four key ways the growth in AI search is changing the local journey:

  • AI answers are the new front door: Local discovery increasingly starts and ends inside AI answers and Google surfaces, where users select a business directly.
  • Context beats rankings: AI weighs conversation history, user intent, location context, citations, and engagement signals, not just position.
  • Zero-click journeys dominate: Most local actions now happen on-SERP (GBP, AI Overviews, service features), making on-platform optimization mission-critical.
  • Local search in 2026 is about being chosen, not clicked: Enterprises that combine entity intelligence, operational rigor (centralized data and consistent information), and on-SERP conversion discipline will remain visible and preferred as AI becomes the primary decision-maker.

Businesses that don’t grasp these changes quickly won’t fall behind quietly. They’ll be algorithmically bypassed.

Dig deeper: The enterprise blueprint for winning visibility in AI search

How AI composes local results (and why it matters)

AI systems build memory through entity and context graphs. Brands with clean, connected location, service, and review data become default answers.

Local queries increasingly fall into two intent categories: objective and subjective. 

  • Objective queries focus on verifiable facts:
    • “Is the downtown branch open right now?”
    • “Do you offer same-day service?”
    • “Is this product in stock nearby?”
  • Subjective queries rely on interpretation and sentiment:
    • “Best Italian restaurant near me”
    • “Top-rated bank in Denver”
    • “Most family-friendly hotel”

This distinction matters because AI systems treat risk differently depending on intent.

For objective queries, AI models prioritize first-party sources and structured data to reduce hallucination risk. These answers often drive direct actions like calls, visits, and bookings without a traditional website visit ever occurring.

For subjective queries, AI relies more heavily on reviews, third-party commentary, and editorial consensus. This data normally comes from various other channels, such as UGC sites.  

Dig deeper: How to deploy advanced schema at scale

Source authority matters

Industry research has shown that for objective local queries, brand websites and location-level pages act as primary “truth anchors.”

When an AI system needs to confirm hours, services, amenities, or availability, it prioritizes explicit, structured core data over inferred mentions.

Consider a simple example. If a user asks, “Find a coffee shop near me that serves oat milk and is open until 9,” the AI must reason across location, inventory, and hours simultaneously.

If those facts are not clearly linked and machine-readable, the brand cannot be confidently recommended.

This is why freshness, relevance, and machine clarity, powered by entity-rich structured data, help AI systems arrive at the right response.
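To illustrate what “clearly linked and machine-readable” can look like, here is one way the coffee shop above might expose its hours and oat milk offering as schema.org structured data. The business details are invented and exact property choices vary by implementation; treat this as a sketch rather than a canonical markup recipe (Python is used here only to emit the JSON-LD).

```python
import json

# Hypothetical location facts, encoded as schema.org JSON-LD.
coffee_shop = {
    "@context": "https://schema.org",
    "@type": "CafeOrCoffeeShop",
    "name": "Example Coffee Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Denver",
        "addressRegion": "CO",
    },
    "openingHoursSpecification": [{
        "@type": "OpeningHoursSpecification",
        "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        "opens": "07:00",
        "closes": "21:00",  # open until 9 pm: the fact the AI must confirm
    }],
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {"@type": "Product", "name": "Oat milk latte"},
    },
}

# Emit the payload for a <script type="application/ld+json"> tag on the location page.
print(json.dumps(coffee_shop, indent=2))
```

With location, inventory, and hours expressed as linked facts on one page, a system reasoning over the query has something explicit to verify against.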

Set yourself up for success

Ensure your data is fresh, relevant, and clear with these tips:

  • Build a centralized entity and context graph and syndicate it consistently across GBP, listings, schema, and content.
  • Industrialize local data and entities by developing one source of truth for locations, services, attributes, inventory – continuously audited and AI-normalized.
  • Make content AI-readable and hyper-local with structured FAQs, services, and how-to content by location, optimized for conversational and multimodal queries.
  • Treat GBP as a product surface with standardized photos, services, offers, and attributes — localized and continuously optimized.
  • Operationalize reviews and reputation by implementing always-on review generation, AI-assisted responses, and sentiment intelligence feeding CX and operations.
  • Adopt AI-first measurement and governance to track AI visibility, local answer share, and on-SERP conversions — not just rankings and traffic.

Dig deeper: From search to answer engines: How to optimize for the next era of discovery

The evolution of local search from listings management to an enterprise local journey

Historically, local search was managed as a collection of disconnected tactics: listings accuracy, review monitoring, and periodic updates to location pages.

That operating model is increasingly misaligned with how local discovery now works.

Local discovery has evolved into an end-to-end enterprise journey – one that spans data integrity, experience delivery, governance, and measurement across AI-driven surfaces.

Listings, location pages, structured data, reviews, and operational workflows now work together to determine whether a brand is trusted, cited, and repeatedly surfaced by AI systems.

Introducing local 4.0

Local 4.0 is a practical operating model for AI-first local discovery at enterprise scale. The focus of this framework is to ensure your brand is understandable, verifiable, and safe for AI systems to recommend.

To understand why this matters, it helps to look at how local has evolved:

The evolution of local
  • Local 1.0 – Listings and basic NAP consistency: The goal was presence – being indexed and included.
  • Local 2.0 – Map pack optimization and reviews: Visibility was driven by proximity, profile completeness, and reputation.
  • Local 3.0 – Location pages, content, and ROI: Local became a traffic and conversion driver tied to websites.
  • Local 4.0 – AI-mediated discovery and recommendation: Local becomes decision infrastructure, not a channel.

In this model, a brand and its locations must be:

  • Understandable by AI systems (clean, structured, connected data).
  • Verifiable across platforms (consistent facts, citations, reviews).
  • Safe to recommend in real-world decision contexts.

In an AI-mediated environment, brands are no longer merely present. They are selected, reused, or ignored – often without a click. This is the core transformation enterprise leaders must internalize as they plan for 2026.

Dig deeper: AI and local search: The new rules of visibility and ROI

The local 4.0 journey for enterprise brands


Step 1: Discovery, consistency, and control

Discovery in an AI-driven environment is fundamentally about trust. When data is inconsistent or noisy, AI systems treat it as a risk signal and deprioritize it.

Core elements include:

  • Consistency across websites, profiles, directories, and attributes.
  • Listings as verification infrastructure.
  • Location pages as primary AI data sources.
  • Structured data and indexing as the machine clarity layer.

Why ‘legacy’ sources still matter

Listings act as verification infrastructure. Interestingly, research suggests that LLMs often cross-reference data against highly structured legacy directories (such as MapQuest or the Yellow Pages).

While human traffic to these sites has waned, AI systems utilize them as “truth anchors” because their data is rigidly structured and verified.

If your hours are wrong on MapQuest, an AI agent may downgrade its confidence in your Google Business Profile, viewing the discrepancy as a risk.

Discovery is no longer about being crawled. It’s about being trusted and reused. Governance matters because ownership, workflows, and data quality now directly affect brand risk.

Dig deeper: 4 pillars of an effective enterprise AI strategy 

Step 2: Engagement and freshness 

AI systems increasingly reward data that is current, efficiently crawled, and easy to validate.

Stale content is no longer neutral. When an AI system encounters outdated information – such as incorrect hours, closed locations, or unavailable services – it may deprioritize or avoid that entity in future recommendations.

For enterprises, freshness must be operationalized, not managed manually. This requires tightly connecting the CMS with protocols like IndexNow, so updates are discovered and reflected by AI systems in near real time.
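For reference, an IndexNow submission is a simple JSON POST. A minimal sketch, assuming the `requests` library and a key file already hosted at the site root; the host, key, and URLs are placeholders.

```python
import requests

payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",  # placeholder; must match the hosted key file
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
        "https://www.example.com/locations/denver",  # e.g., a page with updated hours
    ],
}

# One submission notifies all search engines participating in IndexNow.
resp = requests.post(
    "https://api.indexnow.org/indexnow",
    json=payload,
    headers={"Content-Type": "application/json; charset=utf-8"},
    timeout=10,
)
print(resp.status_code)  # 200/202 indicate the submission was accepted
```

Wired into the CMS publish hook, a call like this is what turns “freshness” from a manual chore into an operational default.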

Beyond updates, enterprises must deliberately design for local-level engagement and signal velocity. Fresh, locally relevant content – such as events, offers, service updates, and community activity – should be surfaced on location pages, structured with schema, and distributed across platforms.

In an AI-first environment, freshness is trust, and trust determines whether a location is surfaced, reused, or skipped entirely.

Unlocking ‘trapped’ data

A major challenge for enterprise brands is “trapped” data: vital information locked behind PDFs, menu images, or static event calendars.

For example, a restaurant group may upload a PDF of their monthly live music schedule. To a human, this is visible. To a search crawler, it’s often opaque. In an AI-first era, this data must be extracted and structured.

If an agent cannot read the text inside the PDF, it cannot answer the query: “Find a bar with live jazz tonight.”
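Freeing that trapped data can start with pulling the text layer out of the PDF so it can be restructured as crawlable copy or schema. A minimal sketch using the `pypdf` library, assuming the PDF contains actual text (scanned images would need OCR instead); the filename is hypothetical.

```python
from pypdf import PdfReader  # pip install pypdf

# Hypothetical upload: a restaurant group's monthly live-music schedule.
reader = PdfReader("live-music-schedule.pdf")

# Pull the text layer from every page so it can be restructured
# into crawlable HTML or an Event schema feed.
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text[:500])  # inspect the first chunk before structuring it
```

The extraction is the easy half; the valuable step is mapping the result into structured, dated entries an agent can query.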

Key focus areas include:

  • Continuous content freshness.
  • Efficient indexing and crawl pathways.
  • Dynamic local updates such as events, availability, and offerings.

At enterprise scale, manual workflows break. Freshness is no longer tactical. It’s a competitive requirement.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Step 3: Experience and local relevance

AI does not select the best brand. It selects the location that best resolves intent.

Generic brand messaging consistently loses out to locally curated content. AI retrieval is context-driven and prioritizes specific attributes such as parking availability, accessibility, accepted insurance, or local services.

This exposes a structural problem for many enterprises: information is fragmented across systems and teams.

Solving AI-driven relevance requires organizing data as a context graph. This means connecting services, attributes, FAQs, policies, and location details into a coherent, machine-readable system that maps to customer intent rather than departmental ownership.

Enterprises should also consider omnichannel marketing approaches to achieve consistency.   

Dig deeper: Integrating SEO into omnichannel marketing for seamless engagement

Step 4: Measurement that executives can trust

As AI-driven and zero-click journeys increase, traditional SEO metrics lose relevance. Attribution becomes fragmented across search, maps, AI interfaces, and third-party platforms.

Precision tracking gives way to directional confidence.

Executive-level KPIs should focus on:

  • AI visibility and recommendation presence.
  • Citation accuracy and consistency.
  • Location-level actions (calls, directions, bookings).
  • Incremental revenue or lead quality lift.

The goal is not perfect attribution. It’s confidence that local discovery is working and revenue risk is being mitigated.

Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026

Why local 4.0 needs to be the enterprise response

Fragmentation is a material revenue risk. When local data is inconsistent or disconnected, AI systems have lower confidence in it and are less likely to reuse or recommend those locations.

Treating local data as a living, governed asset and establishing a single, authoritative source of truth early prevents incorrect information from propagating across AI-driven ecosystems and avoids the costly remediation required to fix issues after they scale.


Dig deeper: How to select a CMS that powers SEO, personalization and growth

Local 4.0 is integral to the localized AI discovery flywheel


AI-mediated discovery is becoming the default interface between customers and local brands.

Local 4.0 provides a framework for control, confidence, and competitiveness in that environment. It aligns data, experience, and governance around how AI systems actually operate through reasoning, verification, and reuse.

This is not about chasing AI trends. It’s about ensuring your brand is correctly represented and confidently recommended wherever customers discover you next.

Why SEO teams need to ask ‘should we use AI?’ not just ‘can we?’

5 February 2026 at 18:00

Right now, it’s hard to find a marketing conversation that doesn’t include two letters: AI.

SEOs, strategists, and marketing leaders everywhere are asking the same question in different ways:

  • How do we use AI to cut manpower, streamline work, move faster, and boost efficiency?

Much of that thinking makes sense. If you run a business, you can’t ignore a tool that turns hours of grunt work into minutes. You’d be foolish to try.

But we’re spending too much time asking, “Can AI do this?” and not enough time asking, “Should AI do this?”

Once the initial excitement fades, some uncomfortable questions show up.

  • If every title tag, meta description, landing page, and blog post comes from AI, where does differentiation come from?
  • If every outreach email, proposal, and report is machine-generated, what happens to trust?
  • If AI agents start talking to other AI agents on our behalf, what happens to judgment, creativity, and the human side of business?

This isn’t anti-AI. I use AI. My team uses AI. You probably do, too.

This is about using AI well, using it intentionally, and not automating so much that you accidentally automate away the things that make you valuable.

What ‘automating too much’ looks like in SEO

The slippery part of automation? It rarely starts with big decisions. It starts with small ones that feel harmless.

First, you automate the boring admin. Then the repetitive writing. Then the analysis. Then client communication. Then, quietly, decision-making.

In SEO, “too much” often looks like this:

  • Meta titles and descriptions generated at scale, with barely any review.
  • Content briefs created by AI from SERP summaries, then passed straight to an AI writer for drafting.
  • On-page changes rolled out across templates because “the model recommended it.”
  • Link building outreach written by AI, sent at volume, and ignored at volume.
  • Reporting that is technically accurate but disconnected from what the business actually cares about.

If this sounds harsh, that’s because it happens fast.

The promise is always “we’ll save time.” What usually happens is you save time and lose something else. Most often, you lose the sense that your marketing has a brain behind it.

The sameness problem: if everyone uses the same tools, who wins?

This is the question I keep coming back to.

If everyone uses AI to create everything, the web fills up with content that looks and sounds the same. It might be polished. It might even be technically “good.” But it becomes interchangeable.

That creates two problems:

  • Users get bored. They read one page, then another, and it’s the same advice dressed up with slightly different words. You might win a click. You’ll struggle to build a relationship.
  • Search engines and language models still need ways to tell you apart. When content converges, the real differentiators become things like:
    • Brand recognition.
    • Original data or firsthand experience.
    • Clear expertise and accountability.
    • Signals that other people trust you.
    • Distinct angles and opinions.

The irony?

Heavy automation often strips those things out. It produces “fine” content quickly, but it also produces content that could have come from anyone.

If your goal is authority, being indistinguishable isn’t neutral. It’s a liability.

When AI starts quoting AI, reality gets blurry

This is where things start to get strange.

We’re already heading into a world where AI tools summarize content, other tools re-summarize those summaries, and someone publishes the result as if it’s new insight. It becomes a loop.

If you’ve ever asked a tool to write a blog post and it felt familiar but hard to place, that’s usually why. It isn’t creating knowledge from scratch. It’s remixing patterns.

Now imagine that happening at scale. Search engines crawl pages. Models summarize them. Businesses publish new pages based on those summaries. Agents use those pages to answer questions. Repeat.

Remove humans from the loop for too long, and you risk an internet that feels like it’s talking to itself. Plenty of words. Very little substance.

From an SEO perspective, that’s a serious problem. When the web floods with similar information, value shifts away from “who wrote the neatest explanation” and toward “who has something real to add.”

That’s why I keep coming back to the same point. The question isn’t “can AI do this?” It’s “should we use AI here, or should a human own this?”

The creativity and judgment problem

There’s a quieter risk we don’t talk about enough.

If you let AI write every proposal, every contract, every strategy deck, and every content plan, you start outsourcing judgment.

You may still be the one who clicks “generate” and “send,” but the thinking has moved somewhere else.

Over time, you lose the habit of critical thinking. Not because you can’t think, but because you stop practicing. It’s the same way GPS makes you worse at directions. You can still drive, but you stop building the skill.

In SEO, judgment is one of our most valuable assets. Knowing:

  • What to prioritize.
  • What to ignore.
  • When a dip is normal and when it is a warning sign.
  • When the data is lying because the tracking is broken.

AI can support decisions, but it can’t own them. If you automate that away, you risk becoming a delivery machine instead of a strategist. And authority doesn’t come from delivery.

The trust problem: clients do not just buy outputs

Here’s a reality check agency owners feel in their bones.

Clients don’t stay because you can do the work. They stay because they:

  • Trust you.
  • Feel looked after.
  • Believe you have their best interests at heart.
  • Like working with you.

It’s business, but it’s still human.

When you automate too much of the client experience, your service can start to feel cheap. Not in price, but in care.

  • If every email sounds generated, clients notice.
  • If every report is a generic summary with no opinion, clients notice.
  • If every deliverable looks like it came straight from a tool, clients start asking why they are paying you instead of the tool.

The same thing happens in-house. Stakeholders want confidence. They want interpretation. They want someone to say, “This is what matters, and this is what we should do next.”

AI is excellent at producing outputs. It isn’t good at reassurance, context, or accountability. Those are human services, even when the work is digital.

The accuracy and responsibility problem

If you automate content production without proper oversight, eventually you’ll publish something wrong.

Sometimes it’s small. A definition that is slightly off. A stat that is outdated. A recommendation that doesn’t fit the situation.

Sometimes it’s serious. Incorrect medical advice. Legal misinformation. Financial guidance that should never have gone live.

Even in low-risk niches, accuracy matters. When your content is wrong, trust erodes. When it’s wrong with confidence, trust disappears faster.

The more you scale AI output, the harder quality control becomes. That is where automation turns dangerous. You can produce content at speed, but you may not spot the decay until performance drops or, worse, a customer calls it out publicly.

Authority is fragile. It takes time to build and seconds to lose. Automation increases that risk because mistakes don’t stay small. They scale.

The confidentiality problem that nobody wants to admit

This is the part that often gets brushed aside in the rush to “implement AI.”

SEO and marketing work regularly involves sensitive information—sales data, customer feedback, conversion rates, pricing strategies, internal documents, and product roadmaps. Paste that into an AI tool without thinking, and you create risk.

Sometimes that risk is contractual. Sometimes it’s regulatory. Sometimes it’s reputational.

Even if your AI tools are configured securely, you still need an internal policy. Nothing fancy. Just clear rules on what can and can’t be shared, who can approve it, and how outputs are reviewed.

If you’re building authority as a brand, the last thing you want is to lose trust because you treated sensitive information casually in the name of efficiency.

The window of opportunity, and why it will not last forever

Right now, there’s a window. Most businesses are still learning how to use AI well. That gives brands that move carefully a real edge.

That window won’t stay open.

In a few years, the market will be flooded with AI-generated content and AI-assisted services. The tools will be cheaper and more accessible. The baseline will rise.

When that happens, “we use AI” won’t be a differentiator anymore. It’ll sound like saying, “we use email.”

The real differentiator will be how you use it.

Do you use AI to churn out more of the same?

Or do you use it to buy back time so you can create things others can’t?

That’s the opportunity. AI can strip out the grunt work and give you time back. What you do with that time is where authority is built.

Where SEO fits in: less doing, more directing

I suspect the SEO role is shifting.

Not away from execution entirely, but away from being valued purely for output. When a tool can generate a content draft, the value shifts to the person who can judge whether it’s the right draft — for the right audience, with the right angle, on the right page, at the right time.

In other words, the SEO becomes a director, not just a doer.

That looks like this:

  • Knowing which content is worth creating—and which isn’t.
  • Understanding the user journey and where search fits into it.
  • Building content strategies anchored in real business value.
  • Designing workflows that protect quality while increasing speed.
  • Helping teams use AI responsibly without removing human judgment.

If you’re trying to build authority, this shift is good news. It rewards expertise and judgment. It rewards people who can see the bigger picture and make decisions that go beyond “more content.”

The upside: take away the grunt work, keep the thinking

AI is excellent at certain jobs. And if we’re honest, a lot of SEO work is repetitive and draining. That’s where AI shines.

AI can help you:

  • Summarize and cluster keyword research faster.
  • Create first drafts of meta descriptions that a human then edits properly.
  • Turn messy notes into a structure you can actually work with.
  • Generate alternative title options quickly so you can choose the strongest one.
  • Create scripts for short videos or webinars from existing material.
  • Analyze patterns in performance data and flag areas worth investigating.
  • Speed up technical tasks like regex, formulas, documentation, and QA checklists.

This is the sweet spot. Use AI to reduce friction and strip out the boring work. Then spend your time on the things that actually create differentiation.

In my experience, the best use of AI in SEO isn’t replacing humans. It’s giving humans more time to do the human parts properly.

Personalization: The dream and the risk

There’s a lot of talk about personalized results. A future where each person gets answers tailored to their preferences, context, history, and intent.

That future may arrive. In some ways, it’s already here. Search results and recommendations aren’t neutral. They’re shaped by behavior and patterns.

Personalization could be great for users. It also raises the bar for brands.

If every user sees a slightly different answer, it gets harder to compete with generic content. Generic content fades into the background because it isn’t specific enough to be chosen.

That brings us back to the same truth: unique value wins. Real expertise wins. Original experience wins. Trust wins.

Automation can help you scale personalization — but only if the thinking behind it is solid. Automate personalization badly, and all you get is faster irrelevance.

A practical way to decide what should be automated

So how do we move from “can AI do this?” to “should AI do this?”

The better approach is to decide what must stay human, what can be assisted, and what can be automated safely.

These are the questions I use when making that call:

  • What happens if this is wrong? If the cost of being wrong is high, a human needs to own it.
  • Is this customer-facing? The more visible it is, the more it should sound like you and reflect your judgment.
  • Does this require empathy or nuance? If yes, automate less.
  • Does this require your unique perspective? If yes, automate less.
  • Is this reversible? If it’s easy to undo, you can afford to experiment.
  • Does it involve sensitive information? If yes, tighten control.
  • Will automation make us look like everyone else? If yes, be cautious. You may be trading speed for differentiation.

These questions are simple, but they lead to far better decisions than, “the tool can do it, so let’s do it.”

What I would and would not automate in SEO

To make this practical, here’s where I’d draw the line for most teams.

I’d happily automate or heavily assist:

  • Early-stage research, like summarizing competitors, clustering topics, and extracting themes from customer feedback.
  • Drafting tasks that a human will edit, such as meta descriptions, outlines, and first drafts of support content.
  • Repetitive admin work, including documentation, tagging, and reporting templates.
  • Technical helper tasks, like formulas, regex, and scripts—as long as a human reviews the output.

I would not fully automate:

  • Strategy: Deciding what matters and why.
  • Positioning: The angle that gives your brand a clear point of view.
  • Final customer-facing messaging: Especially anything that represents your voice and level of care.
  • Claims that require evidence: If you can’t prove it, don’t publish it.
  • Client relationships: The conversations, reassurance, and trust-building that keep people with you.

If you automate those, you may increase output, but you’ll often decrease loyalty. And loyalty is a form of authority.

The real risk is not AI. It is thoughtlessness.

The biggest risk isn’t that AI will take your job. It’s that you use it in a way that makes you replaceable.

If your brand turns into a machine that churns out generic output, it becomes hard to care.

  • Hard for search engines to prioritize.
  • Hard for language models to cite.
  • Hard for clients to justify paying for.

If you want to build authority, you have to protect what makes you different. Your judgment. Your experience. Your voice. Your evidence. Your relationships.

AI can help if you use it to create space for better thinking. It can hurt if you use it to avoid thinking altogether.

Human involvement

It’s easy to get excited about AI doing everything. Saving on headcount. Producing output 24/7. Removing bottlenecks.

But the more important question is what you lose when you remove too much human involvement. Do you lose:

  • Differentiation?
  • Trust?
  • The ability to think critically?
  • The relationships that keep clients loyal?

For most of us, the goal isn’t more marketing. The goal is marketing that works — for people we actually want to work with — in a way we can be proud of.

So yes, ask, “Can AI do this?” It’s a useful question.

Then ask, “Should AI do this?” That’s the one that protects your authority.

And if you’re unsure, start small. Automate the grunt work. Keep the thinking. Keep the voice. Keep the care.

That’s how you get the best of AI without automating away what makes you valuable.

How first-party data drives better outcomes in AI-powered advertising

5 February 2026 at 17:00

As AI-driven bidding and automation transform paid media, first-party data has become the most powerful lever advertisers control.

In this conversation with Search Engine Land, Julie Warneke, founder and CEO of Found Search Marketing, explained why first-party data now underpins profitable advertising — no matter how Google’s position on third-party cookies evolves.

What first-party data really is — and isn’t

First-party data is customer information that an advertiser owns directly, usually housed in a CRM. It includes:

  • Lead details.
  • Purchase history.
  • Revenue.
  • Customer value collected through websites, forms, or physical locations.

It doesn’t include platform-owned or browser-based data that advertisers can’t fully control.

Why first-party data matters more than ever

Digital advertising has moved from paying for impressions, to clicks, to actions — and now to outcomes. The real goal is no longer conversions alone, but profitable conversions, according to Warneke.

As AI systems process far more signals than humans can handle, advertisers who supply high-quality customer data gain a clear advantage.

CPCs may rise — but profitability can too

Rising CPCs are a fact of paid media. First-party data doesn’t always reduce CPCs, but it improves what matters more: conversion quality, revenue, and return on ad spend.

By optimizing for downstream business outcomes instead of surface-level metrics, advertisers can justify higher costs with stronger results.

How first-party data improves ROAS

When advertisers feed Google data tied to revenue and customer value, AI bidding systems can prioritize users who resemble high-value customers — often using signals far beyond demographics or geography.

The result is traffic that converts better, even if advertisers never see or control the underlying signals.

Performance Max leads the way

Among campaign types, Performance Max (PMax) currently benefits the most from first-party data activation.

PMax performs best when advertisers move away from manual optimizations and instead focus on supplying accurate, consistent data, then let the system learn, Warneke noted.

SMBs aren’t locked out — but they need the right setup

Small and mid-sized businesses aren’t disadvantaged by limited first-party data volume. Warneke shared examples of success with customer lists as small as 100 records.

The real hurdle for SMBs is infrastructure — specifically proper tracking, consent management, and reliable data pipelines.

The biggest mistakes advertisers are making

Two issues stand out:

  • Weak data capture: Many brands still depend on browser-side tracking, which increasingly fails — especially on iOS.
  • Broken feedback loops: Others upload CRM data sporadically instead of building continuous data flows that let AI systems learn and improve over time.
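
Operationally, a broken feedback loop is often just a missing scheduled job. Below is a minimal Python sketch of a daily CRM-to-platform sync; `fetch_closed_deals` and `upload_conversions` are hypothetical placeholders for your CRM export and the ad platform’s supported offline-conversion upload path, not real API calls.

```python
"""Minimal sketch of a continuous CRM-to-ads feedback loop.

Assumptions: fetch_closed_deals() and upload_conversions() are
hypothetical placeholders for your CRM export and the ad platform's
supported offline-conversion upload path. Replace both.
"""
from datetime import datetime, timedelta, timezone


def fetch_closed_deals(since: datetime) -> list[dict]:
    # Placeholder: query the CRM for deals closed after `since`.
    # Each record needs the ad click ID, close time, and revenue.
    return []


def upload_conversions(rows: list[dict]) -> None:
    # Placeholder: hand rows to the platform's official upload tooling.
    print(f"Would upload {len(rows)} conversions")


def daily_sync() -> None:
    # Pull yesterday's closed deals so the system learns continuously
    # instead of from sporadic bulk uploads.
    since = datetime.now(timezone.utc) - timedelta(days=1)
    rows = [
        {
            "click_id": deal["gclid"],
            "conversion_time": deal["closed_at"],
            "value": deal["revenue"],
        }
        for deal in fetch_closed_deals(since)
        if deal.get("gclid")  # skip rows you can't join back to a click
    ]
    upload_conversions(rows)


if __name__ == "__main__":
    daily_sync()  # run once a day from cron or a scheduler
```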

What marketers should do next

Warneke’s advice: Step back and audit how data is captured, stored, and sent back to platforms, then improve it incrementally.

There’s no need to overhaul everything at once or risk the entire budget. Even testing with 5–7% of spend can create a learning roadmap that delivers long-term gains.

Bottom line

AI optimizes toward the signals it receives — good or bad. Advertisers who own and refine their first-party data can shape outcomes in their favor, while those who don’t risk being optimized into inefficiency.


Google Ads tightens access control with multi-party approval

4 February 2026 at 20:01

Google Ads introduced multi-party approval, a security feature that requires a second administrator to approve high-risk account actions. These actions include adding or removing users and changing user roles.

Why we care. As ad accounts grow in size and value, access control becomes a serious risk. One unauthorized, malicious, or accidental change can disrupt campaigns, permissions, or billing in minutes. Multi-party approval reduces that risk by requiring a second admin to approve high-impact actions. It adds strong protection without slowing daily work. For agencies and large teams, it prevents costly mistakes and significantly improves account security.

How it works. When an admin initiates a sensitive change, Google Ads automatically creates an approval request. Other eligible admins receive an in-product notification. One of them must approve or deny the request within 20 days. If no one responds, the request expires, and the change is blocked.

Status tracking. Each request is clearly labeled as Complete, Denied, or Expired. This makes it easy to see what was approved and what didn’t go through.

Where to find it. You can view and manage approval requests from Access and security within the Admin menu.

The bigger picture. The update reflects growing concern around account security, especially for agencies and large advertisers managing multiple users, partners, and permissions. With advertisers recently reporting costly hacks, this is a welcome update.

The Google Ads help doc. About Multi-party approval for Google Ads

In Google Ads automation, everything is a signal in 2026

4 February 2026 at 20:00

In 2015, PPC was a game of direct control. You told Google exactly which keywords to target, set manual bids at the keyword level, and capped spend with a daily budget. If you were good with spreadsheets and understood match types, you could build and manage 30,000-keyword accounts all day long.

Those days are gone.

In 2026, platform automation is no longer a helpful assistant. It’s the primary driver of performance. Fighting that reality is a losing battle. 

Automation has leveled the playing field and, in many cases, given PPC marketers back their time. But staying effective now requires a different skill set: understanding how automated systems learn and how your data shapes their decisions.

This article breaks down how signals actually work inside Google Ads, how to identify and protect high-quality signals, and how to prevent automation from drifting into the wrong pockets of performance.

Automation runs on signals, not settings

Google’s automation isn’t a black box where you drop in a budget and hope for the best. It’s a learning system that gets smarter based on the signals you provide. 

Feed it strong, accurate signals, and it will outperform any manual approach.

Feed it poor or misleading data, and it will efficiently automate failure.

That’s the real dividing line in modern PPC. AI and automation run on signals. If a system can observe, measure, or infer something, it can use it to guide bidding and targeting.

Google’s official documentation still frames “audience signals” primarily as the segments advertisers manually add to products like Performance Max or Demand Gen. 

That definition isn’t wrong, but it’s incomplete. It reflects a legacy, surface-level view of inputs and not how automation actually learns at scale.

Dig deeper: Google Ads PMax: The truth about audience signals and search themes

What actually qualifies as a signal?

In practice, every element inside a Google Ads account functions as a signal. 

Structure, assets, budgets, pacing, conversion quality, landing page behavior, feed health, and real-time query patterns all shape how the AI interprets intent and decides where your money goes. 

Nothing is neutral. Everything contributes to the model’s understanding of who you want, who you don’t, and what outcomes you value.

So when we talk about “signals,” we’re not just talking about first-party data or demographic targeting. 

We’re talking about the full ecosystem of behavioral, structural, and quality indicators that guide the algorithm’s decision-making.

Here’s what actually matters:

  • Conversion actions and values: These are 100% necessary. They tell Google Ads what defines success for your specific business and which outcomes carry the most weight for your bottom line.
  • Keyword signals: These indicate search intent. Based on research shared by Brad Geddes at a recent Paid Search Association webinar, even “low-volume” keywords serve as vital signals. They help the system understand the semantic neighborhood of your target audience.
  • Ad creative signals: This goes beyond RSA word choice. I believe the platform now analyzes the environment within your images. If you show a luxury kitchen, the algorithm identifies those visual cues to find high-end customers. I base this hypothesis on my experience running a YouTube channel. I’ve watched how the algorithm serves content based on visual environments, not just metadata.
  • Landing page signals: Beyond copy, elements like color palettes, imagery, and engagement metrics signal how well your destination aligns with the user’s initial intent. This creates a feedback loop that tells Google whether the promise of the ad was kept.
  • Bid strategies and budgets: Your bidding strategy is another core signal for the AI. It tells the system whether you’re prioritizing efficiency, volume, or raw profit. Your budget signals your level of market commitment. It tells the system how much permission it has to explore and test.

In 2026, we’ve moved beyond the daily cap mindset. With the expansion of campaign total budgets to Search and Shopping, we are now signaling a total commitment window to Google.

In the announcement, UK retailer Escentual.com used this approach to signal a fixed promotional budget, resulting in a 16% traffic lift because the AI was given permission to pace spend based on real-time demand rather than arbitrary 24-hour cycles.

All of these elements function as signals because they actively shape the ad account’s learning environment.

Anything the ad platform can observe, measure, or infer becomes part of how it predicts intent, evaluates quality, and allocates budget. 

If a component influences who sees your ads, how they behave, or what outcomes the algorithm optimizes toward, it functions as a signal.

The auction-time reality: Finding the pockets

To understand why signal quality has become critical, you need to understand what’s actually happening every time someone searches.

Google’s auction-time bidding doesn’t set one bid for “mobile users in New York.” 

It calculates a unique bid for every single auction based on billions of signal combinations at that precise millisecond. This considers the user, not simply the keyword.

We are no longer looking for “black-and-white” performance.

Instead, we are finding pockets of performance: users predicted to complete the outcomes we define as goals in the platform.

The AI evaluates the specific intersection of a user on iOS 17, using Chrome, in London, at 8 p.m., who previously visited your pricing page. 

Because the bidding algorithm cross-references these attributes, it generates a precise bid. This level of granularity is impossible for humans to replicate. 

But this is also the “garbage in, garbage out” reality. Without quality signals, the system is forced to guess.

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

The signal hierarchy: What Google actually listens to

If every element in a Google Ads account functions as a signal, we also have to acknowledge that not all signals carry equal weight.

Some signals shape the core of the model’s learning. Others simply refine it.

Based on my experience managing accounts spending six and seven figures monthly, this is the hierarchy that actually matters.

Conversion signals reign supreme

Your tracking is the most important data point. The algorithm needs a baseline of 30 to 50 conversions per month to recognize patterns. For B2B advertisers, this often requires shifting from high-funnel form fills to down-funnel CRM data.

As Andrea Cruz noted in her deep dive on Performance Max for B2B, optimizing for a “qualified lead” or “appointment booked” is the only way to ensure the AI doesn’t just chase cheap, irrelevant clicks.

Enhanced conversions and first-party data

We are witnessing a “death by a thousand cuts,” where browser restrictions from Safari and Firefox, coupled with aggressive global regulations, have dismantled the third-party cookie. 

Without enhanced conversions or server-side tracking, you are essentially flying blind, because the invisible trackers of the past are being replaced by a model where data must be earned through transparent value exchanges.
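
For illustration, here’s a minimal Python sketch of the normalize-then-hash step that hashed uploads commonly require. The exact normalization rules vary by platform and field, so treat this as a simplified assumption and verify against the current docs.

```python
"""Sketch: normalize-then-hash step for sending user-provided data.

The commonly documented scheme is to normalize identifiers and hash
them with SHA-256 before upload. Exact normalization rules vary by
platform and field, so verify against current docs.
"""
import hashlib


def normalize_email(email: str) -> str:
    # Minimal normalization: trim whitespace and lowercase.
    return email.strip().lower()


def hash_identifier(value: str) -> str:
    # SHA-256 hex digest, the format hashed uploads typically require.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()


print(hash_identifier(normalize_email("  Jane.Doe@Example.com ")))
```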

First-party audience signals

Your customer lists tell Google, “Here is who converted. Now go find more people like this.” 

Quality trumps quantity here. A stale or tiny list won’t be as effective as a list that is updated in real time.

Custom segments provide context

Using keywords and URLs to build segments creates a digital footprint of your ideal customer. 

This is especially critical in niche industries where Google’s prebuilt audiences are too broad or too generic.

These segments help the system understand the neighborhood your best prospects live in online.

To simplify this hierarchy, I’ve mapped out the most common signals used in 2026 by their actual weight in the bidding engine:

  • Primary (Truth): Offline conversion imports (CRM). Weight/impact: Critical. Trains the AI on profit, not just “leads.”
  • Primary (Truth): Value-based bidding (tROAS). Weight/impact: Critical. Signals which products actually drive margin.
  • Secondary (Context): First-party customer match lists. Weight/impact: High. Provides a “seed audience” for the AI to model.
  • Secondary (Context): Visual environment (images/video). Weight/impact: High. AI scans images to infer user “lifestyle” and price tier.
  • Tertiary (Intent): Low-volume/long-tail keywords. Weight/impact: Medium. Defines the “semantic neighborhood” of the search.
  • Tertiary (Intent): Landing page color and speed. Weight/impact: Medium. Signals trust and relevance feedback loops.
  • Pollutant (Noise): “Soft” conversions (scrolls/clicks). Weight/impact: Negative. Dilutes intent and trains the AI to find “cheap clickers.”

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Beware of signal pollution

Signal pollution occurs when low-quality, conflicting, or misleading signals contaminate the data Google’s AI uses to learn. 

It’s what happens when the system receives signals that don’t accurately represent your ideal client, your real conversion quality, or the true intent you want to attract in your ad campaigns.

Signal pollution doesn’t just “confuse” the bidding algorithm. It actively trains it in the wrong direction. 

It dilutes your high-value signals, expands your reach into low-intent audiences, and forces the model to optimize toward outcomes you don’t actually want.

Common sources include:

  • Bad conversion data, including junk leads, unqualified form fills, and misfires.
  • Overly broad structures that blend high- and low-intent traffic.
  • Creative that attracts the wrong people.
  • Landing page behavior that signals low relevance or low trust.
  • Budget or pacing patterns that imply you’re willing to pay for volume over quality.
  • Feed issues that distort product relevance.
  • Audience segments that don’t match your real buyer.

These sources create the initial pollution. But when marketers try to compensate for underperformance by feeding the machine more data, the root cause never gets addressed. 

That’s when soft conversions like scrolls or downloads get added as primary signals, and none of them correlate to revenue.

Like humans, algorithms focus on the metrics they are fed.

If you mix soft signals with high-intent revenue data, you dilute the profile of your ideal customer. 

You end up winning thousands of cheap, low-value auctions that look great in a report but fail to move the needle on the P&L. 

Your job is to be the gatekeeper, ensuring only the most profitable signals reach the bidding engine.
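
One way to play gatekeeper is to check how each conversion action actually tracks revenue over time. The Python sketch below uses illustrative weekly numbers and an arbitrary 0.5 correlation threshold; actions that don’t move with revenue are candidates to demote from primary status.

```python
"""Sketch: flag "soft" conversion actions that don't track revenue.

Weekly counts per conversion action and weekly revenue are
illustrative; in practice, pull them from platform reports and the
CRM. The 0.5 threshold is an arbitrary starting point.
"""
from statistics import correlation  # Python 3.10+

weekly = {
    "revenue":      [12000, 9500, 14100, 8800, 15200, 10100],
    "purchase":     [40, 31, 47, 29, 52, 33],
    "scroll_90pct": [900, 1100, 870, 1200, 840, 1150],
    "pdf_download": [55, 60, 52, 61, 50, 63],
}

for action, counts in weekly.items():
    if action == "revenue":
        continue
    r = correlation(counts, weekly["revenue"])
    verdict = "keep as primary" if r > 0.5 else "demote: likely noise"
    print(f"{action:>12}: r = {r:+.2f} -> {verdict}")
```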

When signal pollution takes hold, the algorithm doesn’t just underperform. The ads start drifting toward the wrong users, and performance begins to decline. 

Before you can build a strong signal strategy, you have to understand how to spot that drift early and correct it before it compounds.

How to detect and correct algorithm drift

Algorithm drift happens when Google’s automation starts optimizing toward the wrong outcomes because the signals it’s receiving no longer match your real advertising goals. 

Drift doesn’t show up as a dramatic crash. It shows up as a slow shift in who you reach, what queries you win, and which conversions the system prioritizes. It looks like a gradual deterioration of lead quality.

To stay in control, you need a simple way to spot drift early and correct it before the machine locks in the wrong pattern.

Early warning signs of drift include:

  • A sudden rise in cheap conversions that don’t correlate with revenue.
  • A shift in search terms toward lower-intent or irrelevant queries.
  • A drop in average order value or lead quality.
  • A spike in new-user volume with no matching lift in sales.
  • A campaign that looks healthy in-platform but feels wrong in the CRM or P&L.

These are all indicators that the system is optimizing toward the wrong signals.
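
A lightweight weekly check can catch the pattern before it compounds. The sketch below uses illustrative aggregates and arbitrary 10% thresholds to flag the classic drift signature: conversion volume up, revenue per conversion down.

```python
"""Sketch: weekly drift check against CRM truth.

Aggregates and the 10% thresholds are illustrative. The signature to
catch: platform conversions rising while revenue per conversion falls.
"""

def pct_change(prev: float, curr: float) -> float:
    return (curr - prev) / prev

# (platform conversions, CRM revenue) per week, illustrative.
weeks = [(120, 60000), (135, 59000), (160, 56000), (190, 52000)]

for (c0, r0), (c1, r1) in zip(weeks, weeks[1:]):
    conv_delta = pct_change(c0, c1)
    value_delta = pct_change(r0 / c0, r1 / c1)
    if conv_delta > 0.10 and value_delta < -0.10:
        print(f"Drift warning: conversions {conv_delta:+.0%}, "
              f"revenue per conversion {value_delta:+.0%}")
```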

To correct drift without resetting learning:

  • Tighten your conversion signals: Remove soft conversions, misfires, or anything that doesn’t map to revenue. The machine can’t unlearn bad data, but you can stop feeding it.
  • Reinforce the right audience patterns: Upload fresh customer lists, refresh custom segments, and remove stale data. Drift often comes from outdated or diluted audience signals.
  • Adjust structure to isolate intent: If a campaign blends high- and low-intent traffic, split it. Give the ad platform a cleaner environment to relearn the right patterns.
  • Refresh creative to repel the wrong users: Creative is a signal. If the wrong people are clicking, your ads are attracting them. Update imagery, language, and value props to realign intent.
  • Let the system stabilize before making another change: After a correction, give the campaign 5-10 days to settle. Overcorrecting creates more drift.

Your job isn’t to fight automation in Google Ads; it’s to guide it.

Drift happens when the machine is left unsupervised with weak or conflicting signals. Strong signal hygiene keeps the system aligned with your real business outcomes.

Once you can detect drift and correct it quickly, you’re finally in a position to build a signal strategy that compounds over time instead of constantly resetting.

The next step is structuring your ad account so every signal reinforces the outcomes you actually want.

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

Building a signal strategy that actually works in 2026

If you want to build a signal strategy that becomes a competitive advantage, you have to start with the foundations.

For lead gen

Implement offline conversion imports. The difference between optimizing for a “form fill” and a “$50K closed deal” is the difference between wasting budget and growing a business. 

When “journey-aware bidding” eventually rolls out, it will be a game-changer because we can feed more data about the individual steps of a sale.
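
As a sketch of what that import can look like, the Python below writes a CSV using the column headings Google commonly documents for offline conversion uploads; verify the exact format against the current help docs, and note the CRM rows are illustrative.

```python
"""Sketch: build an offline-conversion CSV from closed CRM deals.

Column headings follow the upload template Google commonly documents;
verify the exact format against the current help docs. The `deals`
rows are illustrative.
"""
import csv

deals = [
    {"gclid": "EXAMPLE_CLICK_ID", "closed_at": "2026-02-01 14:30:00",
     "amount": 50000},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name",
                     "Conversion Time", "Conversion Value",
                     "Conversion Currency"])
    for d in deals:
        writer.writerow([d["gclid"], "Closed Deal", d["closed_at"],
                         d["amount"], "USD"])
```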

For ecommerce

Use value-based bidding. Don’t just count conversions. Differentiate between a customer buying a $20 accessory and one buying a $500 hero product.
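
A minimal sketch of that idea: send margin-adjusted values rather than raw prices, so tROAS optimizes toward profit. The margin figures here are assumptions.

```python
"""Sketch: report margin-adjusted value, not price, as the
conversion value. Margin figures are illustrative assumptions.
"""
MARGINS = {"hero_product": 0.45, "accessory": 0.20}

def conversion_value(sku_type: str, price: float) -> float:
    # Send profit to the bidder so tROAS optimizes toward margin.
    return round(price * MARGINS[sku_type], 2)

print(conversion_value("hero_product", 500.0))  # 225.0
print(conversion_value("accessory", 20.0))      # 4.0
```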

Segment your data

Don’t just dump everyone into one list. A list of 5,000 recent purchasers is worth far more than 50,000 people who visited your homepage two years ago. 

Stale data hurts performance by teaching the algorithm to find people who matched your business 18 months ago, not today.
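
A simple recency filter before each upload keeps the list fresh. In the sketch below, the 180-day window and the customer records are illustrative assumptions to tune per business.

```python
"""Sketch: recency-filter a customer list before upload.

The 180-day window and the records are illustrative assumptions.
"""
from datetime import date, timedelta

customers = [
    {"email": "a@example.com", "last_purchase": date(2026, 1, 20)},
    {"email": "b@example.com", "last_purchase": date(2024, 6, 2)},
]

cutoff = date.today() - timedelta(days=180)
fresh = [c for c in customers if c["last_purchase"] >= cutoff]
print(f"{len(fresh)} of {len(customers)} records are fresh enough to upload")
```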

Separate brand and nonbrand campaigns

Brand traffic carries radically different intent and conversion rates than nonbrand. 

Mixing these campaigns forces the algorithm to average two incompatible behaviors, which muddies your signals and inflates your ROAS expectations. 

Brand should be isolated so it doesn’t subsidize poor nonbrand performance or distort bidding decisions in the ad platform.

Don’t mix high-ticket and low-ticket products under one ROAS target

A $600 product and a $20 product do not behave the same in auction-time bidding. 

When you put them in the same campaign with a single 4x ROAS target, the algorithm will get confused. 

This trains the system away from your hero products and toward low-value volume.
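
The arithmetic makes the problem concrete. With illustrative prices, the sketch below shows the maximum cost per sale a single 4x target tolerates for each product; the bidder can hit its blended target on accessory volume alone.

```python
"""Worked example: what a single blended 4x ROAS target tolerates.

Prices are illustrative. The bidder can hit a blended 4x on cheap
accessory volume alone, starving the hero product of spend.
"""
TARGET_ROAS = 4.0

for name, price in [("hero product", 600.0), ("accessory", 20.0)]:
    max_cpa = price / TARGET_ROAS  # max spend per sale at target
    print(f"{name}: max tolerable cost per sale = ${max_cpa:.2f}")

# hero product: $150.00; accessory: $5.00. Blended reporting hides
# that "hitting 4x" can mean nothing but low-value accessory sales.
```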

Centralize campaigns for data density, but only when the data belongs together

Google’s automation performs best when it has enough consistent, high-quality data to recognize patterns. That means fewer, stronger campaigns are better, as long as the signals inside them are aligned.

Centralize campaigns when products share similar price points, margins, audiences, and intent. Decentralize campaigns when mixing them would pollute the signal pool.

The competitive advantage of 2026

When everyone has access to the same automation, the only real advantage left is the quality of the signals you feed it. 

Your job is to protect those signals, diagnose pollution early, and correct drift before the system locks onto the wrong patterns.

Once you build a deliberate signal strategy, Google’s automation stops being a constraint and becomes leverage. You stay in the loop, and the machine does the heavy lifting.

Anthropic says Claude will remain ad-free as ChatGPT tests ads

4 February 2026 at 19:53

Anthropic is drawing the line against advertising in AI chatbots. Claude will remain ad-free, the company said, even as rival AI platforms experiment with sponsored messages and branded placements inside conversations.

  • Ads inside AI chats would erode trust, warp incentives, and clash with how people actually use assistants like Claude (for work, problem-solving, and sensitive topics), Anthropic said in a new blog post.

Why we care. Anthropic’s position removes Claude, and its user base of 30 million, from the AI advertising equation. Brands shouldn’t expect sponsored links, conversations, or responses inside Claude. Meanwhile, ChatGPT is about to give brands the opportunity to reach an estimated 800 million weekly users.

What’s happening. AI conversations are fundamentally different from search results or social feeds, where users expect a mix of organic and paid content, Anthropic said:

  • Many Claude interactions involve personal issues, complex technical work, or high-stakes thinking. Dropping ads into those moments would feel intrusive and could quietly influence responses in ways users can’t easily detect.
  • Ad incentives tend to expand over time, gradually optimizing for engagement rather than genuine usefulness.

Incentives matter. This is a business-model decision, not just a product preference, Anthropic said:

  • An ad-free assistant can focus entirely on what helps the user — even if that means a short exchange or no follow-up at all.
  • An ad-supported model, by contrast, creates pressure to surface monetizable moments or keep users engaged longer than necessary.
  • Once ads enter the system, users may start questioning whether recommendations are driven by help or by commerce.

Anthropic isn’t rejecting commerce. Claude will still help users research, compare, and buy products when they ask. The company is also exploring “agentic commerce,” where the AI completes tasks like bookings or purchases on a user’s behalf.

  • Commerce should be triggered by the user, not by advertisers, Anthropic said.
  • The same rule applies to third-party integrations like Figma or Asana. These tools will remain user-directed, not sponsored.

Super Bowl ad. Anthropic is making the argument publicly and aggressively. In a Super Bowl debut, the company mocked intrusive AI advertising by inserting fake product pitches into personal conversations. The ad closed with a clear message: “Ads are coming to AI. But not to Claude.”

  • The campaign appears to be a direct shot at OpenAI, which has announced plans to introduce ads into ChatGPT.

Claude’s blog post. Claude is a space to think

OpenAI responds. OpenAI CEO Sam Altman posted some thoughts on X. Some of the highlights:

  • “…I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.
  • “I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it.
  • “Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.
  • “We will continue to work hard to make even more intelligence available for lower and lower prices to our users.”
