Reddit says 80 million people now use its search weekly

Eighty million people use Reddit search every week, Reddit said on its Q4 2025 earnings call last week. The increase followed a major change: Reddit merged its core search with its AI-powered Reddit Answers and began positioning the platform as a place where users can start — and finish — their searches.

  • Executives framed the move as a response to changing behavior. People are increasingly researching products and making decisions by asking questions within communities rather than relying solely on traditional search engines.
  • Reddit is betting it can keep more of that intent on-platform, rather than acting mainly as a source of links for elsewhere.

Why we care. Reddit is becoming a place where people start — and complete — their searches without ever touching Google. For brands, that means visibility on Reddit now matters as much as ranking in traditional and AI search for many buying decisions.

Reddit’s search ambitions. CEO Steve Huffman said Reddit made “significant progress” in Q4 by unifying keyword search with Reddit Answers, its AI-driven Q&A experience. Users can now move between standard search results and AI answers in a single interface, with Answers also appearing directly inside search results.

  • “Reddit is already where people go to find things,” Huffman said, adding the company wants to become an “end-to-end search destination.”
  • More than 80 million people searched Reddit weekly in Q4, up from 60 million a year earlier, as users increasingly come to the platform to research topics — not just scroll feeds or click through from Google.

Reddit Answers is growing. Answers is driving much of that growth: Huffman said Answers queries jumped from about 1 million a year ago to 15 million in Q4, while overall search usage rose sharply in parallel.

  • He said Answers performs best for open-ended questions—what to buy, watch, or try—where people want multiple perspectives instead of a single factual answer. Those queries align naturally with Reddit’s community-driven discussions.
  • Reddit is also expanding Answers beyond text. Huffman said the company is piloting “dynamic agentic search results” that include media formats, signaling a more interactive and immersive search experience ahead.

Search is a ‘big one’ for Reddit. Huffman said the company is testing new app layouts that give search prominent placement, including versions with a large, always-visible search bar at the top of the home screen.

  • COO Jennifer Wong said search and Answers represent a major opportunity, even though monetization remains early on some surfaces.
  • Wong described Reddit search behavior as “incremental and additive” to existing engagement and often tied to high-intent moments, such as researching purchases or comparing options.

AI answers make Reddit more important. Huffman also linked Reddit’s search push to its partnerships with Google and OpenAI. He said Reddit content is now the most-cited source in AI-generated answers, highlighting the platform’s growing influence on how people find information.

  • Reddit sees AI summaries as an opportunity — to move users from AI answers into Reddit communities, where they can read discussions, ask follow-up questions, and participate.
  • If someone asks, “What’s the best speaker?” he said, Reddit wants users to discover not just a summary, but the community where real people are actively debating the topic.

Reddit earnings. Reddit Reports Fourth Quarter and Full Year 2025 Results; Announces $1 Billion Share Repurchase Program

OpenAI starts testing ChatGPT ads

OpenAI confirmed today that it’s rolling out its first live test of ads in ChatGPT, showing sponsored messages directly inside the app for select users.

The details. The ads will appear in a clearly labeled section beneath the chat interface, not inside responses, keeping them visually separate from ChatGPT’s answers.

  • OpenAI will show ads to logged-in users on the free tier and its lower-cost Go subscription.
  • Advertisers won’t see user conversations or influence ChatGPT’s responses, even though ads will be tailored based on what OpenAI believes will be helpful to each user, the company said.

How ads are selected. During the test, OpenAI matches ads to conversation topics, past chats, and prior ad interactions.

  • For example: A user researching recipes might see ads for meal kits or grocery delivery. If multiple advertisers qualify, OpenAI shows the most relevant option first.

User controls. Users get granular controls over the experience. They can dismiss ads, view and delete separate ad history and interest data, and toggle personalization on or off.

  • Turning personalization off limits ads to the current chat.
  • Free users can also opt out of ads in exchange for fewer daily messages or upgrade to a paid plan.

Why we care. ChatGPT is one of the world’s largest consumer AI platforms. Even a limited ad rollout could mark a major shift in how conversational AI gets monetized — and how brands reach users.

Bottom line. OpenAI is officially moving into ads inside ChatGPT, testing how sponsored content can coexist with conversational AI at massive scale.

OpenAI’s announcement. Testing ads in ChatGPT (OpenAI)

Google AI Mode doesn’t favor above-the-fold content: Study

Google’s AI Mode isn’t more likely to cite content that appears “above the fold,” according to a study from SALT.agency, a technical SEO and content agency.

  • After analyzing more than 2,000 URLs cited in AI Mode responses, researchers found no correlation between how high text appears on a page and whether Google’s AI selects it for citation.

Pixel depth doesn’t matter. AI Mode cited text from across entire pages, including content buried thousands of pixels down.

  • Citation depth showed no meaningful relationship to visibility.
  • Average depth varied by vertical, from about 2,400 pixels in travel to 4,600 pixels in SaaS, with many citations far below the traditional “above the fold” area.

Page layout affects depth, not visibility. Templates and design choices influenced how far down the cited text appeared, but not whether it was cited.

  • Pages with large hero images or narrative layouts pushed cited text deeper, while simpler blog or FAQ-style pages surfaced citations earlier.
  • No layout type showed a visibility advantage in AI Mode.

Descriptive subheadings matter. One consistent pattern emerged: AI Mode frequently highlighted a subheading and the sentence that followed it.

  • This suggests Google uses heading structures to navigate content, then samples opening lines to assess relevance, a behavior consistent with long-standing search practices, according to SALT.

What Google is likely doing. SALT believes AI Mode relies on the same fragment indexing technology Google has used for years. Pages are broken into sections, and the most relevant fragment is retrieved regardless of where it appears on the page.

What they’re saying. While the study examined only one structural factor and one AI model, the takeaway is clear: there’s no magic formula for AI Mode visibility. Dan Taylor, partner and head of innovation (organic and AI) at SALT.agency, said:

  • “Our study confirms that there is no magic template or formula for increased visibility in AI Mode responses – and that AI Mode is not more likely to cite text from ‘above the fold.’ Instead, the best approach mirrors what’s worked in search for years: create well-structured, authoritative content that genuinely addresses the needs of your ideal customers.
  • “…the data clearly debunks the idea that where the information sits within a page has an impact on whether it will be cited.”

Why we care. The findings challenge the idea that AI-specific templates or rigid page structures drive better AI Mode visibility. Chasing “AI-optimized” layouts may distract from work that actually matters.

About the research. SALT analyzed 2,318 unique URLs cited in AI Mode responses for high-value queries across travel, ecommerce, and SaaS. Using a Chrome bookmarklet and a 1920×1080 viewport, researchers recorded the vertical pixel position of the first highlighted character in each AI-cited fragment. They also cataloged layouts and elements, such as hero sections, FAQs, accordions, and tables of contents.

The study. Research: Does Structuring Your Content Improve the Chances of AI Mode Surfacing?

A preview of ChatGPT’s ad controls just surfaced

A newly discovered settings panel offers a first detailed look at how ads could work inside ChatGPT, including how personalization and privacy controls are designed.

Driving the news. Entrepreneur Juozas Kaziukėnas found a way to trigger ChatGPT’s upcoming ad settings interface. The panel repeatedly stresses that advertisers won’t see user chats, history, memories, personal details, or IP addresses.

What the settings reveal. The interface lays out a structured ad system with dedicated controls:

  • A history tab logs ads users have seen in ChatGPT.
  • An interests tab stores inferred preferences based on ad interactions and feedback.
  • Each ad includes options to hide or report it.
  • Users can delete ad history and interests separately from their general ChatGPT data.

Personalization options. Users can turn ad personalization on or off. When it’s on, ChatGPT uses saved ad history and interest signals to tailor ads. When it’s off, ads still appear but rely only on the current conversation for context.

  • There’s also an option to personalize ads using past conversations and memory features, though the interface stresses that chat content isn’t shared with advertisers. Accounts with memory disabled won’t see this option active.

Why we care. This settings panel offers the clearest view yet of how ad personalization and privacy controls could work with ChatGPT ads. It points to a system built on strict privacy boundaries. The controls suggest ads will rely on contextual signals and opt-in personalization, not deep user tracking. That shift makes creative relevance and in-conversation intent more important than traditional audience profiling for brands preparing for conversational ad environments.

The bigger picture. The discovery suggests OpenAI is building an ad system that mirrors familiar controls from major ad platforms while prioritizing clear privacy boundaries and user choice.

Bottom line. ChatGPT ads aren’t live yet, but the framework is coming into focus — pointing to a future where conversational ads come with granular privacy and personalization controls.

First seen. Kaziukėnas shared the preview of the platform on LinkedIn.

What Google and Microsoft patents teach us about GEO

Generative engine optimization (GEO) represents a shift from optimizing for keyword-based ranking systems to optimizing for how generative search engines interpret and assemble information. 

While the inner workings of generative AI are famously complex, patents and research papers filed by major tech companies such as Google and Microsoft provide concrete insight into the technical mechanisms underlying generative search. By analyzing these primary sources, we can move beyond speculation and into strategic action.

This article analyzes the most insightful patents to provide actionable lessons for three core pillars of GEO: query fan-out, large language model (LLM) readability, and brand context.

Why researching patents is so important for learning GEO

Patents and research papers are primary, evidence-based sources that reveal how AI search systems actually work. The knowledge gained from these sources can be used to draw concrete conclusions about how to optimize these systems. This is essential in the early stages of a new discipline such as GEO.

Patents and research papers reveal technical mechanisms and design intent. They often describe retrieval architectures, such as: 

  • Passage retrieval and ranking.
  • Retrieval-augmented generation (RAG) workflows.
  • Query processing, including query fan-out, grounding, and other components that determine which content passages LLM-based systems retrieve and cite. 

Knowing these mechanisms explains why LLM readability, chunk relevance, and brand and context signals matter.
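
To make that concrete, here’s a toy sketch of passage retrieval and ranking, the mechanism these documents describe. It’s illustrative only: simple term overlap stands in for the learned relevance models, and no vendor’s actual system is shown.

# Toy passage retrieval and ranking (illustrative only).
def chunk(doc, size=50):
    # Split a document into fixed-size word chunks (passages).
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    # Fraction of query terms the passage covers (stand-in for a learned model).
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q)

def retrieve(query, docs, top_k=3):
    passages = [c for d in docs for c in chunk(d)]
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_k]

# In a RAG workflow, the retrieved passages would then ground the LLM's answer.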

Primary sources reduce reliance on hype and checklists. Secondary sources, such as blogs and lists, can be misleading. Patents and research papers let you verify claims and separate evidence-based tactics from marketing-driven advice.

Patents enable hypothesis-driven optimization. Understanding the technical details helps you form testable hypotheses, such as how content structure, chunking, or metadata might affect retrieval, ranking, and citation, and design small-scale experiments to validate them.

In short, patents and research papers provide the technical grounding needed to:

  • Understand why specific GEO tactics might work.
  • Test and systematize those tactics.
  • Avoid wasting effort on unproven advice.

This makes them a central resource for learning and practicing generative engine optimization and SEO. 

That’s why I’ve been researching patents for more than 10 years and founded the SEO Research Suite, the first database for GEO- and SEO-related patents and research papers.

Why we need to differentiate when talking about GEO

In many discussions about generative engine optimization, too little distinction is made between the different goals that GEO can pursue.

One goal is improving the citability of your content so LLMs cite it more often as the source. I refer to this as LLM readability optimization.

Another goal is brand positioning for LLMs, so a brand is mentioned more often by name. I refer to this as brand context optimization.

Each of these goals relies on different optimization strategies. That’s why they must be considered separately.

The three foundational pillars of GEO

Understanding the following three concepts is strategically critical. 

These pillars represent fundamental shifts in how machines interpret queries, process content, and understand brands, forming the foundation for advanced GEO strategies. 

They are the new rules of digital information retrieval.

LLM readability: Crafting content for AI consumption

LLM readability is the practice of optimizing content so it can be effectively processed, deconstructed, and synthesized by LLMs. 

It goes beyond human readability and includes technical factors such as: 

  • Natural language quality.
  • Logical document structure.
  • A clear information hierarchy.
  • The relevance of individual text passages, often referred to as chunks or nuggets.

Brand context: Building a cohesive digital identity

Brand context optimization moves beyond page-level optimization to focus on how AI systems synthesize information across an entire web domain. 

The goal is to build a holistic, unified characterization of a brand. This involves ensuring your overall digital presence tells a consistent and coherent story that an AI system can easily interpret.

Query fan-out: Deconstructing user intent

Query fan-out is the process by which a generative engine deconstructs a user’s initial, often ambiguous query into multiple specific subqueries, themes, or intents. 

This allows the system to gather a more comprehensive and relevant set of information from its index before synthesizing a final generated answer.
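
A concrete illustration (the query and subqueries are invented for this example):

# Hypothetical fan-out for one ambiguous query (subqueries invented for illustration).
subqueries = {
    "best running shoes": [
        "best running shoes for beginners",
        "best running shoes for marathon training",
        "running shoe cushioning vs. stability",
        "top-rated running shoe brands",
    ],
}
# The engine retrieves passages for each subquery, then synthesizes
# one answer from the merged results.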

These three pillars are not theoretical. They are actively being built into the architecture of modern search, as the following patents and research papers reveal.

Patent deep dive: How generative engines understand user queries (query fan-out)

Before a generative engine can answer a question, it must first develop a clear understanding of the user’s true intent. 

The patents below describe a multi-step process designed to deconstruct ambiguity, explore topics comprehensively, and ensure the final answer aligns with a confirmed user goal rather than the initial keywords alone.

Microsoft’s ‘Deep search using large language models’: From ambiguous query to primary intent

Microsoft’s “Deep search using large language models” patent (US20250321968A1) outlines a system that prioritizes intent by confirming a user’s true goal before delivering highly relevant results. 

Instead of treating an ambiguous query as a single event, the system transforms it into a structured investigation.

The process unfolds across several key stages:

  • Initial query and grounding: The system performs a standard web search using the original query to gather context and a set of grounding results.
  • Intent generation: A first LLM analyzes the query and the grounding results to generate multiple likely intents. For a query such as “how do points systems work in Japan,” the system might generate distinct intents like “immigration points system,” “loyalty points system,” or “traffic points system.”
  • Primary intent selection: The system selects the most probable intent. This can happen automatically, by presenting options to the user for disambiguation, or by using personalization signals such as search history.
  • Alternative query generation: Once a primary intent is confirmed, a second LLM generates more specific alternative queries to explore the topic in depth. For an academic grading intent, this might include queries like “German university grading scale explained.”
  • LLM-based scoring: A final LLM scores each new search result for relevance against the primary intent rather than the original ambiguous query. This ensures only results that precisely match the confirmed goal are ranked highly.
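
To make the flow easier to follow, here is a schematic sketch of the five stages in Python. The llm(), web_search(), and llm_score() functions are hypothetical placeholders, not anything lifted from the patent:

# Schematic of the Deep Search flow (placeholders, not Microsoft's implementation).
def llm(prompt, context=None):
    return ["(model output for: " + prompt + ")"]  # placeholder model call

def web_search(query):
    return ["(result for: " + query + ")"]  # placeholder grounding search

def llm_score(result, intent):
    return 0.0  # placeholder: relevance of a result to the confirmed intent

def deep_search(query):
    grounding = web_search(query)                                   # 1. initial query and grounding
    intents = llm("List likely intents for: " + query, grounding)   # 2. intent generation
    primary = intents[0]                                            # 3. primary intent selection
    alt_queries = llm("Write specific queries for: " + primary)     # 4. alternative query generation
    results = [r for q in alt_queries for r in web_search(q)]
    return sorted(results, key=lambda r: llm_score(r, primary), reverse=True)  # 5. LLM-based scoring

print(deep_search("how do points systems work in japan"))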

The key insight from this patent is that search is evolving into a system that resolves ambiguity first. 

Final results are tailored to a user’s specific, confirmed goal, representing a fundamental departure from traditional keyword-based ranking.

Google’s ‘thematic search’: Auto-clustering topics from top results

Google’s “thematic search” patent (US12158907B1) provides the architectural blueprint for features such as AI Overviews. The system is designed to automatically identify and organize the most important subtopics related to a query. 

It analyzes top-ranked documents, uses an LLM to generate short summary descriptions of individual passages, and then clusters those summaries to identify common themes.

The direct implication is a shift from a simple list of links to a guided exploration of a topic’s most important facets. 

This process organizes information for users and allows the engine to identify which themes consistently appear across top-ranking documents, forming a foundational layer for establishing topical consensus.
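
As a toy illustration of the mechanism, the sketch below groups passage summaries by a shared keyword; crude keyword matching stands in for the LLM summarization and clustering the patent actually describes.

# Toy theme clustering: group passage summaries that share a long keyword.
from collections import defaultdict

summaries = [
    "rayleigh scattering explains the sky's color",
    "scattering of short wavelengths by the atmosphere",
    "sunsets appear red at low sun angles",
]

clusters = defaultdict(list)
for summary in summaries:
    for word in summary.split():
        if len(word) >= 10:  # crude stand-in for a real theme label
            clusters[word].append(summary)

for theme, members in clusters.items():
    print(theme, "->", len(members), "passages")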

Google’s ‘stateful chat’: Generating queries from conversation history

The concept of synthetic queries in Google’s “Search with stateful chat” patent (US20240289407A1) reveals another layer of intent understanding. 

The system generates new, relevant queries based on a user’s entire session history rather than just the most recent input. 

By maintaining a stateful memory of the conversation, the engine can predict logical next steps and suggest follow-up queries that build on previous interactions.

The key takeaway is that queries are no longer isolated events. Instead, they’re becoming part of a continuous, context-aware dialogue. 

This evolution requires content to do more than answer a single question. It must also fit logically within a broader user journey.

Patent deep dive: Crafting content for AI processing (LLM readability)

Once a generative engine has disambiguated user intent and fanned out the query, its next challenge is to find and evaluate content chunks that can precisely answer those subqueries. This is where machine readability becomes critical. 

The following patents and research papers show how engines evaluate content at a granular, passage-by-passage level, rewarding clarity, structure, and factual density.

The ‘nugget’ philosophy: Deconstructing content into atomic facts

The GINGER research paper introduces a methodology for improving the factual accuracy of AI-generated responses. Its core concept involves breaking retrieved text passages into minimal, verifiable information units, referred to as nuggets.

By deconstructing complex information into atomic facts, the system can more easily trace each statement back to its source, ensuring every component of the final answer is grounded and verifiable.

The lesson from this approach is clear: Content should be structured as a collection of self-contained, fact-dense nuggets. 

Each paragraph or statement should focus on a single, provable idea, making it easier for an AI system to extract, verify, and accurately attribute that information.
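
A minimal sketch of the idea, with an invented passage and a hypothetical source URL: each sentence becomes one verifiable unit tied to its source.

# Toy nugget extraction: one verifiable statement per unit, each tied to its source.
import re

passage = ("Rayleigh scattering affects short wavelengths most. "
           "That is why the sky appears blue. The effect weakens at sunset.")

nuggets = [
    {"fact": sentence.strip(), "source": "https://example.com/sky"}  # hypothetical URL
    for sentence in re.split(r"(?<=[.!?])\s+", passage)
    if sentence.strip()
]
print(nuggets[0])  # {'fact': 'Rayleigh scattering affects short wavelengths most.', ...}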

Google’s span selection: Pinpointing the exact answer

Google’s “Selecting answer spans” patent (US11481646B2) describes a system that uses a multilevel neural network to identify and score specific text spans, or chunks, within a document that best answer a given question. 

The system evaluates candidate spans, computes numeric representations based on their relationship to the query, and assigns a final score to select the single most relevant passage.

The key insight is that the relevance of individual paragraphs is evaluated with intense scrutiny. This underscores the importance of content structure, particularly placing a direct, concise answer immediately after a question-style heading. 

The patent provides the technical justification for the answer-first model, a core principle of modern GEO strategy.

The consensus engine: Validating answers with weighted terms

Google’s “Weighted answer terms” patent (US10019513B1) explains how search engines establish a consensus around what constitutes a correct answer.

This patent is closely associated with featured snippets, but the technology Google developed for featured snippets is one of the foundational methodologies behind passage-based retrieval used today by AI search systems to select passages for answers.

The system identifies common question phrases across the web, analyzes the text passages that follow them, and creates a weighted term vector based on terms that appear most frequently in high-quality responses. 

For a query such as “Why is the sky blue?” terms like “Rayleigh scattering” and “atmosphere” receive high weights.

The key lesson is that to be considered an accurate and authoritative source, content must incorporate the consensus terminology used by other expert sources on the topic. 

Deviating too far from this established vocabulary can cause content to be scored poorly for accuracy, even when it is factually correct.
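
A toy version of the mechanism, showing how terms that recur across good answers would earn high weights:

# Toy weighted term vector: terms that recur across good answers get high weights.
from collections import Counter

answers = [
    "the sky is blue because of rayleigh scattering in the atmosphere",
    "rayleigh scattering by the atmosphere scatters blue light most",
    "blue light is scattered by air molecules, an effect called rayleigh scattering",
]

# Count, per term, how many answers contain it (set() deduplicates within an answer).
counts = Counter(word for a in answers for word in set(a.split()))
weights = {term: n / len(answers) for term, n in counts.items()}
print(weights["rayleigh"], weights["atmosphere"])  # 1.0 and ~0.67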

Patent deep dive: Building your brand’s digital DNA (brand context)

While earlier patents focus on the micro level of queries and content chunks, this final piece operates at the macro level. The engine must understand not only what is being said but also who is saying it. 

This is the essence of brand context, representing a shift from optimizing individual pages to projecting a coherent brand identity across an entire domain. 

The following patent shows how AI systems are designed to interpret an entity by synthesizing information from across its full digital presence.

Google’s entity characterization: The website as a single prompt

The methodology described in Google’s “Data extraction using LLMs” patent (WO2025063948A1) outlines a system that treats an entire website as a single input to an LLM. The system scans and interprets content from multiple pages across a domain to generate a single, synthesized characterization of the entity. 

This is not a copy-and-paste summary but a new interpretation of the collected information that is better suited to an intended purpose, such as an ad or summary, while still passing quality checks that verbatim text might fail.

The patent also explains that this characterization is organized into a hierarchical graph structure with parent and leaf nodes, which has direct implications for site architecture:

  • Parent nodes (broad attributes like “Services”): Create broad, high-level “hub” pages for core business categories (e.g., /services/).
  • Leaf nodes (specific details like “Pricing”): Develop specific, granular “spoke” pages for detailed offerings (e.g., /services/emergency-plumbing/).

The key implication is that every page on a website contributes to a single brand narrative.

Inconsistent messaging, conflicting terminology, or unclear value propositions can cause an AI system to generate a fragmented and weak entity characterization, reducing a brand’s authority in the system’s interpretation.

The GEO playbook: Actionable lessons derived from the patents

These technical documents aren’t merely theoretical. They provide a clear, actionable playbook for aligning content and digital strategy with the core mechanics of generative search. The principles revealed in these patents form a direct guide for implementation.

Principle 1: Optimize for disambiguated intent, not just keywords

Based on the “Deep Search” and “Thematic Search” patents, the focus must shift from targeting single keywords to comprehensively answering the specific, disambiguated intents a user may have.

Actionable advice 

  • For a target query, brainstorm the different possible user intents. 
  • Create distinct, highly detailed content sections or separate pages for each one, using clear, question-based headings to signal the specific intent being addressed.

Principle 2: Structure for machine readability and extraction

Synthesizing lessons from the GINGER paper, the “answer spans” patent, and LLM readability guidance, it’s clear that structure is critical for AI processing.

Actionable advice

Apply the following structural rules to your content:

  • Use the answer-first model: Structure content so the direct answer appears immediately after a question-style heading. Follow with explanation, evidence, and context.
  • Write in nuggets: Compose short, self-contained paragraphs, each focused on a single, verifiable idea. This makes each fact easier to extract and attribute.
  • Leverage structured formats: Use lists and tables whenever possible. These formats make data points and comparisons explicit and easily parsable for an LLM.
  • Employ a logical heading hierarchy: Use H1, H2, and H3 tags to create a clear topical map of the document. This hierarchy helps an AI system understand the context and scope of each section.
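
To illustrate the payoff of these rules, here is a minimal sketch of how a clean heading hierarchy lets a machine carve a page into answer-first chunks. The HTML and the parsing are deliberately simplified for illustration.

# Minimal heading-based chunking: each question-style heading plus the text
# that follows it becomes one extractable, answer-first unit.
import re

html = """
<h2>What is query fan-out?</h2>
<p>Query fan-out deconstructs one query into specific subqueries.</p>
<h2>Why does chunk relevance matter?</h2>
<p>Engines score individual passages, not whole pages.</p>
"""

chunks = re.findall(r"<h2>(.*?)</h2>\s*<p>(.*?)</p>", html, re.S)
for heading, answer in chunks:
    print(heading, "->", answer)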

Principle 3: Build a unified and consistent entity narrative

Drawing directly from the “Data extraction using LLMs” patent, domainwide consistency is no longer a nice-to-have. It’s a technical requirement for building a strong brand context.

Actionable advice

  • Conduct a comprehensive content audit. 
  • Ensure mission statements, service descriptions, value propositions, and key terminology are used consistently across every page, from the homepage to blog posts to the site footer.

Principle 4: Speak the language of authoritative consensus

The “Weighted answer terms” patent shows that AI systems validate answers by comparing them against an established consensus vocabulary.

Actionable advice

  • Before writing, analyze current featured snippets, AI Overviews, and top-ranking documents for a given query. 
  • Identify recurring technical terms, specific nouns, and phrases they use. 
  • Incorporate this consensus vocabulary to signal accuracy and authority.

Principle 5: Mirror the machine’s hierarchy in your architecture

The parent-leaf node structure described in the entity characterization patent provides a direct blueprint for effective site architecture.

Actionable advice

  • Design site architecture and internal linking to reflect a logical hierarchy. Broad parent category pages should link to specific leaf detail pages. 
  • This structure makes it easier for an LLM to map brand expertise and build an accurate hierarchical graph.

These five principles aren’t isolated tactics. 

They form a single, integrated strategy in which site architecture reinforces the brand narrative, content structure enables machine extraction, and both align to answer a user’s true, disambiguated intent.

Aligning with the future of information retrieval

Patents and research papers from the world’s leading technology companies offer a clear view of the future of search. 

Generative engine optimization is fundamentally about making information machine-interpretable at two critical levels: 

  • The micro level of the individual fact, or chunk.
  • The macro level of the cohesive brand entity. 

By studying these documents, you can shift from a reactive approach of chasing algorithm updates to a proactive one of building digital assets aligned with the core principles of how generative AI understands, structures, and presents information.

Why GA4 alone can’t measure the real impact of AI SEO

If you’re relying on GA4 alone to measure the impact of AI SEO, you’re navigating with a broken compass.

Don’t misunderstand me. It’s a reasonable launch pad. But to understand how audiences discover, evaluate, and ultimately choose brands, measurement must move beyond the bounds of Google’s tooling.

SEO is a journey, not a destination. If you optimize only for attributable visits, large parts of that journey disappear from view.

Sessions are an outcome. They can’t contextualize consideration sets increasingly shaped by algorithms and AI well before a visit ever happens.

Don’t lose potential customers in the Bermuda Triangle of traditional SEO measurement. Harness the power of share of voice to steer user intent. Guide them to you by mapping your brand visibility in AI analytics.

Measuring AI visits with GA4

Links are becoming more prevalent in AI systems. Traffic is climbing. GA4 makes it easy to set up a custom report to track these sessions.

Create an exploration with “session source / medium” as the dimension and “sessions” as the metric. Then apply this regex filter on the referrer:

.*(chatgpt|openai|claude|gemini|bard|copilot|perplexity|you\.com|meta\.ai|grok|huggingface|deepseek|mistral|manus|alexaplus|edgeservices|poe).*
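
If you’d rather pull the same report programmatically, a sketch using the GA4 Data API’s Python client might look like the following. It assumes the google-analytics-data library, configured credentials, and your own property ID, and it filters session source / medium rather than the raw referrer:

# Sketch: pull AI-referred sessions via the GA4 Data API.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
)

AI_REGEX = r".*(chatgpt|openai|claude|gemini|bard|copilot|perplexity|you\.com|meta\.ai|grok|huggingface|deepseek|mistral|manus|alexaplus|edgeservices|poe).*"

client = BetaAnalyticsDataClient()  # uses application default credentials
response = client.run_report(RunReportRequest(
    property="properties/123456789",  # replace with your GA4 property ID
    dimensions=[Dimension(name="sessionSourceMedium")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="28daysAgo", end_date="today")],
    dimension_filter=FilterExpression(filter=Filter(
        field_name="sessionSourceMedium",
        string_filter=Filter.StringFilter(
            match_type=Filter.StringFilter.MatchType.FULL_REGEXP,
            value=AI_REGEX,
        ),
    )),
))
for row in response.rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)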

Don’t be concerned if the output report is messy. That’s normal. Many AI systems send multiple sets of partial referral information. Some send none at all, so sessions appear as dark traffic.

This report is an easy first step. But don’t be fooled into thinking it can measure the impact of AI on your brand on its own.

The most viewed AI outputs – Google’s AI Overviews and AI Mode – can’t be seen here. They are attributed to either “google / organic” or “(direct) / (none),” depending on how the user accessed Google.

With these limitations, looking only at GA4 traffic from generative AI is not a holistic enough data source to understand the reality of usage by your target audience and the impact on your brand.

Other data sources are needed.

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Google Search Console and Bing Webmaster Tools don’t separate AI queries

Bing Webmaster Tools technically reports Copilot data. But in the most Microsoftesque fashion, chat is combined with web metrics, obscuring the chat data and making the report ineffective for understanding the impact of generative AI.

This approach laid the foundation for Google Search Console to do the same. AI Overviews and AI Mode impressions and clicks are lumped in with Search, and the Gemini app is not included at all.

What you can do is look for more conversational-style queries using a Google Search Console regex, such as:

^(who|what|whats|when|where|wheres|why|how|which|should)\b|.*\b(benefits of|difference between|advantages|disadvantages|examples of|meaning of|guide to|vs|versus|compare|comparison|alternative|alternatives|types of|ways to|tips|pros|cons|worth it|best|top)\b.*
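
Here’s a sketch of the same filter via the Search Console API, which supports regex operators. It assumes the google-api-python-client library and that OAuth credentials have already been obtained:

# Sketch: pull conversational-style queries via the Search Console API.
from googleapiclient.discovery import build

CONVERSATIONAL_REGEX = r"^(who|what|whats|when|where|wheres|why|how|which|should)\b"

creds = None  # replace with your OAuth2 credentials (assumed already obtained)
service = build("searchconsole", "v1", credentials=creds)
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # replace with your verified property
    body={
        "startDate": "2026-01-01",
        "endDate": "2026-01-28",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "query",
                "operator": "includingRegex",
                "expression": CONVERSATIONAL_REGEX,
            }],
        }],
        "rowLimit": 1000,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"])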

But this is becoming less valuable as query fan-out becomes the standard, making synthetic queries indistinguishable from human queries while inflating impression numbers.

Worse, both GSC and BWT will become increasingly myopic as websites are bypassed by MCP connections or accessed directly by AI agents.

Again, other data sources are needed.

AI agent analytics with log files

Both Google and ChatGPT offer AI agents that can browse and, with permission, convert on a human’s behalf.

When an AI agent uses a text-based browser, it can’t be tracked by cookie-based analytics.

If the agent switches to a visual browser, it often accepts cookies (78% of the time in my testing). But this creates problems in GA4:

  • Odd engagement metrics. These are agent behaviors, not human ones.
  • An unnatural resurgence of desktop traffic. Agents use desktop browsers exclusively.
  • An uptick in Chrome. Agents run on Chromium.

On the plus side, agentic conversions are recorded, but they are attributed to direct traffic.

As a result, many SEOs are turning to bot logs, where AI agent requests can be identified. But those requests are not a headcount of humans sending agents to complete tasks.

When an agent renders a page in a visual browser, it fires multiple requests for every asset. CSS. JS. Images. Fonts. A bloated front end equals inflated request counts, making raw volume a vanity metric.

The insight lies not in totals, but in paths.

Follow the request flow through the site to the conversion success page. If there are plenty of requests but none reach the conversion path, you know the journey is broken.
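
A toy sketch of that path analysis, filtering asset requests out of a combined-format log and checking whether an agent session reaches the conversion page (the log lines, user-agent substrings, and paths are all hypothetical placeholders):

# Toy log analysis: follow an AI agent's page requests and check whether
# the session reaches the conversion path.
import re

ASSET = re.compile(r"\.(css|js|png|jpg|svg|woff2?)(\?|$)", re.I)
AGENT_UAS = ("ChatGPT-User",)  # illustrative, not exhaustive
CONVERSION_PATH = "/thank-you"

log_lines = [
    '1.2.3.4 - - [10/Feb/2026:10:00:00] "GET /pricing HTTP/1.1" 200 "ChatGPT-User"',
    '1.2.3.4 - - [10/Feb/2026:10:00:01] "GET /app.css HTTP/1.1" 200 "ChatGPT-User"',
    '1.2.3.4 - - [10/Feb/2026:10:00:05] "GET /thank-you HTTP/1.1" 200 "ChatGPT-User"',
]

paths = []
for line in log_lines:
    m = re.search(r'"GET (\S+) HTTP', line)
    # Keep agent page requests; drop CSS/JS/image/font noise.
    if m and any(ua in line for ua in AGENT_UAS) and not ASSET.search(m.group(1)):
        paths.append(m.group(1))

print("agent path:", " -> ".join(paths))
print("converted:", CONVERSION_PATH in paths)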

Dig deeper: How to segment traffic from LLMs in GA4

Traditional SEO reporting isn’t up to the task of tracking AI

To track the impact of AI SEO, you need to reassess your reporting. 

AI SEO’s benefits extend beyond the bounds of GA4, Google Search Console, and log file analysis, all of which assume the user reached your website directly from an AI surface. That’s not required for brand value.

Many SEO tools are now adding AI tracking, or are built entirely around it. The methodology is imperfect, with chat outcomes that are probabilistic, not deterministic. It’s similar to running focus groups.

With an unbiased sample, unbiased prompts, and regular testing, the resulting trends are valuable, even if any individual response is not. They reveal the set of brands an AI system associates with a given intent, forming a consensus view of a credible consideration set.

But AI search analytics tools are not all created equal.

Make sure your tool tracks not only website citations, but also in-chat brand mentions and citations of brand assets such as social media profiles, videos, map listings, and apps.

These are no less valuable than a website link. Recognizing this reflects SEO’s growth.

As an industry, we are returning to meaningful marketing KPIs like share of voice. Understanding brand visibility for relevant intents is what ultimately drives market share.

It’s not SEO’s job to optimize a website. It’s to build a well-known, top-rated, and trusted digital brand. That is the foundation of visibility across every organic surface.

How to diagnose and fix the biggest blocker to PPC growth

Why PPC optimization fails to scale and how to find the real constraint

We’ve all been there. A client wants to scale their Google Ads account from €10,000 per month to €100,000. So, you do what any good PPC manager would do:

  • Refine your bidding strategy.
  • Test new ad copy variations.
  • Expand your keyword portfolio.
  • Optimize landing pages.
  • Improve Quality Scores.
  • Launch Performance Max campaigns.

Three months later, you’ve increased ad spend by 15%. The client is… fine with it. But you know you should be doing better.

Here’s the uncomfortable truth: Most pay-per-click (PPC) optimization work is sophisticated procrastination.

What the theory of constraints teaches us about PPC

The theory of constraints, developed by Eliyahu Goldratt for manufacturing systems, reveals something counterintuitive. Every system is limited by exactly one bottleneck at any given time.

Making your marketing team twice as efficient won’t help if production capacity is the constraint. Similarly, improving your ad copy click-through rate (CTR) by 20% won’t move the needle if your real constraint is budget approval or landing page conversion rate.

The theory demands radical focus. Identify the single weakest link and treat everything else as less important.

Applied to PPC, this means: Stop optimizing everything. Find your number one constraint. Fix only that and then move on.




7 constraints that prevent PPC scaling

In my years of managing PPC accounts, I’ve found that almost every scaling challenge falls into one of seven categories:

1. Budget

Signal: You could profitably spend more, but you’re capped by client approval.

Example: Your campaigns are profitable at €10,000 per month with room to spend €50,000, but your client won’t approve the additional budget. Sometimes it’s risk aversion, but other times it’s a cash flow issue. 

The fix: Build a business case demonstrating profitability with a higher spend. Show historical return on ad spend (ROAS), competitive benchmarks, and projected returns.

What to ignore: Avoid ad copy testing, keyword expansion, bidding optimization, and new campaigns. None of this matters if you can’t spend more money anyway.

Dig deeper: PPC campaign budgeting and bidding strategies

2. Impression share

Signal: You’re already capturing 90%+ impression share and can’t buy more traffic.

Example: You’re targeting a niche B2B market with only 1,000 relevant searches per month.

The fix: Expand to related keywords or use broader match types. Alternatively, enter new geographic markets or add complementary platforms like Microsoft Ads or LinkedIn Ads.

What to ignore: Don’t worry about bidding optimization, since you’re already buying almost all available impressions.

3. Creative

Signal: You have high impression share but low CTRs, resulting in a premium cost per click (CPC).

Example: You’re showing ads on 80% of searches, but CTR is 2% when the industry average is 5%.

The fix: Aggressively test ad copy to improve message-market fit and create more compelling creative.

What to ignore: Avoid keyword expansion. Your ads are already visible, they just aren’t getting clicks.

4. Conversion rate

Signal: You’re generating strong traffic volume and acceptable CPC, but terrible conversion rates.

Example: You’re getting 10,000 clicks per month. But you have a 1% conversion rate when you should be getting 5%+.

The fix: Optimize landing pages, improve offers, and refine sales funnels.

What to ignore: Don’t launch more traffic campaigns. You’re already wasting the traffic you have.

5. Fulfillment

Signal: Your campaigns could generate more leads. But the client’s sales or operations team can’t handle more.

Example: You’re generating 500 leads per month, but sales can only process 100.

The fix: This is a client operations issue, not a PPC issue. Help them identify it, but know that the solution lies outside your control. Do more business consulting for your client while maintaining the current PPC level.

What to ignore: Pause all PPC optimization, as the system can’t absorb more volume.

6. Profitability

Signal: You can scale volume, but cost per acquisition (CPA) is too high to be profitable.

Example: You need €50 CPA to break even, but you’re currently at €80 CPA.

The fix: Improve unit economics through better targeting or creative optimization. Alternatively, help the client rethink their pricing or improve customer lifetime value (LTV).

What to ignore: Set aside volume tactics until the economics work at the current scale.

7. Tracking or attribution

Signal: Attribution is broken, so you can’t confidently scale the campaign.

Example: You’re seeing complex multi-touch customer journeys where you can’t definitively prove PPC’s contribution.

The fix: Implement better tracking and test different tracking stacks (e.g., server-side, fingerprinting, or cookie-based). You can also update your attribution modeling or develop first-party data capabilities.

What to ignore: Avoid scaling any channel until you know what actually drives results.

Dig deeper: How to track and measure PPC campaigns

The diagnostic framework

Identifying your constraint requires methodical analysis rather than gut feeling. Here’s how to uncover what’s holding your account back.

Run an audit

Start by benchmarking critical metrics:

  • Impression share: If you’re capturing less than 50% of available impressions, your constraint is likely budget or bids preventing you from competing effectively.
  • CTR: Performance below industry benchmarks signals a creative constraint where your messaging isn’t resonating with searchers.
  • CPC: Unusually high CPCs often indicate a Quality Score constraint, which reflects poor ad relevance or landing page experience.
  • Conversion rate: If this metric lags compared to historical performance or industry standards, your constraint is the landing page.
  • Search volume: If you’ve already captured the majority of relevant searches, your constraint is inventory exhaustion.
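
The audit can even be encoded as a first-pass triage. The sketch below turns the heuristics above into code; the thresholds are illustrative, not fixed rules.

# First-pass constraint triage based on the audit heuristics above.
def diagnose(m):
    if m["approved_budget"] < m["profitable_spend"]:
        return "budget"
    if m["impression_share"] < 0.5:
        return "budget or bids"
    if m["impression_share"] > 0.9:
        return "impression share (inventory exhaustion)"
    if m["ctr"] < m["benchmark_ctr"]:
        return "creative"
    if m["conversion_rate"] < m["benchmark_cvr"]:
        return "conversion rate (landing page)"
    if m["monthly_leads"] > m["leads_sales_can_handle"]:
        return "fulfillment"
    return "tracking or profitability: dig deeper"

print(diagnose({
    "approved_budget": 10_000, "profitable_spend": 50_000,
    "impression_share": 0.6, "ctr": 0.04, "benchmark_ctr": 0.05,
    "conversion_rate": 0.02, "benchmark_cvr": 0.05,
    "monthly_leads": 120, "leads_sales_can_handle": 100,
}))  # -> budget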

Don’t overlook operational metrics either. Check fulfillment capacity by determining how many leads your client’s team can handle per month.

Finally, document your approved budget against what you could profitably spend. If there’s a sizable difference, budget approval is your primary constraint.

Ask the critical question

With your audit complete, resist the temptation to create a prioritized list. Instead, force yourself to answer one question: “If I could only fix one of these metrics, which would unlock 10x growth?”

That single metric is your constraint. Everything else, regardless of how suboptimal it appears, is secondary until you’ve broken through this bottleneck.

Apply radical focus

Once you’ve identified your primary constraint, it’s time to change your entire approach. This is where marketers tend to fail. They acknowledge the constraint but continue hedging their efforts across multiple fronts.

Why constraints are dynamic (and why that’s good)

Understanding constraint theory means recognizing that bottlenecks shift as you scale.

Consider a typical scaling journey. In month one, you’re stuck at a €10,000 monthly budget despite profitable performance metrics.

Your constraint is budget, so leadership won’t approve more ad spend. You build the business case, secure approval, and immediately scale to €30,000 monthly spend.

Success, right? Not quite. You’ve just revealed the next constraint.

By month two, you’re capturing 95% of core keyword inventory. Your new constraint is impression share, as you’ve exhausted available traffic in your primary audience.

The fix is to expand to related terms and broader match types to bring new searchers into your funnel. This expansion takes you to €50,000 per month.

Month three presents a new challenge. Your expanded traffic converts at 2% while your original core traffic maintains 5% conversion rates. Your constraint has shifted to conversion rate.

The broader audience needs different messaging or a modified landing page experience. So, you focus exclusively on improving the post-click experience until conversion rate recovers to 4%. This lets you scale to €80,000 per month.

By month four, your sales team is drowning in 500 leads per month, which is more than they can effectively manage. Your constraint shifts from the PPC account to fulfillment capacity. The client hires additional sales staff to handle volume, and you scale to €120,000 per month.

Each new constraint is proof you’ve graduated to the next level. Many accounts never experience the problem of fulfillment constraints because they never break through the earlier barriers of budget and inventory.

Common traps to avoid when scaling PPC

The ‘optimize everything’ approach

When you try to optimize everything, you might spend:

  • 10 hours optimizing ad copy (+0.2% CTR)
  • 10 hours improving landing page (+0.5% CVR)
  • 10 hours refining bid strategy (+3% efficiency)

After investing 30 hours, you only achieve 5% account growth.

Instead, identify the primary constraint (e.g., conversion rate). Then, invest all 30 hours in landing page optimization. Continue to monitor your conversion rate.

Shiny object syndrome

Say your budget is capped by the client at €10,000. But you spend 20 hours testing Performance Max because it’s new and interesting.

After running those tests, you achieve zero scale. And your budget is still capped at €10,000.

Instead, recognize that your primary constraint is budget approval. Build a business case, secure approval, and start scaling immediately.

Analysis paralysis

If you wait for perfect Google Analytics 4 tracking before scaling, competitors may move forward with good-enough attribution.

This can mean losing six months with no scale.

Aim for 80% accurate tracking. Perfect attribution is rarely the actual constraint.

How to implement the theory of constraints in your agency or in-house team

For your next client strategy call

Don’t say: “We’ll optimize your campaigns across multiple dimensions (bidding, creative, targeting) and see what drives the best results.”

Instead, say this: “Before we optimize anything, I need to diagnose your constraint. Once I identify it, I’ll focus exclusively on fixing that bottleneck while maintaining everything else. When it’s resolved, we’ll tackle the next constraint. This is how we’ll reach your goals.”

For your team

Implement a Constraint Monday ritual. Every Monday, each account manager identifies the primary constraint for their top three accounts. The team focuses the week’s efforts on moving those specific constraints.

On Friday, review the results. Did the constraint move?

  • If yes, what’s the new constraint?
  • If not, you had the wrong diagnosis. Try again.

From tactical to strategic PPC scaling

The difference between a good PPC manager and a great one isn’t technical skill. Instead, it’s the ability to identify constraints.

Good PPC managers optimize everything and achieve incremental gains. Great PPC managers identify the one thing preventing scale and fix only that, achieving exponential gains.

When you master the theory of constraints, you stop being seen as a tactical campaign manager and start being recognized as a strategic growth partner.

You’re no longer reporting on CTR improvements and Quality Score gains. You’re diagnosing business constraints and unlocking growth that seemed impossible.

That’s the shift that transforms PPC careers and accounts.

Amanda Farley talks broken pixels and calm leadership

On episode 340 of PPC Live The Podcast, I speak to Amanda Farley, CMO of Aimclear and a multi-award-winning marketing leader who brings a mix of honesty and expertise to the PPC Live conversation. A self-described T-shaped marketer, she combines deep PPC knowledge with broad experience across social, programmatic, PR, and integrated strategy. Her journey — from owning a gallery and tattoo studio to leading award-winning global campaigns — reflects a career built on curiosity, resilience, and continuous learning.

Overcoming limiting beliefs and embracing creativity

Amanda once ran a gallery and tattoo parlor while believing she wasn’t an artist herself. Surrounded by creatives, she eventually realized her only barrier was a limiting belief. After embracing painting, she created hundreds of artworks and discovered a powerful outlet for expression.

This mindset shift mirrors marketing growth. Success isn’t just technical — it’s mental. By challenging internal doubts, marketers can unlock new skills and opportunities.

When campaign infrastructure breaks: A high-stakes lesson

Amanda recalls a global campaign where tracking infrastructure failed across every channel mid-flight. Pixels broke, data vanished, and campaigns were running blind. Multiple siloed teams and a third-party vendor slowed resolution while budgets continued to spend.

Instead of assigning blame, Amanda focused on collaboration. Her team helped rebuild tracking and uncovered deeper data architecture issues. The crisis led to stronger onboarding processes, earlier validation checks, and clearer expectations around data hygiene. In modern PPC, clean infrastructure is essential for machine learning success.

The hidden importance of PPC hygiene

Many account audits reveal the same problem: neglected fundamentals. Basic settings errors and poorly maintained audience data often hurt performance before strategy even begins.

Outdated lists and disconnected data systems weaken automation. In a machine-learning environment, strong data hygiene ensures campaigns have the quality signals they need to perform.

Why integrated marketing is no longer optional

Amanda’s background in psychology and SEO shaped her integrated approach. PPC touches landing pages, user experience, and sales processes. When conversions drop, the issue may lie outside the ad account.

Understanding the full customer journey allows marketers to diagnose problems holistically. For Amanda, integration is a practical necessity, not a buzzword.

AI, automation, and the human factor

While AI dominates industry conversations, Amanda stresses balance. Some tools are promising, but not all are ready for full deployment. Testing is essential, but human oversight remains critical.

Machines optimize patterns, but humans judge emotion, messaging, and brand fit. Marketers who study changing customer journeys can also find new opportunities to intercept audiences across channels.

Building a culture that welcomes mistakes

Amanda believes leaders act as emotional barometers. Calm investigation beats reactive blame when issues arise. Many PPC problems stem from external changes, not individual failure.

By acknowledging stress and focusing on solutions, leaders create psychological safety. This environment encourages experimentation and turns mistakes into learning opportunities.

Testing without fear in a changing landscape

Marketing is entering another experimental era with no clear rulebook. Amanda encourages teams to dedicate budget to testing and lean on professional communities for insight.

Not every experiment will succeed, but each provides data that informs smarter future decisions.

The Tasmanian Devil who practices yoga

Amanda describes her career as If the Tasmanian Devil Could Do Yoga — a blend of fast-paced chaos and intentional calm. It reflects modern marketing: demanding, unpredictable, and balanced by thoughtful leadership.

Amanda Farley shares lessons on overcoming setbacks and balancing AI with human insight in modern marketing leadership.

The latest jobs in search marketing

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO jobs

(Provided to Search Engine Land by SEOjobs.com)

  • About Us At Ideal Living, we believe everyone has a right to pure water, clean air, and a solid foundation for wellness. As the parent company of leading wellness brands AirDoctor and AquaTru, we help bring this mission to life daily through our award-winning, innovative, science-backed products. For over 25 years, Los Angeles-based Ideal Living […]
  • About US: Abacus Business Computer (abcPOS) is a New York City-based technology company specializing in comprehensive point-of-sale (POS) systems and integrated payment solutions. With over 30 years of industry expertise, abcPOS offers an all-in-one platform that combines POS systems, merchant services, and growth-focused marketing tools. Serving more than 6,000 businesses and supporting over 40,000 devices, […]
  • Responsibilities: Execute full on-page SEO optimization (titles, meta, internal linking, structure) Deliver Local SEO improvements (Google Business Profile optimization, citations) Perform technical SEO audits and implement clear action plans Conduct keyword research for competitive local markets Build and manage SEO content plans focused on ranking and leads Provide monthly reporting with measurable ranking + traffic […]
  • Job/Role Overview: We’re hiring a modern digital marketer who understands that today’s marketing is AI-assisted, data-driven, and constantly evolving. This role is ideal for a recent college graduate or early-career professional trained in today’s digital and AI-focused programs – not outdated marketing playbooks. If you actively use AI tools, enjoy testing ideas, and think in […]
  • Job Description Job Title: Graphic Design & Digital Marketing Specialist Location: Hybrid / Remote (Huntersville, NC preferred) Employment Type: Full Time About Everblue Everblue is a mission-driven company dedicated to transforming careers and improving organizational efficiency. We provide training, certifications, and technology-driven solutions for contractors, government agencies, and nonprofits. Our work modernizes outdated processes, enhances […]
  • 📌 Job Title: On-Page SEO Specialist 📅 Experience: 5+ Years ⏰ Schedule: 8 AM – 5 PM CST 💰 Compensation: $10-$15/hour (based on experience) 🏡 Fully Remote | Full-time Contract Position 🌟 Job Overview We’re looking for a seasoned On-Page SEO Specialist to optimize and enhance our website’s on-page SEO performance while driving multi-location performance […]
  • Job Description MID AMERICA GOLF AND MID AMERICA SPORTS CONSTRUCTION is a leading provider of Golf and Sports construction services and synthetic turf installations, specializing in high-quality residential and commercial projects. We pride ourselves on transforming spaces with durable, eco-friendly solutions that enhance aesthetics and functionality. We’re seeking a dynamic marketing professional to elevate our […]
  • About Us Would you like to be part of a fast-growing team that believes no one should have to succumb to viral-mediated cancers? Naveris, a commercial stage, precision oncology diagnostics company with facilities in Boston, MA and Durham, NC, is looking for a Senior Digital Marketing Associate team member to help us advance our mission […]
  • About the Role We’re looking for a data-driven Marketing Strategist to support leadership and assist with optimizing our paid and organic growth efforts. This role sits at the intersection of PPC strategy, SEO execution, and performance analysis—ideal for someone who loves turning insights into measurable results. You’ll be responsible for documenting, executing, and optimizing campaigns […]
  • Job Description Salary: $75,000-$90,000 Hanson is seeking a data-driven strategist to join our team as a Digital Marketing Strategist. This role bridges the gap between marketing strategy, analytics and technology to help ensure our clients websites and digital tools perform at their highest potential. Youll work closely with cross-functional teams to optimize digital experiences, drive […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Summary If you are a person that has work ethic, wants to really grow a company along with personal and financial may be your company. We are seeking a dynamic and creative Social Media and Marketing Specialist to lead our digital marketing efforts. This role involves developing and executing innovative social media strategies, managing […]
  • About Rock Salt Marketing Rock Salt Marketing was founded in 2023 by digital marketing experts that wanted to break from the industry norms by treating people right and providing the quality services that clients expect for honest fees. At Rock Salt Marketing, we prioritize our relationships with both clients and team members, and are committed […]
  • Type: Remote (Full-Time) Salary: Up to $1,500/month (MAX) Start: Immediate Responsibilities Launch and manage Meta Ads campaigns (Facebook/Instagram) Launch and manage Google Ads Search campaigns Build retargeting + conversion tracking systems Daily optimization focused on ROI and lead quality Manage multiple client accounts under performance expectations Weekly reporting with clear actions and next steps Requirements […]
  • Job Description At Reltio®, we believe data should fuel business success. Reltio’s AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain Data Unification, and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it’s needed, empowering data and analytics leaders with unparalleled […]
  • Job Description Paid Media Manager Location: Dallas, TX (In-Office) Compensation: $60,000–$65,000 base salary (commensurate with experience) About the Opportunity Symbiotic Services is partnering with a growing digital marketing agency to identify a Paid Media Manager for an in-office role in Dallas. This position is hands-on and execution-focused, supporting multiple client accounts while collaborating closely with […]

Other roles you may be interested in

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

Paid Search Director, Grey Matter Recruitment (Remote)

  • Salary: $130,000 – $150,000
  • Own the activation and execution of Paid Search & Shopping activity across the Google Suite
  • Support wider eCommerce, Search and Digital team on strategy and plans

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

SEM (Search Engine Marketing) Manager, Tribute Technology (Remote)

  • Salary: $85,000 – $90,000
  • PPC Campaign Management: Execute and optimize multiple Google Ad campaigns and accounts simultaneously.
  • SEO Strategy Management: Develop and manage on-page SEO strategies for client websites using tools like Ahrefs.

Search Engine Optimization Manager, Robert Half (Hybrid, Boston MA)

  • Salary: $150,000 – $160,000
  • Strategic Leadership: Define and lead the strategy for SEO, AEO, and LLMs, ensuring alignment with overall business and product goals.
  • Roadmap Execution: Develop and implement the SEO/AEO/LLM roadmap, prioritizing performance-based initiatives and driving authoritative content at scale.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Performance Max built-in A/B testing for creative assets spotted

Google is rolling out a beta feature that lets advertisers run structured A/B tests on creative assets within a single Performance Max asset group. Advertisers can split traffic between two asset sets and measure performance in a controlled experiment.

Why we care. Creative testing inside Performance Max has mostly relied on guesswork. Google’s new native A/B asset experiments bring controlled testing directly into PMax — without spinning up separate campaigns.

How it works. Advertisers choose one Performance Max campaign and asset group, then define a control asset set (existing creatives) and a treatment set (new alternatives). Shared assets can run across both versions. After setting a traffic split — such as 50/50 — the experiment runs for several weeks before advertisers apply the winning assets.

Why this helps. Running tests inside the same asset group isolates creative impact and reduces noise from structural campaign changes. The controlled split gives clearer reporting and helps teams make rollout decisions based on performance data rather than assumptions.

Early lessons. Initial testing suggests short experiments — especially under three weeks — often produce unstable results, particularly in lower-volume accounts. Longer runs and avoiding simultaneous campaign changes improve reliability.

Bottom line. Performance Max is becoming more testable. Advertisers can now validate creative decisions with built-in experiments instead of relying on trial and error.

First seen. A Google Ads expert spotted the update and shared his view on LinkedIn.

Google Ads adds a diagnostics hub for data connections

Google Ads rolled out a new data source diagnostics feature in Data Manager that lets advertisers track the health of their data connections. The tool flags problems with offline conversions, CRM imports, and tagging mismatches.

How it works. A centralized dashboard assigns clear connection status labels — Excellent, Good, Needs attention, or Urgent — and surfaces actionable alerts. Advertisers can spot issues like refused credentials, formatting errors, and failed imports, alongside a run history that shows recent sync attempts and error counts.

Why we care. When conversion data breaks, campaign optimization breaks with it. Even small connection failures can quietly skew conversion tracking and weaken automated bidding. This diagnostic tool helps teams catch and fix issues early, protecting performance and reporting accuracy. If you rely on CRM imports or offline conversions, this provides a much-needed safety net.

Who benefits most. The feature is especially useful for advertisers running complex conversion pipelines, including Salesforce integrations and offline attribution setups, where small disruptions can quickly cascade into bidding and reporting issues.

The bigger picture. As automated bidding leans more heavily on accurate first-party data, visibility into data pipelines is becoming just as critical as campaign settings themselves.

Bottom line. Google Ads is giving advertisers an early warning system for data failures, helping teams fix broken connections before performance takes a hit.

First seen. The update was first spotted by digital marketer Georgi Zayakov, who shared the new option on LinkedIn.

Performance Max reporting for ecommerce: What Google is and isn’t showing you

Performance Max has come a long way since its rocky launch. Many advertisers once dismissed it as a half-baked product, but Google has spent the past 18 months fixing real issues around transparency and control. If you wrote Performance Max off before, it’s time to take another look.

Mike Ryan, head of ecommerce insights at Smarter Ecommerce, explained why at the latest SMX Next.

Taking a fresh look at Performance Max

Performance Max traces its roots to Smart Shopping campaigns, which Google rolled out with red carpet fanfare at Google Marketing Live in 2019.

Even then, industry experts warned that transparency and control would become serious issues. They were right — and only now has Google begun to address those concerns openly.

Smart Shopping marked the low point of black-box advertising in Google Ads, at least for ecommerce. It stripped away nearly every control advertisers relied on in Standard Shopping:

  • Promotional controls.
  • Modifiers.
  • Negative keywords.
  • Search terms reporting.
  • Placement reporting.
  • Channel visibility.

Over the past 18 months, Performance Max has brought most of that functionality back, either partially or in full.

Understanding Performance Max search terms

Search terms are a core signal for understanding the traffic you’re actually buying. In Performance Max, most spend typically flows to the search network, which makes search term reporting essential for meaningful optimization.

Google even introduced a Performance Max match type — something few of us ever expected to see. That’s a big deal. It delivers properly reportable data that works with the API, should be scriptable, and finally includes cost and time dimensions that were completely missing before.

Search term insights vs. campaign search term view

Google’s first move to crack open the black box was search term insights. These insights group queries into search categories — essentially prebuilt n-grams — that roll up data at a mid-level and automatically account for typos, misspellings, and variants.

The problem? The metrics are thin. There’s no cost data, which means no CPC, no ROAS, and no real way to evaluate performance.

The real breakthrough is the new campaign-level search term view, now available in both the API and the UI.

Historically, search term reporting lived at the ad group level. Since Performance Max doesn’t use ad groups, that data had nowhere to go.

Google fixed this by anchoring search terms at the campaign level instead. The result is access to far more segments and metrics — and, finally, proper reporting we can actually use.

The main limitation: this data is available only at the search network level, without separating search from shopping. That means a single search term may reflect blended performance from both formats, rather than a clean view of how each one performed.

Search theme reporting

Search themes act as a form of positive targeting in Performance Max. You can evaluate how they’re performing through the search term insights report, which includes a Source column showing whether traffic came from your URLs, your assets, or the search themes you provided.

By totaling conversion value and conversions, you can see whether your search themes are actually driving results — or just sitting idle.
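
As a quick illustration, here’s a minimal pandas sketch of that tally, assuming a hypothetical CSV export of the insights report with source, conversions, and conversion_value columns:

```python
# A minimal sketch: total conversions and conversion value by traffic source.
# File name and column names are hypothetical placeholders for your export.
import pandas as pd

insights = pd.read_csv("search_term_insights.csv")

# The Source column shows whether traffic came from URLs, assets, or search themes.
totals = insights.groupby("source")[["conversions", "conversion_value"]].sum()
print(totals.sort_values("conversion_value", ascending=False))
```

If the search-themes row carries little or no conversion value, the themes you provided are likely sitting idle.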

There’s more good news ahead. Google appears to be working on bringing Dynamic Search Ads and AI Max reports into Performance Max. That would unlock visibility into headlines, landing pages, and the search terms triggering ads.

Search term controls and optimization

Negative keywords

Negative keywords are now fully supported in Performance Max. At launch, Google capped campaigns at 100 negatives, offered no API access, and blocked negative keyword lists—clearly positioning the feature for brand safety, not performance.

That’s changed. Negative keywords now work with the API, support shared lists, and give advertisers real control over performance.

These negatives apply across the entire search network, including both search and shopping. Brand exclusions are the exception — you can choose to apply those only to search campaigns if needed.

Brand exclusions

Performance Max doesn’t separate brand from generic traffic, and it often favors brand queries because they’re high intent and tend to perform well. Brand exclusions exist, but they can be leaky, with some brand traffic still slipping through. If you need strict control, negative keywords are the more reliable option.

Also, Performance Max — and AI Max — may aggressively bid on competitor terms. That makes brand and competitor exclusions important tools for protecting spend and shaping intent.

Optimization strategy

Here’s a simple heuristic for spotting search terms that need attention:

  • Calculate the average number of clicks it takes to generate a conversion.
  • Identify search terms with more clicks than that average but zero conversions.

Those terms have had a fair chance to perform and didn’t. They’re strong candidates for negative keywords.
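
Here’s a rough sketch of that heuristic in pandas, assuming a hypothetical CSV export of search terms with clicks and conversions columns:

```python
# A minimal sketch of the click-threshold heuristic for negative-keyword candidates.
# File name and column names are hypothetical placeholders for your export.
import pandas as pd

terms = pd.read_csv("pmax_search_terms.csv")

# Average clicks it takes the account to generate one conversion.
clicks_per_conversion = terms["clicks"].sum() / terms["conversions"].sum()

# Terms that spent more than that click budget with nothing to show for it.
candidates = terms[(terms["clicks"] > clicks_per_conversion) & (terms["conversions"] == 0)]

# Rank by wasted clicks so the finite negative-keyword budget goes furthest.
print(candidates.sort_values("clicks", ascending=False).head(25))
```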

That said, don’t overcorrect.

Long-tail dynamics mean a search term that doesn’t convert this month may matter next month. You’re also working with a finite set of negative keywords, so use them deliberately and prioritize the highest-impact exclusions.

Modern optimization approaches

It’s not 2018 anymore — you shouldn’t spend hours manually reviewing search terms. Automate the work instead.

Use the API for high-volume accounts, scripts for medium volume, and automated reports from the Report Editor for smaller accounts (though those automated reports still don’t support Performance Max).

Layer in AI for semantic review to flag irrelevant terms based on meaning and intent, then step in only for final approval. Search term reporting can be tedious, but with Google’s prebuilt n-grams and modern AI tools, there’s a smarter way to handle it.
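
One way to sketch that AI-assisted review, assuming the OpenAI embeddings API and a hypothetical CSV of search terms (neither tool is prescribed here):

```python
# A minimal sketch: flag search terms that sit semantically far from the account's topic.
# Assumes OPENAI_API_KEY is set; file name, column name, and topic are hypothetical.
import csv
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

topic = "ecommerce store selling trail running shoes and apparel"
terms = [row["search_term"] for row in csv.DictReader(open("search_terms.csv"))]

topic_vec = embed([topic])[0]
term_vecs = embed(terms)

# Cosine similarity to the topic; the lowest scores go to a human for final approval.
sims = term_vecs @ topic_vec / (np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(topic_vec))
for term, score in sorted(zip(terms, sims), key=lambda pair: pair[1])[:20]:
    print(f"{score:.2f}  {term}")
```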

Channels and placements reporting

Channel performance report

The channel performance report — not just for Performance Max — breaks performance out by network, including Discover, Display, Gmail, and more. It’s useful for channel visibility and understanding view-through versus click-through conversions, as well as how feed-based delivery compares to asset-driven performance.

The report includes a Sankey diagram, but it isn’t especially intuitive. The labeling is confusing and takes some decoding:

  • Search Network: Feed-based equals Shopping ads; asset-based equals RSAs and DSAs.
  • Display Network: Feed-based equals dynamic remarketing; asset-based equals responsive display ads.

Google also announced that Search Partner Network data is coming, which should add another layer of useful performance visibility.

Channel and placement controls

Unlike Demand Gen, where you can choose exactly which channels to run on, Performance Max doesn’t give you that control. You can try to influence the channel mix through your ROAS target and budget, but it’s a blunt instrument — and a slippery one at best.

Placement exclusions

The strongest control you have is excluding specific placements. Placement data is now available through the API — limited to impressions and date segments — and can also be reviewed in the Report Editor. Use this data alongside the content suitability view to spot questionable domains and spammy placements.

For YouTube, pay close attention to political and children’s content. If a placement feels irrelevant or unsafe for your brand, there’s a good chance it isn’t driving meaningful performance either.

Tools for placement review

If you run into YouTube videos in languages you don’t speak, use Google Sheets’ built-in GOOGLETRANSLATE function. It’s faster and more reliable than AI for quick translation.
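
For example, with placement video titles in column A (a hypothetical layout):

```
=GOOGLETRANSLATE(A2, "auto", "en")
```

The function takes the text, a source language (“auto” detects it), and a target language.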

You can also use AI-powered formulas in Sheets to do semantic triage on placements, not just search terms. These tools are just formulas, which means this kind of analysis is accessible to anyone.

Search Partner Network

Unfortunately, there’s no way to opt out of the Search Partner Network in Performance Max. You can exclude individual search partners, but there are limits.

Prioritize exclusions based on how questionable the placement looks and how much volume it’s receiving. Also note that Google-owned properties like YouTube and Gmail can’t be excluded.

Based on Standard Shopping data, the Search Partner Network consistently performs meaningfully worse than the Google Search Network. Excluding poor performers is recommended.

Device reporting and targeting

Creating a device report is easy — just add device as a segment in the “when and where ads showed” view. The tricky part is making decisions.

Device analysis

For deeper insight, dig into item-level performance in the Report Editor. Add device as a segment alongside item ID and product titles to see how individual products behave across devices. Also, compare competitor performance by device — you may spot meaningful differences that inform your strategy.

For example, you may perform far better on desktop than on mobile compared to competitors like Amazon, signaling either an opportunity or a risk.

Device targeting considerations

Device targeting is available in Performance Max and is easy to use, much like channel targeting in Demand Gen. But when you split campaigns by device, you also split your conversion data and volume—and that can hurt results.

Before you separate campaigns by device, consider:

  • How competition differs by device
  • Performance at the item and retail category level
  • The impact on overall data volume

Performance Max performs best with more data. Campaigns with low monthly conversion volume often miss their targets and rarely stay on pace. As more data flows through a campaign, Performance Max gets better at hitting goals and less likely to fall short.

Any gains from splitting by device can disappear if the algorithm doesn’t have enough data to learn. Only split when both resulting campaigns have enough volume to support effective machine learning.

Conclusion

Performance Max has changed dramatically since launch. With search term reporting, negative keywords, channel visibility, placement controls, and device targeting now available, advertisers have far more transparency and control than ever before.

It’s still not perfect — channel targeting limits and data fragmentation remain — but Performance Max is fundamentally different and far more manageable.

Success comes down to knowing what data you have, how to access it efficiently using modern tools like AI and automation, and when to apply controls based on performance insights and data volume needs.

Watch: PMax reporting for ecommerce: What Google is (and isn’t) showing you

Explore how to make smarter use of search terms, channel and placement reports, and device-level performance to improve campaign control.

Why content that ranks can still fail AI retrieval

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations. 

In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.

This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals. 

AI systems don’t. 

They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.

When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings. 

At that point, visibility in search and visibility in AI diverges. The page exists in the index, but its meaning doesn’t survive retrieval.

The visibility gap: Ranking vs. retrieval

Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.

AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.

This difference is where the visibility gap forms. 

A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.

Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.

Dig deeper: What is GEO (generative engine optimization)?

Structural failure 1: When content never reaches AI

One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.

This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.

In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.

How to tell if your content is returned in the initial HTML

The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.

Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.

To do this, open a terminal (on Windows, the Command Prompt) and enter the following command:
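
A minimal version, with a placeholder URL (real GPTBot requests send a longer user-agent string, but the token is what most filters key on):

```
curl -A "GPTBot" https://www.example.com/your-page/
```

The -A flag sets the user agent, and the response body is the raw HTML a non-rendering crawler receives.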

Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.

From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.

This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.

If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.

Why heavy code still hurts retrieval, even when content is present

Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.

AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.

This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.

What actually fixes retrieval failures

The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time. 

In practice, this can usually be achieved in one of two ways: 

  • Pre-rendering the page.
  • Ensuring clean and complete content delivery in the initial HTML response.

Pre-rendered HTML

Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.

This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.

AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.

The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion. 

This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.

From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
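
As a rough sketch of that routing logic (real deployments usually live in an edge worker or CDN rule; the framework, bot list, and file paths here are illustrative assumptions):

```python
# A minimal origin-side sketch: pre-rendered HTML for known AI crawlers,
# the regular JavaScript app shell for everyone else.
from flask import Flask, request, send_file

app = Flask(__name__)

# Hypothetical list; extend it with whichever crawlers matter to you.
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")

@app.route("/<path:page>")
def serve(page):
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_CRAWLERS):
        # Snapshot generated ahead of time, so meaning is present at fetch time.
        return send_file(f"prerendered/{page}.html")
    # Humans still get the dynamic experience built for interaction and conversion.
    return send_file("app_shell.html")

if __name__ == "__main__":
    app.run()
```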

Clean initial content delivery

Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.

Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML. 

Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.

From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether. 

These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.

Structural failure 2: When content is optimized for keywords, not entities

Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.

While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.

AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.

When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse.

The content may rank for a query, but its meaning remains ambiguous at the vector level.

This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.

Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.

Without explicit definition, entity signals weaken and associations fragment.

Structural failure 3: When structure can’t carry meaning

AI systems don’t consume content as complete pages.

Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.

Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.

Detailed header tags

Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.

Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. Weak headers produce weak signals, even when the underlying content is solid.

Dig deeper: The most important HTML tags to use for SEO success

Single-purpose sections

Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.

Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.

Structural failure 4: When conflicting signals dilute meaning

Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.

Common sources include:

Conflicting canonicals

When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions. 

The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.

Inconsistent metadata

Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.

Duplicated or lightly repeated sections

Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.

Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.

Complete visibility requires ranking and retrieval

SEO has always been about visibility, but visibility is no longer a single condition.

Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.

Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.

The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.

Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.

Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

How PR teams can measure real impact with SEO, PPC, and GEO

PR measurement often breaks down in practice.

Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.

That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.

Working together, these teams can help PR do three things that are hard to accomplish alone:

  • Show the connection between media outreach and customer action.
  • Incorporate SEO – and now generative engine optimization (GEO) – into measurement programs.
  • Select tools that match the metrics that actually matter.

This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

Digital communication isn’t linear – and measurement shouldn’t be either

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.

In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.

That’s why measurement must start by defining the response sought, not by counting outputs.

SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.

PR measurement becomes dramatically more actionable when it adopts the same mindset.

Step 1: Show the connection between media outreach and customer action

PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”

The answer usually exists in the data. It’s just spread across systems owned by different teams.

SEO and paid media teams already track:

  • Branded and non-branded search demand.
  • Landing-page behavior.
  • Conversion paths.
  • Assisted conversions across channels.

By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.

Practical examples

  • Spikes in branded search following major media placements.
  • Referral traffic from earned links and how those visitors behave compared to other sources.
  • Increases in conversions or sign-ups after coverage appears in authoritative publications.
  • Assisted conversions where media exposure precedes search or paid clicks.

Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.
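
Here’s a minimal sketch of the simplest version, a before/after comparison of branded search clicks around a placement date (the file, columns, and date are hypothetical):

```python
# A minimal sketch: compare daily branded-query clicks before and after coverage.
# Assumes a hypothetical CSV export with date and clicks columns.
import pandas as pd

df = pd.read_csv("branded_clicks.csv", parse_dates=["date"]).set_index("date")
placement = pd.Timestamp("2025-11-04")  # hypothetical coverage date

before = df.loc[placement - pd.Timedelta(days=28):placement - pd.Timedelta(days=1), "clicks"]
after = df.loc[placement:placement + pd.Timedelta(days=27), "clicks"]

lift = after.mean() / before.mean() - 1
print(f"{before.mean():.0f}/day before vs. {after.mean():.0f}/day after ({lift:+.0%})")
```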

This reframes PR from a cost center to a demand-creation channel.

Matt Bailey, a digital marketing author, professor, and instructor, said:

  • “The value of PR has been well-known by SEOs for some time. A great article pickup can influence rankings almost immediately. This was the golden link – high domain popularity, ranking impact, and incoming visitors – of which PR activities were the predominant influence.”

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Step 2: Incorporate SEO into PR measurement – then go one step further with GEO

Most communications professionals now accept that SEO matters. 

What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.

Traditional PR metrics focus on:

  • Volume of coverage.
  • Share of voice.
  • Sentiment.

SEO-informed PR adds new outcome-level indicators:

  • Authority of linking domains, not just link counts.
  • Visibility for priority topics, not just brand mentions.
  • Search demand growth tied to campaigns or announcements.

These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”

Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.

Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.

For PR and communications teams, this is a natural extension of credibility building:

  • Is your organization cited by AI systems as an authoritative source?
  • Do AI-generated summaries reflect your key messages accurately?
  • Are competitors shaping the narrative instead?

Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.

The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.

David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:

  • “Real-time content creation has always been an effective way of communicating online. But now, in the age of AI-powered search, it has become even more important. The organizations that monitor continually, act decisively, and publish quickly will become the ones people turn to for clarity. And because AI tools increasingly mediate how people experience the world, those same organizations will also become the voices that artificial intelligence amplifies.”

Dig deeper: A 90-day SEO playbook for AI-driven search visibility

Step 3: Select tools based on the response sought – not on what’s fashionable

One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.

A useful framework is to work backward from the action you want audiences to take.

If the response sought is awareness or understanding:

  • Brand lift studies (from Google, Meta, and Nielsen) measure changes in awareness, favorability, and message association.
  • These tools help PR teams demonstrate impact beyond raw reach.

If the response sought is engagement or behavior:

  • Web and campaign analytics track key events such as downloads, sign-ups, or visits to priority pages.
  • User behavior tools like heatmaps and session recordings reveal whether content actually helps users accomplish tasks.

If the response sought is long-term influence:

  • SEO visibility metrics show whether coverage improves authority and topic ownership.
  • GEO tools reveal whether AI systems recognize and reuse your content.

The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.

Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said: 

  • “If PR professionals want to prove their impact, they need to go beyond tracking SEO to also understand their visibility in GEO as well. Search is where today’s purchasing and other decision making starts, and we’ve known for a while that good (or bad) press coverage drives searches for a brand. Which is why we’ve been advising PR professionals who want to prove their impact on the brand to ‘bake cookies and befriend’ the SEO folks within their companies. Today as more and more people rely on AI search for their answers, the value of traditional blue SEO links is declining faster than the value of a Tesla. As a result, understanding and ultimately quantifying how and where your brand is showing up in AI search (aka GEO) is critical.”

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Why collaboration beats reinvention

PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.

What’s required is shared ownership of outcomes.

When these groups collaborate:

  • PR informs SEO about narrative priorities and upcoming campaigns.
  • SEO provides PR with data on audience demand and search behavior.
  • PPC teams validate messaging by testing what actually drives action.
  • Measurement becomes cumulative, not competitive.

This reduces duplication, saves budget, and produces insights that no single team could generate alone.

Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.

Today, tools are cheaper – or free – but the rule still holds.

The most valuable asset isn’t software. It’s professionals who can:

  • Ask the right questions.
  • Interpret data responsibly.
  • Translate insights into decisions.

Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.

Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.” 

I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Measurement isn’t about proving value – it’s about improving it

The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.

By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.

The theory hasn’t changed.

The opportunity to measure what matters is finally catching up.

Why most B2B buying decisions happen on Day 1 – and what video has to do with it

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:

  • The “viral” top-of-funnel asset that gets views but no leads.
  • The dry bottom-of-funnel product demo that gets leads but no views.

This binary thinking is breaking your pipeline.

In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.

When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving as many as 1.4x more leads.

Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.

The reality: The ‘first impression rose’

The window to influence a deal closes much earlier than most marketers realize.

LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.

Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.

If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.

That requires a three-play strategy.

Play 1: Reach and prime the ‘hidden’ buying committee

The goal: Reach the people who can say ‘no’

Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.

Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign. 

But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”

In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be bought when the entire buying group – not just the user – knows you on Day 1.

The strategic shift: Cut-through creative

To reach that broader group, you can’t just be present. You have to be memorable. You need reach and recall, both.

LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:

  • Be bold: Video ads featuring bold, distinctive colors see a 15% increase in engagement.
  • Be process-oriented: Messaging broken down into clear, visual steps drives 13% higher dwell times.
  • The “Goldilocks” length: Short videos between 7 and 15 seconds are the sweet spot for driving brand lift – outperforming both very short (under 6 seconds) and long-form ads.
  • The “Silent Movie” rule: Design for the eye, not the ear. 79% of LinkedIn’s audience scrolls with sound off. If your video relies on a talking head to explain the value prop in the first 5 seconds, you’ve lost nearly 80% of the room. Use visual hooks and hard-coded captions to earn attention instantly.

Dig deeper: 5 tips to make your B2B content more human

Play 2: Educate and nudge by selling ‘buyability’

The goal: Mitigate personal and professional risk

This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).

When a B2B buyer is shortlisting vendors, they’re navigating career risk. 

Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”

The strategic shift: Market the safety net

To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?

Momentum is safety (the “buzz” effect)

Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”

You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement. 

When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.

Authority builds trust (the “expert” effect)

If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.

Video ads featuring executive experts see 53% higher engagement.

When those experts are filmed on a conference stage, engagement lifts by 70%.

Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”

Consistency is credibility

You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.

Dig deeper: The future of B2B authority building in the AI search era

Play 3: Convert and capture by removing friction

The goal: Stop convincing, start helping

By this stage, the buyer knows you (Play 1) and trusts you (Play 2). 

Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.

Buyers at this stage feel three specific types of risk:

  • Execution risk: “Will this actually work for us?”
  • Decision risk: “What if I’m choosing wrong?”
  • Effort risk: “How much work is implementation?”

That’s why recommendations, relationships, and being relatable help close deals.

The strategic shift: Answer the anxiety

Your creative should directly answer those anxieties.

Scale social proof – kill execution risk

90% of buyers say social proof is influential information. But don’t just post a logo. 

Use video to show the peer. When a buyer sees someone with their exact job title succeeding, execution risk evaporates.

Activate your employees – kill decision risk

People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.

The stat that surprises most leaders. Just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data. 

Show the humans who’ll answer the phone when things break.

The conversion combo – kill effort risk

Don’t leave them hanging with a generic “Learn More” button.

We see 3x higher lead gen open rates when video ads are combined directly with lead gen forms. 

The video explains the value, the form captures the intent instantly.

  • Short sales cycle (under 30 days): Use video and lead gen forms for speed.
  • Long sales cycle: Retarget video viewers with message ads from a thought leader. Don’t ask for a sale; start a conversation.

Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing

It’s a flywheel, not a funnel

If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.

In most organizations, “brand” teams and “demand” teams operate in silos. 

  • Brand owns the top of the funnel (Play 1). 
  • Demand owns the bottom (Play 3). 

They fight over budget and rarely coordinate creative.

This fragmentation kills the multiplier effect.

When you break down those silos and run these plays as a single system, the data changes.

Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.

It creates a flywheel:

  • Your broad reach (Play 1) builds the retargeting pools.
  • Your educational content (Play 2) warms up those audiences, lifting CTRs.
  • Your conversion offers (Play 3) capture demand from buyers who are already sold, lowering your CPL.

The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.

And the ones on that list are the ones that win the revenue.

Google & Bing don’t recommend separate markdown pages for LLMs

Representatives from both the Google Search and Bing Search teams are recommending against creating separate markdown (.md) pages for LLM purposes. The practice serves one piece of content to LLMs and another to your users, which may technically be considered a form of cloaking and a violation of Google’s policies.

The question. Lily Ray asked on Bluesky:

  • “Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots.”

Google’s response. John Mueller from Google responded saying:

  • “I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

Recently, John Mueller also called the idea stupid, saying:

  • “Converting pages to markdown is such a stupid idea. Did you know LLMs can read images? WHY NOT TURN YOUR WHOLE SITE INTO AN IMAGE?” His point, of course: converting your whole site to a markdown file is a bit extreme, to say the least.

I collected a lot of John Mueller’s comments on this topic over here.

Bing’s response. Fabrice Canel from Microsoft Bing responded saying:

  • “Lily: really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”

Why we care. Some of us like to look for shortcuts to perform well on search engines, and now on the new AI search engines and LLMs. Generally, shortcuts, if they work at all, only work for a limited time. Plus, they can have unexpected negative effects.

As Lily Ray wrote on LinkedIn:

  • “I’ve had concerns the entire time about managing duplicate content and serving different content to crawlers than to humans, which I understand might be useful for AI search but directly violates search engines’ longstanding policies about this (basically cloaking).”

Your local rankings look fine. So why are calls disappearing?

For many local businesses, performance looks healthier than it is.

Rank trackers still show top-three positions. Visibility reports appear steady. Yet calls and website visits from Google Business Profiles are falling — sometimes fast.

This gap is becoming a defining feature of local search today.

Rankings are holding. Visibility and performance aren’t.

The alligator has arrived in local SEO.

The visibility crisis behind stable rankings

Across multiple U.S. industries, traditional local 3-packs are being replaced — or at least supplemented — by AI-powered local packs. These layouts behave differently from the map results we’ve optimized in the past.

Analysis from Sterling Sky, based on 179 Google Business Profiles, reveals a pattern that’s hard to ignore. Clicks-to-call are dropping sharply for Jepto-managed law firms.

When AI-powered packs replace traditional listings, the landscape shifts in four critical ways:

  • Shrinking real estate: AI packs often surface only two businesses instead of three.
  • Missing call buttons: Many AI-generated summaries remove instant click-to-call options, adding friction to the customer journey.
  • Different businesses appear: The businesses shown in AI packs often don’t match those in the traditional 3-pack.
  • Accelerated monetization of local search: When paid ads are present, traditional 3-packs increasingly lose direct call and website buttons, reducing organic conversion opportunities.

A fifth issue compounds the problem:

  • Measurement blind spots: Most rank trackers don’t yet report on AI local packs. A business may rank first in a 3-pack that many users never see.

AI local packs surfaced only 32% as many unique businesses as traditional map packs in 2026, according to Sterling Sky. In 88% of the 322 markets analyzed, the total number of visible businesses declined.

At the same time, paid ads continue to take over space once reserved for organic results, signaling a clear shift toward a pay-to-play local landscape.

What Google Business Profile data shows

The same pattern appears, especially in the U.S., where Google is aggressively testing new local formats, according to GMBapi.com data. Traditional local 3-pack impressions are increasingly displaced by:

  • AI-powered local packs.
  • Paid placements inside traditional map packs: Sponsored listings now appear alongside or within the map pack, pushing organic results lower and stripping listings of call and website buttons. This breaks organic customer journeys.
  • Expanded Google Ads units: Including Local Services Ads that consume space once reserved for organic visibility.

Impression trends still fluctuate due to seasonality, market differences, and occasional API anomalies. But a much clearer signal emerges when you look at GBP actions rather than impressions.

Mentions inside AI-generated results are still counted as impressions — even when they no longer drive calls, clicks, or visits.

Some fluctuations are driven by external factors. For example, the June drop ties back to a known Google API issue. Mobile Maps impressions also appear heavily influenced by large advertisers ramping up Google Ads later in the year.

There’s no way to segment these impressions by Google Ads, organic results, or AI Mode.

Even with those caveats, user behavior is clearly changing. Interaction rates are declining, with fewer direct actions taken from local listings.

Year-on-year comparisons in the U.S. suggest that while impression losses remain moderate and partially seasonal, GBP actions are disproportionately impacted.

As a control, data from the Dutch market — where SERP experimentation remains limited — shows far more stable action trends.

The pattern is clear. AI-driven SERP changes, expanding Google Ads, and the removal of call and website buttons from the Map Pack are shrinking organic real estate. Even when visibility looks intact, businesses have fewer chances to earn real user actions.

Local SEO is becoming an eligibility problem

Historically, local optimization centered on familiar ranking factors: proximity, relevance, prominence, reviews, citations, and engagement.

Today, another layer sits above all of them: eligibility.

Many businesses fail to appear in AI-powered local results not because they lack authority, but because Google’s systems decide they aren’t an appropriate match for the specific query context. Research from Yext and insights from practitioners like Claudia Tomina highlight the importance of alignment across three core signals:

  • Business name
  • Primary category
  • Real-world services and positioning

When these fundamentals are misaligned, businesses can be excluded from entire result types — no matter how well optimized the Google Business Profile itself may be.

How to future-proof local visibility

Surviving today’s zero-click reality means moving beyond reliance on a single, perfectly optimized Google Business Profile. Here’s your new local SEO playbook.

The eligibility gatekeeper

Failure to appear in local packs is now driven more by perceived relevance and classification than by links or review volume.

Hyper-local entity authority

AI systems cross-reference Reddit, social platforms, forums, and local directories to judge whether a business is legitimate and active. Inconsistent signals across these ecosystems quietly erode visibility.

Visual trust signals

High-quality, frequently updated photos, and increasingly video, are no longer optional. Google’s AI analyzes visual content to infer services, intent, and categorization.

Embrace the pay-to-play reality

It’s a hard truth, but Google Ads — especially Local Services Ads — are now critical to retaining prominent call buttons that organic listings are losing. A hybrid strategy that blends local SEO with paid search isn’t optional. It’s the baseline.

What this means for local search now

Local SEO is no longer a static directory exercise. Google Business Profiles still anchor local discoverability, but they now operate inside a much broader ecosystem shaped by AI validation, constant SERP experimentation, and Google’s accelerating push to monetize local search.

Discovery no longer hinges on where your GBP ranks against nearby competitors. Search systems — including Google’s AI-driven SERP features and large language models like ChatGPT and Gemini — are increasingly trying to understand what a business actually does, not just where it’s listed.

Success is no longer about being the most “optimized” profile. It’s about being widely verified, consistently active, and contextually relevant across the AI-visible ecosystem.

Our observations show little correlation between businesses that rank well in the traditional Map Pack and those favored by Google’s AI-generated local answers that are beginning to replace it. That gap creates a real opportunity for businesses willing to adapt.

In practice, this means pairing local input with central oversight.

Authentic engagement across multiple platforms, locally differentiated content, and real community signals must coexist with brand governance, data consistency, and operational scale. For single-location businesses with deep community roots, this is an advantage. Being genuinely discussed, recommended, and referenced in your local area — online and offline — gets you halfway there.

For agencies and multi-location brands, the challenge is to balance control with local nuance and ensure trusted signals extend beyond Google (e.g., Apple Maps, Tripadvisor, Yelp, Reddit, and other relevant review ecosystems). The real test is producing locally relevant content and citations at scale without losing authenticity.

Rankings may look stable. But performance increasingly lives somewhere else.

The full data. Local SEO in 2026: Why Your Rankings are Steady but Your Calls are Vanishing

Google releases February 2026 Discover core update

Google has released the February 2026 Discover core update, which focuses specifically on how content is surfaced in Google Discover.

  • “This is a broad update to our systems that surface articles in Discover,” Google wrote.

Google said the update is rolling out first to English-language users in the U.S. and will expand to all countries and languages in the coming months. The rollout may take up to two weeks to complete, Google added.

What is expected. Google said the Discover core update will improve the “experience in a few key ways,” including:

  • Showing users more locally relevant content from websites based in their country.
  • Reducing sensational content and clickbait.
  • Highlighting more in-depth, original, and timely content from sites with demonstrated expertise in a given area, based on Google’s understanding of a site’s content.

Because the update prioritizes locally relevant content, it may reduce traffic for non-U.S. websites that publish news for a U.S. audience. That impact may lessen or disappear as the update expands globally.

More details. Google added that many sites demonstrate deep knowledge across a wide range of subjects, and its systems are built to identify expertise on a topic-by-topic basis. As a result, any site can appear in Discover, whether it covers multiple areas or focuses deeply on a single topic. Google shared an example:

  • “A local news site with a dedicated gardening section could have established expertise in gardening, even though it covers other topics. In contrast, a movie review site that wrote a single article about gardening would likely not.”

Google said it will continue to “show content that’s personalized based on people’s creator and source preferences.”

During testing, Google found that “people find the Discover experience more useful and worthwhile with this update.”

Expect fluctuations. With this Discover core update, expect fluctuations in traffic from Google Discover.

  • “Some sites might see increases or decreases; many sites may see no change at all,” Google said.

Rollout. Google said it is “releasing this update to English language users in the US, and will expand it to all countries and languages in the months ahead.”

Why we care. If you get traffic from Google Discover, you may notice changes in that traffic in the coming days. If you need guidance, Google said its “general guidance about core updates applies, as does our Get on Discover help page.”

Google Ads no longer runs on keywords. It runs on intent.

Why Google Ads auctions now run on intent, not keywords

Most PPC teams still build campaigns the same way: pull a keyword list, set match types, and organize ad groups around search terms. It’s muscle memory.

But Google’s auction no longer works that way.

Search now behaves more like a conversation than a lookup. In AI Mode, users ask follow-up questions and refine what they’re trying to solve. AI Overviews reason through an answer first, then determine which ads support that answer.

In Google Ads, the auction isn’t triggered by a keyword anymore – it’s triggered by inferred intent.

If you’re still structuring campaigns around exact and phrase match, you’re planning for a system that no longer exists. The new foundation is intent: not the words people type, but the goals behind them.

An intent-first approach gives you a more durable way to design campaigns, creative, and measurement as Google introduces new AI-driven formats.

Keywords aren’t dead, but they’re no longer the blueprint.

The mechanics under the hood have changed

Here’s what’s actually happening when someone searches now.

Google’s AI uses a technique called “query fan-out,” splitting a complex question into subtopics and running multiple concurrent searches to build a comprehensive response.
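
Conceptually, the fan-out looks like the sketch below. This is an illustration of the technique, not Google’s implementation; decompose() and search_index() are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical illustration of query fan-out: one complex question is
# decomposed into subtopics, each retrieved concurrently, then merged.

def decompose(question: str) -> list[str]:
    # A reasoning model would generate these; hard-coded for illustration.
    return [
        "causes of green pool water",
        "algae treatment products",
        "safe chlorine levels",
    ]

def search_index(sub_query: str) -> dict:
    # Placeholder for one concurrent retrieval call.
    return {"query": sub_query, "results": [f"result for {sub_query}"]}

def fan_out(question: str) -> list[dict]:
    sub_queries = decompose(question)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(search_index, sub_queries))

responses = fan_out("Why is my pool green?")
# The answer layer would synthesize these into one response, then decide
# which ads support that answer.
```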

The auction happens before the user even finishes typing.

And crucially, the AI infers commercial intent from purely informational queries.

For instance, someone asks, “Why is my pool green?” They’re not shopping. They’re troubleshooting.

But Google’s reasoning layer detects a problem that products can solve and serves ads for pool-cleaning supplies alongside the explanation. While the user didn’t search for a product, the AI knew they would need one.

This auction logic is fundamentally different from what we’re accustomed to. It’s not matching your keyword to the query. It’s matching your offering to the user’s inferred need state, based on conversational context. 

If your campaign structure still assumes people search in isolated, transactional moments, you’re missing the journey entirely.

Anatomy of a Google AI search query

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

What ‘intent-first’ actually means

An intent-first strategy doesn’t mean you stop doing keyword research. It means you stop treating keywords as the organizing principle.

Instead, you map campaigns to the why behind the search.

  • What problem is the user trying to solve?
  • What stage of decision-making are they in?
  • What job are they hiring your product to do?

The same intent can surface through dozens of different queries, and the same query can reflect multiple intents depending on context.

“Best CRM” could mean either “I need feature comparisons” or “I’m ready to buy and want validation.” Google’s AI now reads that difference, and your campaign structure should, too.

This is more of a mental model shift than a tactical one.

You’re still building keyword lists, but you’re grouping them by intent state rather than match type.

You’re still writing ad copy, but you’re speaking to user goals instead of echoing search terms back at them.
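
To make that concrete, here is a minimal sketch of grouping keywords by intent state rather than match type. The intent labels and keywords are illustrative assumptions, not a prescribed taxonomy.

```python
# The same keyword list, organized by inferred user goal instead of
# match type. Labels and keywords are illustrative.

intent_groups = {
    "troubleshooting": ["why is my pool green", "cloudy pool water fix"],
    "comparison":      ["best pool shock", "chlorine vs bromine"],
    "ready_to_buy":    ["buy pool algaecide", "pool shock same day delivery"],
}

def campaign_for(keyword: str) -> str:
    """Route a keyword to the campaign built around its intent state."""
    for intent, keywords in intent_groups.items():
        if keyword in keywords:
            return intent
    return "unclassified"

print(campaign_for("best pool shock"))  # -> comparison
```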

What changes in practice

Once campaigns are organized around intent instead of keywords, the downstream implications show up quickly – in eligibility, landing pages, and how the system learns.

Campaign eligibility

If you want to show up inside AI Overviews or AI Mode, you need broad match keywords, Performance Max, or the newer AI Max for Search campaigns.

Exact and phrase match still work for brand defense and high-visibility placements above the AI summaries, but they won’t get you into the conversational layer where exploration happens.

Landing page evolution

It’s not enough to list product features anymore. If your page explains why and how someone should use your product (not just what it is), you’re more likely to win the auction.

Google’s reasoning layer rewards contextual alignment. If the AI built an answer about solving a problem, and your page directly addresses that problem, you’re in.

Asset volume and training data

The algorithm prioritizes rich metadata, multiple high-quality images, and optimized shopping feeds with every relevant attribute filled in.

Using Customer Match lists to feed the system first-party data teaches the AI which user segments represent the highest value.

That training affects how aggressively it bids for similar users.
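
As a hedged sketch, here is the data-preparation step for a Customer Match list. Google requires identifiers to be normalized and SHA-256 hashed before upload; the customers.csv file and its email column are assumptions, and the upload itself (via the Google Ads UI or API) is omitted.

```python
import csv
import hashlib

# Minimal sketch: prepare a Customer Match list from a local CSV.
# Assumes customers.csv has an "email" column.

def normalize_and_hash(email: str) -> str:
    # Customer Match requires lowercase, trimmed identifiers, SHA-256 hashed.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

with open("customers.csv", newline="") as f:
    hashed = [normalize_and_hash(row["email"]) for row in csv.DictReader(f)]

# 'hashed' is now ready to be uploaded as a Customer Match user list.
```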

Dig deeper: In Google Ads automation, everything is a signal in 2026

The gaps worth knowing about

Even as intent-first campaigns unlock new reach, there are still blind spots in reporting, budget constraints, and performance expectations you need to plan around.

No reporting segmentation

Google doesn’t provide visibility into how ads perform specifically in AI Mode versus traditional search.

You’re monitoring overall cost-per-conversion and hoping high-funnel clicks convert downstream, but you can’t isolate which placements are actually driving results.

The budget barrier

AI-powered campaigns like Performance Max and AI Max need meaningful conversion volume to scale effectively, often 30 conversions in 30 days at a minimum.

Smaller advertisers with limited budgets or longer sales cycles face what some call a “scissors gap,” in which they lack the data needed to train algorithms and compete in automated auctions.

Funnel position matters

AI Mode attracts exploratory, high-funnel behavior. Conversion rates won’t match bottom-of-the-funnel branded searches. That’s expected if you’re planning for it.

It becomes a problem when you’re chasing immediate ROAS without adjusting how you define success for these placements.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

Where to start

You don’t need to rebuild everything overnight.

Pick one campaign where you suspect intent is more complex than the keywords suggest. Map it to user goal states instead of search term buckets.

Test broad match in a limited way. Rewrite one landing page to answer the “why” instead of just listing specs.

The shift to intent-first is not a tactic – it’s a lens. And it’s the most durable way to plan as Google keeps introducing new AI-driven formats.

Google says AI search is driving an ‘expansionary moment’

Google money printing machine

Google Search is entering an “expansionary moment,” fueled by longer queries, more follow-up questions, and rising use of voice and images. That’s according to Alphabet’s executives who spoke on last night’s Q4 earnings call.

  • In other words: Google Search is shifting toward AI-driven experiences, with more conversations happening inside Google’s own interfaces.

Why we care. AI in Google Search is no longer an experiment. It’s a structural shift that’s changing how people search and reshaping discovery, visibility, and traffic across the web.

By the numbers. Alphabet’s Q4 advertising revenue totaled $82.284 billion, up 13.5% from $72.461 billion in 2024:

  • Google Search & other: $63.073 billion (up 16.7%)
  • YouTube: $11.383 billion (up 8.7%)
  • Google Network: $7.828 billion (down 1.5%)

Alphabet’s 2025 fiscal year advertising revenue totaled $294.691 billion, up 11.4% from $264.590 billion in 2024:

  • Google Search & other: $224.532 billion (up 13.4%)
  • YouTube: $40.367 billion (up 11.7%)
  • Google Network: $29.792 billion (down 1.9%)

AI Overviews and AI Mode are now core to Search. Alphabet CEO Sundar Pichai said Google pushed aggressively on AI-powered search features in Q4, highlighting how central they’ve become to the product.

  • “We shipped over 250 product launches, within AI mode and AI overviews just last quarter,” Pichai said.

This includes Google upgrading AI Overviews to its Gemini 3 model. He said the company has tightly linked AI Overviews with conversational search.

  • “We have also made the search experience more cohesive, ensuring the transition from an AI Overview to a conversation in AI Mode is completely seamless,” Pichai said.

AI is driving more Google Search usage. Executives repeatedly described AI-driven search as additive, saying it boosts overall usage rather than replacing traditional queries.

  • “Search saw more usage in Q4 than ever before, as AI continues to drive an expansionary moment,” Pichai said.

Engagement rises once users interact with AI-powered features, Google said.

  • “Once people start using these new experiences, they use them more,” Pichai said.

Changing search behavior. Google shared new data points showing how AI Mode is changing search behavior — making queries longer, more conversational, and increasingly multimodal.

  • “Queries in AI Mode are three times longer than traditional searches,” Pichai said.

Sessions are also becoming more conversational.

  • “We are also seeing sessions become more conversational, with a significant portion of queries in AI Mode, now leading to a follow-up question,” he said.

AI Mode is also expanding beyond text.

  • “Nearly one in six AI mode queries are now non-text using voice or images,” Pichai said.

Google highlighted continued distribution of visual search capabilities, noting that:

  • “Circle to Search is now available on over 580 million Android devices,” Pichai said.

Gemini isn’t cannibalizing Search. As the Gemini app continues to grow, Google says it hasn’t seen signs that users are abandoning Search.

  • “We haven’t seen any evidence of cannibalization,” Pichai said.

Instead, Google said users move fluidly between Search, AI Overviews, AI Mode, and the Gemini app.

  • “The combination of all of that, I think, creates an expansionary moment,” Pichai said.

How AI is reshaping local search and what enterprises must do now

Local search in the AI-first era: From rankings to recommendations in 2026

AI is no longer an experimental layer in search. It’s actively mediating how customers discover, evaluate, and choose local businesses, increasingly without a traditional search interaction. 

The real risk is data stagnation. As AI systems act on local data for users, brands that fail to adapt risk declining visibility, data inconsistencies, and loss of control over how locations are represented across AI surfaces.

Learn how AI is changing local search and what you can do to stay visible in this new landscape. 

How AI search is different from traditional search

traditional vs ai-search

We are experiencing a platform shift where machine inference, not database retrieval, drives decisions. At the same time, AI is moving beyond screens into real-world execution.

AI now powers navigation systems, in-car assistants, logistics platforms, and autonomous decision-making.

In this environment, incorrect or fragmented location data does not just degrade search.

It leads to missed turns, failed deliveries, inaccurate recommendations, and lost revenue. Brands don’t simply lose visibility. They get bypassed.

Business implications in an AI-first, zero-click decision layer 

Local search has become an AI-first, zero-click decision layer.

Multi-location brands now win or lose based on whether AI systems can confidently recommend a location as the safest, most relevant answer.

That confidence is driven by structured data quality, Google Business Profile excellence, reviews, engagement, and real-world signals such as availability and proximity.

For 2026, the enterprise risk is not experimentation. It’s inertia.

Brands that fail to industrialize and centralize local data, content, and reputation operations will see declining AI visibility, fragmented brand representation, and lost conversion opportunities without knowing why.

Paradigm shifts to understand 

Here are four key ways the growth in AI search is changing the local journey:

  • AI answers are the new front door: Local discovery increasingly starts and ends inside AI answers and Google surfaces, where users select a business directly.
  • Context beats rankings: AI weighs conversation history, user intent, location context, citations, and engagement signals, not just position.
  • Zero-click journeys dominate: Most local actions now happen on-SERP (GBP, AI Overviews, service features), making on-platform optimization mission-critical.
  • Local search in 2026 is about being chosen, not clicked: Enterprises that combine entity intelligence, operational rigor by centralizing data and creating consistency, and on-SERP conversion discipline will remain visible and preferred as AI becomes the primary decision-maker.

Businesses that don’t grasp these changes quickly won’t fall behind quietly. They’ll be algorithmically bypassed.

Dig deeper: The enterprise blueprint for winning visibility in AI search

How AI composes local results (and why it matters)

AI systems build memory through entity and context graphs. Brands with clean, connected location, service, and review data become default answers.

Local queries increasingly fall into two intent categories: objective and subjective. 

  • Objective queries focus on verifiable facts:
    • “Is the downtown branch open right now?”
    • “Do you offer same-day service?”
    • “Is this product in stock nearby?”
  • Subjective queries rely on interpretation and sentiment:
    • “Best Italian restaurant near me”
    • “Top-rated bank in Denver”
    • “Most family-friendly hotel”

This distinction matters because AI systems treat risk differently depending on intent.

For objective queries, AI models prioritize first-party sources and structured data to reduce hallucination risk. These answers often drive direct actions like calls, visits, and bookings without a traditional website visit ever occurring.

For subjective queries, AI relies more heavily on reviews, third-party commentary, and editorial consensus. This data normally comes from various other channels, such as UGC sites.  

Dig deeper: How to deploy advanced schema at scale

Source authority matters

Industry research has shown that for objective local queries, brand websites and location-level pages act as primary “truth anchors.”

When an AI system needs to confirm hours, services, amenities, or availability, it prioritizes explicit, structured core data over inferred mentions.

Consider a simple example. If a user asks, “Find a coffee shop near me that serves oat milk and is open until 9,” the AI must reason across location, inventory, and hours simultaneously.

If those facts are not clearly linked and machine-readable, the brand cannot be confidently recommended.

This is why freshness, relevance, and machine clarity, all powered by entity-rich structured data, help AI systems surface the right answer.
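
To make the coffee-shop example concrete, here is a hedged sketch of those facts expressed as schema.org JSON-LD, generated from Python for readability. The values are illustrative, and exactly which properties any given AI system reads is not publicly documented.

```python
import json

# Illustrative schema.org markup linking location, hours, and an amenity.

coffee_shop = {
    "@context": "https://schema.org",
    "@type": "CafeOrCoffeeShop",
    "name": "Example Coffee",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Denver",
        "addressRegion": "CO",
    },
    # Explicit hours answer "open until 9" without inference.
    "openingHours": "Mo-Su 07:00-21:00",
    # Amenities expose attributes like oat milk as structured facts.
    "amenityFeature": [
        {
            "@type": "LocationFeatureSpecification",
            "name": "Oat milk",
            "value": True,
        }
    ],
}

print(json.dumps(coffee_shop, indent=2))
```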

Set yourself up for success

Ensure your data is fresh, relevant, and clear with these tips:

  • Build a centralized entity and context graph and syndicate it consistently across GBP, listings, schema, and content.
  • Industrialize local data and entities by developing one source of truth for locations, services, attributes, inventory – continuously audited and AI-normalized.
  • Make content AI-readable and hyper-local with structured FAQs, services, and how-to content by location, optimized for conversational and multimodal queries.
  • Treat GBP as a product surface with standardized photos, services, offers, and attributes — localized and continuously optimized.
  • Operationalize reviews and reputation by implementing always-on review generation, AI-assisted responses, and sentiment intelligence feeding CX and operations.
  • Adopt AI-first measurement and governance to track AI visibility, local answer share, and on-SERP conversions — not just rankings and traffic.

Dig deeper: From search to answer engines: How to optimize for the next era of discovery

The evolution of local search from listings management to an enterprise local journey

Historically, local search was managed as a collection of disconnected tactics: listings accuracy, review monitoring, and periodic updates to location pages.

That operating model is increasingly misaligned with how local discovery now works.

Local discovery has evolved into an end-to-end enterprise journey – one that spans data integrity, experience delivery, governance, and measurement across AI-driven surfaces.

Listings, location pages, structured data, reviews, and operational workflows now work together to determine whether a brand is trusted, cited, and repeatedly surfaced by AI systems.

Introducing local 4.0

Local 4.0 is a practical operating model for AI-first local discovery at enterprise scale. The focus of this framework is to ensure your brand is understandable, verifiable, and safe for AI systems to recommend.

To understand why this matters, it helps to look at how local has evolved:

The evolution of local
  • Local 1.0 – Listings and basic NAP consistency: The goal was presence – being indexed and included.
  • Local 2.0 – Map pack optimization and reviews: Visibility was driven by proximity, profile completeness, and reputation.
  • Local 3.0 – Location pages, content, and ROI: Local became a traffic and conversion driver tied to websites.
  • Local 4.0 – AI-mediated discovery and recommendation: Local becomes decision infrastructure, not a channel.

In practice, Local 4.0 focuses on making every location:

  • Understandable by AI systems (clean, structured, connected data).
  • Verifiable across platforms (consistent facts, citations, reviews).
  • Safe to recommend in real-world decision contexts.

In an AI-mediated environment, brands are no longer merely present. They are selected, reused, or ignored – often without a click. This is the core transformation enterprise leaders must internalize as they plan for 2026.

Dig deeper: AI and local search: The new rules of visibility and ROI

The local 4.0 journey for enterprise brands

four step enterprise local journey

Step 1: Discovery, consistency, and control

Discovery in an AI-driven environment is fundamentally about trust. When data is inconsistent or noisy, AI systems treat it as a risk signal and deprioritize it.

Core elements include:

  • Consistency across websites, profiles, directories, and attributes.
  • Listings as verification infrastructure.
  • Location pages as primary AI data sources.
  • Structured data and indexing as the machine clarity layer.

Ensuring consistency across owned channels

Why ‘legacy’ sources still matter

Listings act as verification infrastructure. Interestingly, research suggests that LLMs often cross-reference data against highly structured legacy directories (such as MapQuest or the Yellow Pages).

While human traffic to these sites has waned, AI systems utilize them as “truth anchors” because their data is rigidly structured and verified.

If your hours are wrong on MapQuest, an AI agent may downgrade its confidence in your Google Business Profile, viewing the discrepancy as a risk.
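
A minimal audit along these lines might look like the following sketch. It assumes the listing facts have already been fetched from each directory into plain dicts; the sources and fields are illustrative.

```python
# Flag business facts that disagree across directories. Real collection
# would use each platform's API or export; data here is illustrative.

listings = {
    "google_business_profile": {"phone": "+1-303-555-0100", "hours": "Mo-Fr 09:00-18:00"},
    "mapquest":                {"phone": "+1-303-555-0100", "hours": "Mo-Fr 09:00-17:00"},
    "yellow_pages":            {"phone": "+1-303-555-0100", "hours": "Mo-Fr 09:00-18:00"},
}

def find_discrepancies(listings: dict) -> dict:
    """Return any field that is not identical across every source."""
    fields = {f for data in listings.values() for f in data}
    issues = {}
    for field in fields:
        values = {source: data.get(field) for source, data in listings.items()}
        if len(set(values.values())) > 1:
            issues[field] = values
    return issues

print(find_discrepancies(listings))
# -> flags 'hours', the kind of discrepancy that erodes AI confidence
```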

Discovery is no longer about being crawled. It’s about being trusted and reused. Governance matters because ownership, workflows, and data quality now directly affect brand risk.

Dig deeper: 4 pillars of an effective enterprise AI strategy 

Step 2: Engagement and freshness 

AI systems increasingly reward data that is current, efficiently crawled, and easy to validate.

Stale content is no longer neutral. When an AI system encounters outdated information – such as incorrect hours, closed locations, or unavailable services – it may deprioritize or avoid that entity in future recommendations.

For enterprises, freshness must be operationalized, not managed manually. This requires tightly connecting the CMS with protocols like IndexNow, so updates are discovered and reflected by AI systems in near real time.
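
As a sketch of that connection, here is a minimal IndexNow ping a CMS could fire after publishing a location update. HOST, KEY, and the URL are placeholders, and per the protocol the key file must be reachable at the keyLocation URL.

```python
import requests

# Minimal IndexNow notification for freshly updated location pages.

HOST = "www.example.com"
KEY = "your-indexnow-key"

def notify_indexnow(updated_urls: list[str]) -> int:
    payload = {
        "host": HOST,
        "key": KEY,
        "keyLocation": f"https://{HOST}/{KEY}.txt",
        "urlList": updated_urls,
    }
    resp = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=10)
    return resp.status_code  # 200/202 indicates the ping was accepted

notify_indexnow([f"https://{HOST}/locations/denver/"])
```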

Beyond updates, enterprises must deliberately design for local-level engagement and signal velocity. Fresh, locally relevant content – such as events, offers, service updates, and community activity – should be surfaced on location pages, structured with schema, and distributed across platforms.

In an AI-first environment, freshness is trust, and trust determines whether a location is surfaced, reused, or skipped entirely.

Unlocking ‘trapped’ data

A major challenge for enterprise brands is “trapped” data: vital information locked inside PDFs, menu images, or static event calendars.

For example, a restaurant group may upload a PDF of their monthly live music schedule. To a human, this is visible. To a search crawler, it’s often opaque. In an AI-first era, this data must be extracted and structured.

If an agent cannot read the text inside the PDF, it cannot answer the query: “Find a bar with live jazz tonight.”
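
Here is a hedged sketch of freeing that data, assuming a text-based (not scanned) PDF and the pypdf library; the file name and line-parsing heuristic are illustrative.

```python
from pypdf import PdfReader

# Pull raw text out of an events PDF so it can be restructured as
# machine-readable content. Scanned PDFs would need OCR instead.

reader = PdfReader("live-music-schedule.pdf")
raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Downstream, lines like "Fri Mar 6 - Jazz Trio - 8pm" would be parsed
# into structured events and published with schema.org Event markup.
events = [line for line in raw_text.splitlines() if " - " in line]
print(events)
```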

Key focus areas include:

  • Continuous content freshness.
  • Efficient indexing and crawl pathways.
  • Dynamic local updates such as events, availability, and offerings.

At enterprise scale, manual workflows break. Freshness is no longer tactical. It’s a competitive requirement.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Step 3: Experience and local relevance

AI does not select the best brand. It selects the location that best resolves intent.

Generic brand messaging consistently loses out to locally curated content. AI retrieval is context-driven and prioritizes specific attributes such as parking availability, accessibility, accepted insurance, or local services.

This exposes a structural problem for many enterprises: information is fragmented across systems and teams.

Solving AI-driven relevance requires organizing data as a context graph. This means connecting services, attributes, FAQs, policies, and location details into a coherent, machine-readable system that maps to customer intent rather than departmental ownership.
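
As a rough illustration, a context graph can start as connected records keyed to a location entity, as in the sketch below. The structure is an assumption for illustration, not a standard.

```python
# Services, attributes, FAQs, and policies connected to one location
# entity rather than scattered across departmental systems.

context_graph = {
    "location:denver-downtown": {
        "services":   ["oil change", "brake repair"],
        "attributes": {"parking": "free lot", "wheelchair_accessible": True},
        "faqs": [
            {"q": "Do you take walk-ins?", "a": "Yes, before 3 p.m. on weekdays."},
        ],
        "policies": {"insurance_accepted": ["Acme", "Globex"]},
    }
}

def resolve(intent: dict, graph: dict) -> list[str]:
    """Return locations whose connected facts satisfy the intent."""
    return [
        loc for loc, facts in graph.items()
        if all(facts["attributes"].get(k) == v for k, v in intent.items())
    ]

print(resolve({"wheelchair_accessible": True}, context_graph))
```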

Enterprises should also consider omnichannel marketing approaches to achieve consistency.   

Dig deeper: Integrating SEO into omnichannel marketing for seamless engagement

Step 4: Measurement that executives can trust

As AI-driven and zero-click journeys increase, traditional SEO metrics lose relevance. Attribution becomes fragmented across search, maps, AI interfaces, and third-party platforms.

Precision tracking gives way to directional confidence.

Executive-level KPIs should focus on:

  • AI visibility and recommendation presence.
  • Citation accuracy and consistency.
  • Location-level actions (calls, directions, bookings).
  • Incremental revenue or lead quality lift.

The goal is not perfect attribution. It’s confidence that local discovery is working and revenue risk is being mitigated.

Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026

Why local 4.0 needs to be the enterprise response

Fragmentation is a material revenue risk. When local data is inconsistent or disconnected, AI systems have lower confidence in it and are less likely to reuse or recommend those locations.

Treating local data as a living, governed asset and establishing a single, authoritative source of truth early prevents incorrect information from propagating across AI-driven ecosystems and avoids the costly remediation required to fix issues after they scale.

Dig deeper: How to select a CMS that powers SEO, personalization and growth

Local 4.0 is integral to the localized AI discovery flywheel

AI discovery flywheel

AI-mediated discovery is becoming the default interface between customers and local brands.

Local 4.0 provides a framework for control, confidence, and competitiveness in that environment. It aligns data, experience, and governance around how AI systems actually operate through reasoning, verification, and reuse.

This is not about chasing AI trends. It’s about ensuring your brand is correctly represented and confidently recommended wherever customers discover you next.

Why SEO teams need to ask ‘should we use AI?’ not just ‘can we?’

Human Judgment vs Machine Output

Right now, it’s hard to find a marketing conversation that doesn’t include two letters: AI.

SEOs, strategists, and marketing leaders everywhere are asking the same question in different ways:

  • How do we use AI to cut manpower, streamline work, move faster, and boost efficiency?

Much of that thinking makes sense. If you run a business, you can’t ignore a tool that turns hours of grunt work into minutes. You’d be foolish to try.

But we’re spending too much time asking, “Can AI do this?” and not enough time asking, “Should AI do this?”

Once the initial excitement fades, some uncomfortable questions show up.

  • If every title tag, meta description, landing page, and blog post comes from AI, where does differentiation come from?
  • If every outreach email, proposal, and report is machine-generated, what happens to trust?
  • If AI agents start talking to other AI agents on our behalf, what happens to judgment, creativity, and the human side of business?

This isn’t anti-AI. I use AI. My team uses AI. You probably do, too.

This is about using AI well, using it intentionally, and not automating so much that you accidentally automate away the things that make you valuable.

What ‘automating too much’ looks like in SEO

The slippery part of automation? It rarely starts with big decisions. It starts with small ones that feel harmless.

First, you automate the boring admin. Then the repetitive writing. Then the analysis. Then client communication. Then, quietly, decision-making.

In SEO, “too much” often looks like this:

  • Meta titles and descriptions generated at scale, with barely any review.
  • Content briefs created by AI from SERP summaries, then passed straight to an AI writer for drafting.
  • On-page changes rolled out across templates because “the model recommended it.”
  • Link building outreach written by AI, sent at volume, and ignored at volume.
  • Reporting that is technically accurate but disconnected from what the business actually cares about.

If this sounds harsh, that’s because it happens fast.

The promise is always “we’ll save time.” What usually happens is you save time and lose something else. Most often, you lose the sense that your marketing has a brain behind it.

The sameness problem: if everyone uses the same tools, who wins?

This is the question I keep coming back to.

If everyone uses AI to create everything, the web fills up with content that looks and sounds the same. It might be polished. It might even be technically “good.” But it becomes interchangeable.

That creates two problems:

  • Users get bored. They read one page, then another, and it’s the same advice dressed up with slightly different words. You might win a click. You’ll struggle to build a relationship.
  • Search engines and language models still need ways to tell you apart. When content converges, the real differentiators become things like:
    • Brand recognition.
    • Original data or firsthand experience.
    • Clear expertise and accountability.
    • Signals that other people trust you.
    • Distinct angles and opinions.

The irony?

Heavy automation often strips those things out. It produces “fine” content quickly, but it also produces content that could have come from anyone.

If your goal is authority, being indistinguishable isn’t neutral. It’s a liability.

When AI starts quoting AI, reality gets blurry

This is where things start to get strange.

We’re already heading into a world where AI tools summarize content, other tools re-summarize those summaries, and someone publishes the result as if it’s new insight. It becomes a loop.

If you’ve ever asked a tool to write a blog post and it felt familiar but hard to place, that’s usually why. It isn’t creating knowledge from scratch. It’s remixing patterns.

Now imagine that happening at scale. Search engines crawl pages. Models summarize them. Businesses publish new pages based on those summaries. Agents use those pages to answer questions. Repeat.

Remove humans from the loop for too long, and you risk an internet that feels like it’s talking to itself. Plenty of words. Very little substance.

From an SEO perspective, that’s a serious problem. When the web floods with similar information, value shifts away from “who wrote the neatest explanation” and toward “who has something real to add.”

That’s why I keep coming back to the same point. The question isn’t “can AI do this?” It’s “should we use AI here, or should a human own this?”

The creativity and judgment problem

There’s a quieter risk we don’t talk about enough.

If you let AI write every proposal, every contract, every strategy deck, and every content plan, you start outsourcing judgment.

You may still be the one who clicks “generate” and “send,” but the thinking has moved somewhere else.

Over time, you lose the habit of critical thinking. Not because you can’t think, but because you stop practicing. It’s the same way GPS makes you worse at directions. You can still drive, but you stop building the skill.

In SEO, judgment is one of our most valuable assets. Knowing:

  • What to prioritize.
  • What to ignore.
  • When a dip is normal and when it is a warning sign.
  • When the data is lying because the tracking is broken.

AI can support decisions, but it can’t own them. If you automate that away, you risk becoming a delivery machine instead of a strategist. And authority doesn’t come from delivery.

The trust problem: clients do not just buy outputs

Here’s a reality check agency owners feel in their bones.

Clients don’t stay because you can do the work. They stay because they:

  • Trust you.
  • Feel looked after.
  • Believe you have their best interests at heart.
  • Like working with you.

It’s business, but it’s still human.

When you automate too much of the client experience, your service can start to feel cheap. Not in price, but in care.

  • If every email sounds generated, clients notice.
  • If every report is a generic summary with no opinion, clients notice.
  • If every deliverable looks like it came straight from a tool, clients start asking why they are paying you instead of the tool.

The same thing happens in-house. Stakeholders want confidence. They want interpretation. They want someone to say, “This is what matters, and this is what we should do next.”

AI is excellent at producing outputs. It isn’t good at reassurance, context, or accountability. Those are human services, even when the work is digital.

The accuracy and responsibility problem

If you automate content production without proper oversight, eventually you’ll publish something wrong.

Sometimes it’s small. A definition that is slightly off. A stat that is outdated. A recommendation that doesn’t fit the situation.

Sometimes it’s serious. Incorrect medical advice. Legal misinformation. Financial guidance that should never have gone live.

Even in low-risk niches, accuracy matters. When your content is wrong, trust erodes. When it’s wrong with confidence, trust disappears faster.

The more you scale AI output, the harder quality control becomes. That is where automation turns dangerous. You can produce content at speed, but you may not spot the decay until performance drops or, worse, a customer calls it out publicly.

Authority is fragile. It takes time to build and seconds to lose. Automation increases that risk because mistakes don’t stay small. They scale.

The confidentiality problem that nobody wants to admit

This is the part that often gets brushed aside in the rush to “implement AI.”

SEO and marketing work regularly involves sensitive information—sales data, customer feedback, conversion rates, pricing strategies, internal documents, and product roadmaps. Paste that into an AI tool without thinking, and you create risk.

Sometimes that risk is contractual. Sometimes it’s regulatory. Sometimes it’s reputational.

Even if your AI tools are configured securely, you still need an internal policy. Nothing fancy. Just clear rules on what can and can’t be shared, who can approve it, and how outputs are reviewed.

If you’re building authority as a brand, the last thing you want is to lose trust because you treated sensitive information casually in the name of efficiency.

The window of opportunity, and why it will not last forever

Right now, there’s a window. Most businesses are still learning how to use AI well. That gives brands that move carefully a real edge.

That window won’t stay open.

In a few years, the market will be flooded with AI-generated content and AI-assisted services. The tools will be cheaper and more accessible. The baseline will rise.

When that happens, “we use AI” won’t be a differentiator anymore. It’ll sound like saying, “we use email.”

The real differentiator will be how you use it.

Do you use AI to churn out more of the same?

Or do you use it to buy back time so you can create things others can’t?

That’s the opportunity. AI can strip out the grunt work and give you time back. What you do with that time is where authority is built.

Where SEO fits in: less doing, more directing

I suspect the SEO role is shifting.

Not away from execution entirely, but away from being valued purely for output. When a tool can generate a content draft, the value shifts to the person who can judge whether it’s the right draft — for the right audience, with the right angle, on the right page, at the right time.

In other words, the SEO becomes a director, not just a doer.

That looks like this:

  • Knowing which content is worth creating—and which isn’t.
  • Understanding the user journey and where search fits into it.
  • Building content strategies anchored in real business value.
  • Designing workflows that protect quality while increasing speed.
  • Helping teams use AI responsibly without removing human judgment.

If you’re trying to build authority, this shift is good news. It rewards expertise and judgment. It rewards people who can see the bigger picture and make decisions that go beyond “more content.”

The upside: take away the grunt work, keep the thinking

AI is excellent at certain jobs. And if we’re honest, a lot of SEO work is repetitive and draining. That’s where AI shines.

AI can help you:

  • Summarize and cluster keyword research faster.
  • Create first drafts of meta descriptions that a human then edits properly.
  • Turn messy notes into a structure you can actually work with.
  • Generate alternative title options quickly so you can choose the strongest one.
  • Create scripts for short videos or webinars from existing material.
  • Analyze patterns in performance data and flag areas worth investigating.
  • Speed up technical tasks like regex, formulas, documentation, and QA checklists.

This is the sweet spot. Use AI to reduce friction and strip out the boring work. Then spend your time on the things that actually create differentiation.

In my experience, the best use of AI in SEO isn’t replacing humans. It’s giving humans more time to do the human parts properly.

Personalization: The dream and the risk

There’s a lot of talk about personalized results. A future where each person gets answers tailored to their preferences, context, history, and intent.

That future may arrive. In some ways, it’s already here. Search results and recommendations aren’t neutral. They’re shaped by behavior and patterns.

Personalization could be great for users. It also raises the bar for brands.

If every user sees a slightly different answer, it gets harder to compete with generic content. Generic content fades into the background because it isn’t specific enough to be chosen.

That brings us back to the same truth: unique value wins. Real expertise wins. Original experience wins. Trust wins.

Automation can help you scale personalization — but only if the thinking behind it is solid. Automate personalization badly, and all you get is faster irrelevance.

A practical way to decide what should be automated

So how do we move from “can AI do this?” to “should AI do this?”

The better approach is to decide what must stay human, what can be assisted, and what can be automated safely.

These are the questions I use when making that call:

  • What happens if this is wrong? If the cost of being wrong is high, a human needs to own it.
  • Is this customer-facing? The more visible it is, the more it should sound like you and reflect your judgment.
  • Does this require empathy or nuance? If yes, automate less.
  • Does this require your unique perspective? If yes, automate less.
  • Is this reversible? If it’s easy to undo, you can afford to experiment.
  • Does it involve sensitive information? If yes, tighten control.
  • Will automation make us look like everyone else? If yes, be cautious. You may be trading speed for differentiation.

These questions are simple, but they lead to far better decisions than, “the tool can do it, so let’s do it.”

What I would and would not automate in SEO

To make this practical, here’s where I’d draw the line for most teams.

I’d happily automate or heavily assist:

  • Early-stage research, like summarizing competitors, clustering topics, and extracting themes from customer feedback.
  • Drafting tasks that a human will edit, such as meta descriptions, outlines, and first drafts of support content.
  • Repetitive admin work, including documentation, tagging, and reporting templates.
  • Technical helper tasks, like formulas, regex, and scripts—as long as a human reviews the output.

I would not fully automate:

  • Strategy: Deciding what matters and why.
  • Positioning: The angle that gives your brand a clear point of view.
  • Final customer-facing messaging: Especially anything that represents your voice and level of care.
  • Claims that require evidence: If you can’t prove it, don’t publish it.
  • Client relationships: The conversations, reassurance, and trust-building that keep people with you.

If you automate those, you may increase output, but you’ll often decrease loyalty. And loyalty is a form of authority.

The real risk is not AI. It is thoughtlessness.

The biggest risk isn’t that AI will take your job. It’s that you use it in a way that makes you replaceable.

If your brand turns into a machine that churns out generic output, it becomes hard to care.

  • Hard for search engines to prioritize.
  • Hard for language models to cite.
  • Hard for clients to justify paying for.

If you want to build authority, you have to protect what makes you different. Your judgment. Your experience. Your voice. Your evidence. Your relationships.

AI can help if you use it to create space for better thinking. It can hurt if you use it to avoid thinking altogether.

Human involvement

It’s easy to get excited about AI doing everything. Saving on headcount. Producing output 24/7. Removing bottlenecks.

But the more important question is what you lose when you remove too much human involvement. Do you lose:

  • Differentiation?
  • Trust?
  • The ability to think critically?
  • The relationships that keep clients loyal?

For most of us, the goal isn’t more marketing. The goal is marketing that works — for people we actually want to work with — in a way we can be proud of.

So yes, ask, “Can AI do this?” It’s a useful question.

Then ask, “Should AI do this?” That’s the one that protects your authority.

And if you’re unsure, start small. Automate the grunt work. Keep the thinking. Keep the voice. Keep the care.

That’s how you get the best of AI without automating away what makes you valuable.

How first-party data drives better outcomes in AI-powered advertising

As AI-driven bidding and automation transform paid media, first-party data has become the most powerful lever advertisers control.

In this conversation with Search Engine Land, Julie Warneke, founder and CEO of Found Search Marketing, explained why first-party data now underpins profitable advertising — no matter how Google’s position on third-party cookies evolves.

What first-party data really is — and isn’t

First-party data is customer information that an advertiser owns directly, usually housed in a CRM. It includes:

  • Lead details.
  • Purchase history.
  • Revenue.
  • Customer value collected through websites, forms, or physical locations.

It doesn’t include platform-owned or browser-based data that advertisers can’t fully control.

Why first-party data matters more than ever

Digital advertising has moved from paying for impressions, to clicks, to actions — and now to outcomes. The real goal is no longer conversions alone, but profitable conversions, according to Warneke.

As AI systems process far more signals than humans can handle, advertisers who supply high-quality customer data gain a clear advantage.

CPCs may rise — but profitability can too

Rising CPCs are a fact of paid media. First-party data doesn’t always reduce them, but it improves what matters more: conversion quality, revenue, and return on ad spend.

By optimizing for downstream business outcomes instead of surface-level metrics, advertisers can justify higher costs with stronger results.

How first-party data improves ROAS

When advertisers feed Google data tied to revenue and customer value, AI bidding systems can prioritize users who resemble high-value customers — often using signals far beyond demographics or geography.

The result is traffic that converts better, even if advertisers never see or control the underlying signals.

Performance Max leads the way

Among campaign types, Performance Max (PMax) currently benefits the most from first-party data activation.

PMax performs best when advertisers move away from manual optimizations and instead focus on supplying accurate, consistent data, then let the system learn, Warneke noted.

SMBs aren’t locked out — but they need the right setup

Small and mid-sized businesses aren’t disadvantaged by limited first-party data volume. Warneke shared examples of success with customer lists as small as 100 records.

The real hurdle for SMBs is infrastructure — specifically proper tracking, consent management, and reliable data pipelines.

The biggest mistakes advertisers are making

Two issues stand out:

  • Weak data capture: Many brands still depend on browser-side tracking, which increasingly fails — especially on iOS.
  • Broken feedback loops: Others upload CRM data sporadically instead of building continuous data flows that let AI systems learn and improve over time (see the sketch below).
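
To make the second point concrete, here is a minimal sketch of a continuous flow: a daily job (e.g., cron) that exports fresh CRM conversions into an upload file for the platform’s scheduled offline-conversion import. export_from_crm() is a placeholder for your CRM query.

```python
import csv

# Export fresh CRM conversions on a schedule instead of ad-hoc uploads.
# The resulting file feeds the ad platform's scheduled import.

def export_from_crm() -> list[dict]:
    # Placeholder: pull conversions closed since the last run.
    return [{"gclid": "abc123", "value": 250.0, "time": "2026-02-10 14:32:00"}]

def write_upload_file(rows: list[dict], path: str = "conversions.csv") -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["gclid", "value", "time"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_upload_file(export_from_crm())  # run daily via cron or a scheduler
```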

What marketers should do next

Warneke’s advice: Step back and audit how data is captured, stored, and sent back to platforms, then improve it incrementally.

There’s no need to overhaul everything at once or risk the entire budget. Even testing with 5–7% of spend can create a learning roadmap that delivers long-term gains.

Bottom line

AI optimizes toward the signals it receives — good or bad. Advertisers who own and refine their first-party data can shape outcomes in their favor, while those who don’t risk being optimized into inefficiency.

Learn why first-party data plays an increasingly important role in how automated ad campaigns are optimized and measured.

Google Ads tightens access control with multi-party approval

Google Ads introduced multi-party approval, a security feature that requires a second administrator to approve high-risk account actions. These actions include adding or removing users and changing user roles.

Why we care. As ad accounts grow in size and value, access control becomes a serious risk. One unauthorized, malicious, or accidental change can disrupt campaigns, permissions, or billing in minutes. Multi-party approval reduces that risk by requiring a second admin to approve high-impact actions. It adds strong protection without slowing daily work. For agencies and large teams, it prevents costly mistakes and significantly improves account security.

How it works. When an admin initiates a sensitive change, Google Ads automatically creates an approval request. Other eligible admins receive an in-product notification. One of them must approve or deny the request within 20 days. If no one responds, the request expires, and the change is blocked.

Status tracking. Each request is clearly labeled as Complete, Denied, or Expired. This makes it easy to see what was approved and what didn’t go through.
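
For readers who think in code, here is a conceptual model of that flow. It mirrors the behavior Google describes (a second admin must respond; requests expire after 20 days) but is not Google’s implementation.

```python
from datetime import datetime, timedelta
from enum import Enum

# Conceptual model of multi-party approval for a sensitive account change.

class Status(Enum):
    PENDING = "Pending"
    COMPLETE = "Complete"
    DENIED = "Denied"
    EXPIRED = "Expired"

class ApprovalRequest:
    WINDOW = timedelta(days=20)

    def __init__(self, action: str, requested_by: str):
        self.action = action
        self.requested_by = requested_by
        self.created = datetime.now()
        self.status = Status.PENDING

    def respond(self, admin: str, approve: bool) -> None:
        if admin == self.requested_by:
            raise ValueError("A second admin must respond")
        if datetime.now() - self.created > self.WINDOW:
            self.status = Status.EXPIRED  # no response in time; change blocked
        else:
            self.status = Status.COMPLETE if approve else Status.DENIED

req = ApprovalRequest("add user: analyst@example.com", requested_by="admin_a")
req.respond("admin_b", approve=True)
print(req.status)  # Status.COMPLETE
```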

Where to find it. You can view and manage approval requests from Access and security within the Admin menu.

The bigger picture. The update reflects growing concern around account security, especially for agencies and large advertisers managing multiple users, partners, and permissions. With advertisers recently reporting costly hacks, this is a welcome update.

The Google Ads help doc. About Multi-party approval for Google Ads

In Google Ads automation, everything is a signal in 2026

In 2015, PPC was a game of direct control. You told Google exactly which keywords to target, set manual bids at the keyword level, and capped spend with a daily budget. If you were good with spreadsheets and understood match types, you could build and manage 30,000-keyword accounts all day long.

Those days are gone.

In 2026, platform automation is no longer a helpful assistant. It’s the primary driver of performance. Fighting that reality is a losing battle. 

Automation has leveled the playing field and, in many cases, given PPC marketers back their time. But staying effective now requires a different skill set: understanding how automated systems learn and how your data shapes their decisions.

This article breaks down how signals actually work inside Google Ads, how to identify and protect high-quality signals, and how to prevent automation from drifting into the wrong pockets of performance.

Automation runs on signals, not settings

Google’s automation isn’t a black box where you drop in a budget and hope for the best. It’s a learning system that gets smarter based on the signals you provide. 

Feed it strong, accurate signals, and it will outperform any manual approach.

Feed it poor or misleading data, and it will efficiently automate failure.

That’s the real dividing line in modern PPC. AI and automation run on signals. If a system can observe, measure, or infer something, it can use it to guide bidding and targeting.

Google’s official documentation still frames “audience signals” primarily as the segments advertisers manually add to products like Performance Max or Demand Gen. 

That definition isn’t wrong, but it’s incomplete. It reflects a legacy, surface-level view of inputs and not how automation actually learns at scale.

Dig deeper: Google Ads PMax: The truth about audience signals and search themes

What actually qualifies as a signal?

In practice, every element inside a Google Ads account functions as a signal. 

Structure, assets, budgets, pacing, conversion quality, landing page behavior, feed health, and real-time query patterns all shape how the AI interprets intent and decides where your money goes. 

Nothing is neutral. Everything contributes to the model’s understanding of who you want, who you don’t, and what outcomes you value.

So when we talk about “signals,” we’re not just talking about first-party data or demographic targeting. 

We’re talking about the full ecosystem of behavioral, structural, and quality indicators that guide the algorithm’s decision-making.

Here’s what actually matters:

  • Conversion actions and values: These are 100% necessary. They tell Google Ads what defines success for your specific business and which outcomes carry the most weight for your bottom line.
  • Keyword signals: These indicate search intent. Based on research shared by Brad Geddes at a recent Paid Search Association webinar, even “low-volume” keywords serve as vital signals. They help the system understand the semantic neighborhood of your target audience.
  • Ad creative signals: This goes beyond RSA word choice. I believe the platform now analyzes the environment within your images. If you show a luxury kitchen, the algorithm identifies those visual cues to find high-end customers. I base this hypothesis on my experience running a YouTube channel. I’ve watched how the algorithm serves content based on visual environments, not just metadata.
  • Landing page signals: Beyond copy, elements like color palettes, imagery, and engagement metrics signal how well your destination aligns with the user’s initial intent. This creates a feedback loop that tells Google whether the promise of the ad was kept.
  • Bid strategies and budgets: Your bidding strategy is another core signal for the AI. It tells the system whether you’re prioritizing efficiency, volume, or raw profit. Your budget signals your level of market commitment. It tells the system how much permission it has to explore and test.

In 2026, we’ve moved beyond the daily cap mindset. With the expansion of campaign total budgets to Search and Shopping, we are now signaling a total commitment window to Google.

In Google’s announcement, UK retailer Escentual.com used this approach to signal a fixed promotional budget, resulting in a 16% traffic lift because the AI had permission to pace spend based on real-time demand rather than arbitrary 24-hour cycles.

All of these elements function as signals because they actively shape the ad account’s learning environment.

Anything the ad platform can observe, measure, or infer becomes part of how it predicts intent, evaluates quality, and allocates budget. 

If a component influences who sees your ads, how they behave, or what outcomes the algorithm optimizes toward, it functions as a signal.

The auction-time reality: Finding the pockets

To understand why signal quality has become critical, you need to understand what’s actually happening every time someone searches.

Google’s auction-time bidding doesn’t set one bid for “mobile users in New York.” 

It calculates a unique bid for every single auction based on billions of signal combinations at that precise millisecond. This considers the user, not simply the keyword.

We are no longer looking at performance in black-and-white terms.

We are finding pockets of performance: users the system predicts will complete the outcomes we define as goals in the platform.

The AI evaluates the specific intersection of a user on iOS 17, using Chrome, in London, at 8 p.m., who previously visited your pricing page. 

Because the bidding algorithm cross-references these attributes, it generates a precise bid. This level of granularity is impossible for humans to replicate. 

But this is also the “garbage in, garbage out” reality. Without quality signals, the system is forced to guess.
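
To make that concrete, here is a toy sketch, emphatically not Google's actual model, of what a per-auction bid means mechanically: estimate a conversion probability from the signal combination, then bid the expected value. Every weight and signal name below is invented for illustration.

```python
# Toy illustration only: auction-time bidding, conceptually.
# Estimate conversion probability from this auction's signals,
# then bid expected value = pCVR x conversion value.

def predict_cvr(signals: dict) -> float:
    """Hypothetical per-auction conversion-rate estimate.
    Real systems use large learned models; these hand-picked
    weights exist only to make the mechanics concrete."""
    cvr = 0.02                                   # baseline
    if signals.get("visited_pricing_page"):
        cvr *= 3.0                               # strong intent signal
    if 18 <= signals.get("hour", 0) <= 22:
        cvr *= 1.2                               # assumed evening lift
    if signals.get("device") == "mobile":
        cvr *= 0.8                               # assumed lower close rate
    return min(cvr, 1.0)

def auction_time_bid(signals: dict, conversion_value: float) -> float:
    """Bid the expected value of this specific auction."""
    return predict_cvr(signals) * conversion_value

# Same keyword, two very different bids:
high_intent = {"visited_pricing_page": True, "hour": 20, "device": "desktop"}
cold_traffic = {"visited_pricing_page": False, "hour": 9, "device": "mobile"}
print(auction_time_bid(high_intent, 150.0))   # ≈ 10.8
print(auction_time_bid(cold_traffic, 150.0))  # ≈ 2.4
```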

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

The signal hierarchy: What Google actually listens to

If every element in a Google Ads account functions as a signal, we also have to acknowledge that not all signals carry equal weight.

Some signals shape the core of the model’s learning. Others simply refine it.

Based on my experience managing accounts spending six and seven figures monthly, this is the hierarchy that actually matters.

Conversion signals reign supreme

Your tracking is the most important data point. The algorithm needs a baseline of 30 to 50 conversions per month to recognize patterns. For B2B advertisers, this often requires shifting from high-funnel form fills to down-funnel CRM data.

As Andrea Cruz noted in her deep dive on Performance Max for B2B, optimizing for a “qualified lead” or “appointment booked” is the only way to ensure the AI doesn’t just chase cheap, irrelevant clicks.
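
The mechanics of closing that loop are simpler than they sound. Here is a minimal sketch using the official google-ads Python client to upload a closed-won CRM deal as an offline click conversion; the account ID, conversion action ID, gclid, and deal value are all placeholders.

```python
# Minimal sketch: feeding a closed-won CRM deal back into Google Ads
# as an offline conversion via the google-ads Python client.
# All IDs, the gclid, and the deal value below are hypothetical.

from google.ads.googleads.client import GoogleAdsClient

customer_id = "1234567890"          # placeholder account ID
conversion_action_id = "987654321"  # placeholder "Closed deal" action

client = GoogleAdsClient.load_from_storage("google-ads.yaml")

conversion = client.get_type("ClickConversion")
conversion.gclid = "EAIaIQ..."      # click ID captured on the lead form
conversion.conversion_action = client.get_service(
    "ConversionActionService"
).conversion_action_path(customer_id, conversion_action_id)
conversion.conversion_date_time = "2026-02-10 14:30:00+00:00"
conversion.conversion_value = 50000.0  # the closed deal, not the form fill
conversion.currency_code = "USD"

request = client.get_type("UploadClickConversionsRequest")
request.customer_id = customer_id
request.conversions.append(conversion)
request.partial_failure = True  # required by this endpoint

response = client.get_service(
    "ConversionUploadService"
).upload_click_conversions(request=request)
print(response.results)
```

The key design choice is sending the closed deal's value rather than a flat "lead" count, so the bidder learns which clicks become revenue.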

Enhanced conversions and first-party data

We are witnessing a “death by a thousand cuts,” where browser restrictions from Safari and Firefox, coupled with aggressive global regulations, have dismantled the third-party cookie. 

Without enhanced conversions or server-side tracking, you are essentially flying blind, because the invisible trackers of the past are being replaced by a model where data must be earned through transparent value exchanges.
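
For context, enhanced conversions rest on a normalize-and-hash step: user-provided data is normalized and SHA-256 hashed before it leaves your site. The sketch below shows the general idea; exact normalization rules vary by field, so treat it as illustrative and check Google's current documentation.

```python
# Minimal sketch of the normalize-and-hash step behind enhanced
# conversions. Only the hash is transmitted, never the raw value.

import hashlib

def normalize_and_hash(value: str) -> str:
    """Lowercase, trim whitespace, then SHA-256 the UTF-8 bytes."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(normalize_and_hash("  Jane.Doe@example.com "))
# A deterministic 64-character hex digest, identical for any
# capitalization or whitespace variant of the same address.
```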

First-party audience signals

Your customer lists tell Google, “Here is who converted. Now go find more people like this.” 

Quality trumps quantity here. A stale or tiny list won’t be as effective as a list that is updated in real time.

Custom segments provide context

Using keywords and URLs to build segments creates a digital footprint of your ideal customer. 

This is especially critical in niche industries where Google’s prebuilt audiences are too broad or too generic.

These segments help the system understand the neighborhood your best prospects live in online.

To simplify this hierarchy, I’ve mapped out the most common signals used in 2026 by their actual weight in the bidding engine:

| Signal category | Specific input (the “what”) | Weight/impact | Why it matters in 2026 |
| --- | --- | --- | --- |
| Primary (Truth) | Offline conversion imports (CRM) | Critical | Trains the AI on profit, not just “leads.” |
| Primary (Truth) | Value-based bidding (tROAS) | Critical | Signals which products actually drive margin. |
| Secondary (Context) | First-party Customer Match lists | High | Provides a seed audience for the AI to model. |
| Secondary (Context) | Visual environment (images/video) | High | AI scans images to infer user “lifestyle” and price tier. |
| Tertiary (Intent) | Low-volume/long-tail keywords | Medium | Defines the “semantic neighborhood” of the search. |
| Tertiary (Intent) | Landing page color and speed | Medium | Signals trust and relevance feedback loops. |
| Pollutant (Noise) | “Soft” conversions (scrolls/clicks) | Negative | Dilutes intent; trains the AI to find “cheap clickers.” |

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Beware of signal pollution

Signal pollution occurs when low-quality, conflicting, or misleading signals contaminate the data Google’s AI uses to learn. 

It’s what happens when the system receives signals that don’t accurately represent your ideal client, your real conversion quality, or the true intent you want to attract in your ad campaigns.

Signal pollution doesn’t just “confuse” the bidding algorithm. It actively trains it in the wrong direction. 

It dilutes your high-value signals, expands your reach into low-intent audiences, and forces the model to optimize toward outcomes you don’t actually want.

Common sources include:

  • Bad conversion data, including junk leads, unqualified form fills, and misfires.
  • Overly broad structures that blend high- and low-intent traffic.
  • Creative that attracts the wrong people.
  • Landing page behavior that signals low relevance or low trust.
  • Budget or pacing patterns that imply you’re willing to pay for volume over quality.
  • Feed issues that distort product relevance.
  • Audience segments that don’t match your real buyer.

These sources create the initial pollution. But when marketers try to compensate for underperformance by feeding the machine more data, the root cause never gets addressed. 

That’s when soft conversions like scrolls or downloads get added as primary signals, even though none of them correlate with revenue.

Like humans, algorithms focus on the metrics they are fed.

If you mix soft signals with high-intent revenue data, you dilute the profile of your ideal customer. 

You end up winning thousands of cheap, low-value auctions that look great in a report but fail to move the needle on the P&L. 

Your job is to be the gatekeeper, ensuring only the most profitable signals reach the bidding engine.

When signal pollution takes hold, the algorithm doesn’t just underperform. The ads start drifting toward the wrong users, and performance begins to decline. 

Before you can build a strong signal strategy, you have to understand how to spot that drift early and correct it before it compounds.

How to detect and correct algorithm drift

Algorithm drift happens when Google’s automation starts optimizing toward the wrong outcomes because the signals it’s receiving no longer match your real advertising goals. 

Drift doesn’t show up as a dramatic crash. It shows up as a slow shift in who you reach, what queries you win, and which conversions the system prioritizes. It looks like a gradual deterioration of lead quality.

To stay in control, you need a simple way to spot drift early and correct it before the machine locks in the wrong pattern.

Early warning signs of drift include:

  • A sudden rise in cheap conversions that don’t correlate with revenue.
  • A shift in search terms toward lower-intent or irrelevant queries.
  • A drop in average order value or lead quality.
  • A spike in new-user volume with no matching lift in sales.
  • A campaign that looks healthy in-platform but feels wrong in the CRM or P&L.

These are all indicators that the system is optimizing toward the wrong signals.
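
A lightweight report can catch this early. The sketch below assumes you can export daily in-platform conversions and join them to revenue-qualified CRM records; the column names (`platform_conversions`, `crm_qualified`) are hypothetical.

```python
# Simple drift check: flag days where platform conversions trend up
# while the CRM-qualified share trends down, the classic signature
# of drift toward cheap, low-intent conversions.

import pandas as pd

def drift_report(df: pd.DataFrame, window: int = 14) -> pd.DataFrame:
    out = df.copy()
    out["qualified_rate"] = out["crm_qualified"] / out["platform_conversions"]
    out["conv_trend"] = out["platform_conversions"].rolling(window).mean()
    out["quality_trend"] = out["qualified_rate"].rolling(window).mean()
    out["drift_flag"] = (
        (out["conv_trend"] > out["conv_trend"].shift(window))
        & (out["quality_trend"] < out["quality_trend"].shift(window))
    )
    return out

# daily = pd.read_csv("daily_export.csv")
# columns: date, platform_conversions, crm_qualified
# print(drift_report(daily).tail())
```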

To correct drift without resetting learning:

  • Tighten your conversion signals: Remove soft conversions, misfires, or anything that doesn’t map to revenue. The machine can’t unlearn bad data, but you can stop feeding it.
  • Reinforce the right audience patterns: Upload fresh customer lists, refresh custom segments, and remove stale data. Drift often comes from outdated or diluted audience signals.
  • Adjust structure to isolate intent: If a campaign blends high- and low-intent traffic, split it. Give the ad platform a cleaner environment to relearn the right patterns.
  • Refresh creative to repel the wrong users: Creative is a signal. If the wrong people are clicking, your ads are attracting them. Update imagery, language, and value props to realign intent.
  • Let the system stabilize before making another change: After a correction, give the campaign 5-10 days to settle. Overcorrecting creates more drift.

Your job isn’t to fight automation in Google Ads; it’s to guide it. 

Drift happens when the machine is left unsupervised with weak or conflicting signals. Strong signal hygiene keeps the system aligned with your real business outcomes.

Once you can detect drift and correct it quickly, you’re finally in a position to build a signal strategy that compounds over time instead of constantly resetting.

The next step is structuring your ad account so every signal reinforces the outcomes you actually want.

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

Building a signal strategy that actually works in 2026

If you want to build a signal strategy that becomes a competitive advantage, you have to start with the foundations.

For lead gen

Implement offline conversion imports. The difference between optimizing for a “form fill” and a “$50K closed deal” is the difference between wasting budget and growing a business. 

When “journey-aware bidding” eventually rolls out, it will be a game-changer, because we’ll be able to feed the system data about the individual steps of a sale.

For ecommerce

Use value-based bidding. Don’t just count conversions. Differentiate between a customer buying a $20 accessory and one buying a $500 hero product.
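
One practical way to do that, assuming your setup lets you set conversion values per transaction, is to report margin instead of raw revenue so the bidder learns profit. A minimal sketch, with hypothetical product IDs and unit costs:

```python
# Minimal sketch: margin-based conversion values, so tROAS bidding
# optimizes toward profit rather than revenue. COGS is hypothetical.

COGS = {"hero-500": 250.0, "accessory-20": 14.0}  # product_id -> unit cost

def conversion_value(product_id: str, price: float, qty: int = 1) -> float:
    """Margin to report with the purchase conversion."""
    return (price - COGS[product_id]) * qty

print(conversion_value("hero-500", 500.0))     # 250.0 of margin
print(conversion_value("accessory-20", 20.0))  # 6.0 of margin
```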

Segment your data

Don’t just dump everyone into one list. A list of 5,000 recent purchasers is worth far more than 50,000 people who visited your homepage two years ago. 

Stale data hurts performance by teaching the algorithm to find people who matched your business 18 months ago, not today.
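
A quick freshness filter before each Customer Match upload keeps those stale records out of the signal pool. A minimal sketch, assuming a CSV export with hypothetical `email` and `last_purchase_date` columns:

```python
# Minimal sketch: keep only recent purchasers as the seed list
# before a Customer Match upload. Column names are hypothetical.

import csv
from datetime import datetime, timedelta

CUTOFF = datetime.now() - timedelta(days=180)  # tune to your sales cycle

with open("customers.csv", newline="") as f:
    recent = [
        row["email"]
        for row in csv.DictReader(f)
        if datetime.fromisoformat(row["last_purchase_date"]) >= CUTOFF
    ]

print(f"{len(recent)} recent purchasers kept for upload")
```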

Separate brand and nonbrand campaigns

Brand traffic carries radically different intent and conversion rates than nonbrand. 

Mixing these campaigns forces the algorithm to average two incompatible behaviors, which muddies your signals and inflates your ROAS expectations. 

Brand should be isolated so it doesn’t subsidize poor nonbrand performance or distort bidding decisions in the ad platform.

Don’t mix high-ticket and low-ticket products under one ROAS target

A $600 product and a $20 product do not behave the same in auction-time bidding. 

When you put them in the same campaign with a single 4x ROAS target, the algorithm will get confused. 

This trains the system away from your hero products and toward low-value volume.
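
A quick worked example shows why. With a single 4x target, the low-ticket product can carry the blended number while the hero product quietly runs far below target (all figures are illustrative):

```python
# Worked example: a blended tROAS target masks per-product reality.

campaigns = {
    # product: (spend, revenue)
    "hero-600":     (1_000.0, 2_400.0),  # 2.4x, well below the 4x target
    "accessory-20": (1_000.0, 5_600.0),  # 5.6x, well above the target
}

total_spend = sum(spend for spend, _ in campaigns.values())
total_revenue = sum(revenue for _, revenue in campaigns.values())
print(f"Blended ROAS: {total_revenue / total_spend:.1f}x")  # 4.0x: target "met"

for product, (spend, revenue) in campaigns.items():
    print(f"{product}: {revenue / spend:.1f}x")
```

The blended campaign reports 4.0x and looks healthy, so the bidder keeps shifting spend toward the accessory, which is exactly the drift described above.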

Centralize campaigns for data density, but only when the data belongs together

Google’s automation performs best when it has enough consistent, high-quality data to recognize patterns. That means fewer, stronger campaigns are better, as long as the signals inside them are aligned. 

Centralize campaigns when products share similar price points, margins, audiences, and intent. Decentralize campaigns when mixing them would pollute the signal pool.

The competitive advantage of 2026

When everyone has access to the same automation, the only real advantage left is the quality of the signals you feed it. 

Your job is to protect those signals, diagnose pollution early, and correct drift before the system locks onto the wrong patterns.

Once you build a deliberate signal strategy, Google’s automation stops being a constraint and becomes leverage. You stay in the loop, and the machine does the heavy lifting.

Anthropic says Claude will remain ad-free as ChatGPT tests ads

AI ad free vs. ad supported

Anthropic is drawing the line against advertising in AI chatbots. Claude will remain ad-free, the company said, even as rival AI platforms experiment with sponsored messages and branded placements inside conversations.

  • Ads inside AI chats would erode trust, warp incentives, and clash with how people actually use assistants like Claude (for work, problem-solving, and sensitive topics), Anthropic said in a new blog post.

Why we care. Anthropic’s position removes Claude, and its user base of 30 million, from the AI advertising equation. Brands shouldn’t expect sponsored links, conversations, or responses inside Claude. Meanwhile, ChatGPT is about to give brands the opportunity to reach an estimated 800 million weekly users.

What’s happening. AI conversations are fundamentally different from search results or social feeds, where users expect a mix of organic and paid content, Anthropic said:

  • Many Claude interactions involve personal issues, complex technical work, or high-stakes thinking. Dropping ads into those moments would feel intrusive and could quietly influence responses in ways users can’t easily detect.
  • Ad incentives tend to expand over time, gradually optimizing for engagement rather than genuine usefulness.

Incentives matter. This is a business-model decision, not just a product preference, Anthropic said:

  • An ad-free assistant can focus entirely on what helps the user — even if that means a short exchange or no follow-up at all.
  • An ad-supported model, by contrast, creates pressure to surface monetizable moments or keep users engaged longer than necessary.
  • Once ads enter the system, users may start questioning whether recommendations are driven by help or by commerce.

Anthropic isn’t rejecting commerce. Claude will still help users research, compare, and buy products when they ask. The company is also exploring “agentic commerce,” where the AI completes tasks like bookings or purchases on a user’s behalf.

  • Commerce should be triggered by the user, not by advertisers, Anthropic said.
  • The same rule applies to third-party integrations like Figma or Asana. These tools will remain user-directed, not sponsored.

Super Bowl ad. Anthropic is making the argument publicly and aggressively. In a Super Bowl debut, the company mocked intrusive AI advertising by inserting fake product pitches into personal conversations. The ad closed with a clear message: “Ads are coming to AI. But not to Claude.”

  • The campaign appears to be a direct shot at OpenAI, which has announced plans to introduce ads into ChatGPT.

Claude’s blog post. Claude is a space to think

OpenAI responds. OpenAI CEO Sam Altman posted some thoughts on X. Some of the highlights:

  • “…I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.
  • “I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it.
  • “Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.
  • “We will continue to work hard to make even more intelligence available for lower and lower prices to our users.”
