The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description Salary: $75,000-$90,000 Hanson is seeking a data-driven strategist to join our team as a Digital Marketing Strategist. This role bridges the gap between marketing strategy, analytics and technology to help ensure our clients’ websites and digital tools perform at their highest potential. You’ll work closely with cross-functional teams to optimize digital experiences, drive […]
  • Join Aya Healthcare, winner of multiple Top Workplace awards! We’re seeking a motivated SEO Strategist to join our fast-paced marketing team and help drive organic growth across multiple healthcare brands and websites under the Aya Healthcare umbrella. This role offers an exceptional opportunity to gain comprehensive corporate SEO experience while working alongside industry-leading professionals. Reporting […]
  • Who We Are With a legacy spanning four decades, Action Property Management has become the premier choice for homeowner’s association management. Founded in 1984, Action began with a single client and a vision to elevate ethical and professional standards in the HOA industry. Our unwavering commitment to integrity and professionalism, coupled with our core values […]
  • Job Description PLUS Incentive & Rich Benefit Plan Position Summary The Digital Marketing Manager is a key role responsible for the strategy, execution, and optimization of Olympic Hot Tub’s digital marketing efforts. You will work closely with the Company President and external partners to develop and manage cohesive digital campaigns that drive qualified traffic, generate […]
  • Job Description At VAL-CO we work together as a global leader in providing innovative, value-focused products and services to the poultry, livestock and horticultural industries. We believe in all that we do by valuing people, integrity, quality, profitability, and stewardship. VAL-CO recognizes the importance and value of our employees and their families, and our customers […]
  • POSITION DESCRIPTION Position: Website Content Manager Department: Office of Communications and Public Relations Reports To: Executive Director of Communications and Public Relations Classification: Exempt General Description The Website Content Manager develops, maintains, and optimizes archdiocesan websites and content to shape our online presence and ensure they align with and support the mission and priorities of […]
  • Job Type: Full-Time (Exempt) Salary: $62,000 – $67,000 The Performance Marketing Specialist is responsible for optimizing QuaverEd’s website experiences to drive lead generation, trial conversion, and overall marketing performance. This role combines analytical insight, SEO strategy, and conversion rate optimization to improve how users discover, engage with, and move through QuaverEd’s digital funnel. Working closely with […]
  • Join our Team – Come for a Job Stay for a Career! Wearwell is a global industry leader in the anti-fatigue matting market. Our team members are more than just another number – they are family. As our business grows, so must we. We are seeking a Digital Marketing and E-Commerce Specialist to join our […]
  • We are looking for an experienced Senior SEO Specialist to lead advanced SEO strategy development, oversee multiple client projects, and drive measurable results in organic performance. This is a leadership-oriented position for a professional who combines deep technical expertise, strong analytical thinking, and strategic vision. As a Senior SEO Specialist, you’ll take ownership of SEO processes from comprehensive audits to keyword strategy, content architecture, and reporting while mentoring […]
  • Job Description Hi, we’re TechnologyAdvice. At TechnologyAdvice, we pride ourselves on helping B2B tech buyers manage the complexity and risk of the buying process. We are a trusted source of information for tech buyers, delivering advice and facilitating connections between our buyers and the world’s leading sellers of business technology. Headquartered in Nashville, Tennessee, we […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

Other roles you may be interested in

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

Paid Search Director, Grey Matter Recruitment (Remote)

  • Salary: $130,000 – $150,000
  • Own the activation and execution of Paid Search & Shopping activity across the Google Suite
  • Support wider eCommerce, Search and Digital team on strategy and plans

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize, and drive efficiency for Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

SEM (Search Engine Marketing) Manager, Tribute Technology (Remote)

  • Salary: $85,000 – $90,000
  • PPC Campaign Management: Execute and optimize multiple Google Ad campaigns and accounts simultaneously.
  • SEO Strategy Management: Develop and manage on-page SEO strategies for client websites using tools like Ahrefs.

Search Engine Optimization Manager, Robert Half (Hybrid, Boston MA)

  • Salary: $150,000 – $160,000
  • Strategic Leadership: Define and lead the strategy for SEO, AEO, and LLMs, ensuring alignment with overall business and product goals.
  • Roadmap Execution: Develop and implement the SEO/AEO/LLM roadmap, prioritizing performance-based initiatives and driving authoritative content at scale.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Performance Max built-in A/B testing for creative assets spotted

Google is rolling out a beta feature that lets advertisers run structured A/B tests on creative assets within a single Performance Max asset group. Advertisers can split traffic between two asset sets and measure performance in a controlled experiment.

Why we care. Creative testing inside Performance Max has mostly relied on guesswork. Google’s new native A/B asset experiments bring controlled testing directly into PMax — without spinning up separate campaigns.

How it works. Advertisers choose one Performance Max campaign and asset group, then define a control asset set (existing creatives) and a treatment set (new alternatives). Shared assets can run across both versions. After setting a traffic split — such as 50/50 — the experiment runs for several weeks before advertisers apply the winning assets.

Why this helps. Running tests inside the same asset group isolates creative impact and reduces noise from structural campaign changes. The controlled split gives clearer reporting and helps teams make rollout decisions based on performance data rather than assumptions.

Early lessons. Initial testing suggests short experiments — especially under three weeks — often produce unstable results, particularly in lower-volume accounts. Longer runs and avoiding simultaneous campaign changes improve reliability.

Bottom line. Performance Max is becoming more testable. Advertisers can now validate creative decisions with built-in experiments instead of relying on trial and error.

First seen. A Google Ads expert spotted the update and shared his view on LinkedIn.

Google Ads adds a diagnostics hub for data connections

Google Ads rolled out a new data source diagnostics feature in Data Manager that lets advertisers track the health of their data connections. The tool flags problems with offline conversions, CRM imports, and tagging mismatches.

How it works. A centralized dashboard assigns clear connection status labels — Excellent, Good, Needs attention, or Urgent — and surfaces actionable alerts. Advertisers can spot issues like refused credentials, formatting errors, and failed imports, alongside a run history that shows recent sync attempts and error counts.

Why we care. When conversion data breaks, campaign optimization breaks with it. Even small connection failures can quietly skew conversion tracking and weaken automated bidding. This diagnostic tool helps teams catch and fix issues early, protecting performance and reporting accuracy. If you rely on CRM imports or offline conversions, this provides a much-needed safety net.

Who benefits most. The feature is especially useful for advertisers running complex conversion pipelines, including Salesforce integrations and offline attribution setups, where small disruptions can quickly cascade into bidding and reporting issues.

The bigger picture. As automated bidding leans more heavily on accurate first-party data, visibility into data pipelines is becoming just as critical as campaign settings themselves.

Bottom line. Google Ads is giving advertisers an early warning system for data failures, helping teams fix broken connections before performance takes a hit.

First seen. The update was first spotted by digital marketer Georgi Zayakov, who shared the new option on LinkedIn.

Performance Max reporting for ecommerce: What Google is and isn’t showing you

Performance Max has come a long way since its rocky launch. Many advertisers once dismissed it as a half-baked product, but Google has spent the past 18 months fixing real issues around transparency and control. If you wrote Performance Max off before, it’s time to take another look.

Mike Ryan, head of ecommerce insights at Smarter Ecommerce, explained why at the latest SMX Next.

Taking a fresh look at Performance Max

Performance Max traces its roots to Smart Shopping campaigns, which Google rolled out with red carpet fanfare at Google Marketing Live in 2019.

Even then, industry experts warned that transparency and control would become serious issues. They were right — and only now has Google begun to address those concerns openly.

Smart Shopping marked the low point of black-box advertising in Google Ads, at least for ecommerce. It stripped away nearly every control advertisers relied on in Standard Shopping:

  • Promotional controls.
  • Modifiers.
  • Negative keywords.
  • Search terms reporting.
  • Placement reporting.
  • Channel visibility.

Over the past 18 months, Performance Max has brought most of that functionality back, either partially or in full.

Understanding Performance Max search terms

Search terms are a core signal for understanding the traffic you’re actually buying. In Performance Max, most spend typically flows to the search network, which makes search term reporting essential for meaningful optimization.

Google even introduced a Performance Max match type — something few of us ever expected to see. That’s a big deal. It delivers properly reportable data that works with the API, should be scriptable, and finally includes cost and time dimensions that were completely missing before.

Search term insights vs. campaign search term view

Google’s first move to crack open the black box was search term insights. These insights group queries into search categories — essentially prebuilt n-grams — that roll up data at a mid-level and automatically account for typos, misspellings, and variants.

The problem? The metrics are thin. There’s no cost data, which means no CPC, no ROAS, and no real way to evaluate performance.

The real breakthrough is the new campaign-level search term view, now available in both the API and the UI.

Historically, search term reporting lived at the ad group level. Since Performance Max doesn’t use ad groups, that data had nowhere to go.

Google fixed this by anchoring search terms at the campaign level instead. The result is access to far more segments and metrics — and, finally, proper reporting we can actually use.

The main limitation: this data is available only at the search network level, without separating search from shopping. That means a single search term may reflect blended performance from both formats, rather than a clean view of how each one performed.

Search theme reporting

Search themes act as a form of positive targeting in Performance Max. You can evaluate how they’re performing through the search term insights report, which includes a Source column showing whether traffic came from your URLs, your assets, or the search themes you provided.

By totaling conversion value and conversions, you can see whether your search themes are actually driving results — or just sitting idle.

There’s more good news ahead. Google appears to be working on bringing Dynamic Search Ads and AI Max reports into Performance Max. That would unlock visibility into headlines, landing pages, and the search terms triggering ads.

Search term controls and optimization

Negative keywords

Negative keywords are now fully supported in Performance Max. At launch, Google capped campaigns at 100 negatives, offered no API access, and blocked negative keyword lists — clearly positioning the feature for brand safety, not performance.

That’s changed. Negative keywords now work with the API, support shared lists, and give advertisers real control over performance.

These negatives apply across the entire search network, including both search and shopping. Brand exclusions are the exception — you can choose to apply those only to search campaigns if needed.

Brand exclusions

Performance Max doesn’t separate brand from generic traffic, and it often favors brand queries because they’re high intent and tend to perform well. Brand exclusions exist, but they can be leaky, with some brand traffic still slipping through. If you need strict control, negative keywords are the more reliable option.

Also, Performance Max — and AI Max — may aggressively bid on competitor terms. That makes brand and competitor exclusions important tools for protecting spend and shaping intent.

Optimization strategy

Here’s a simple heuristic for spotting search terms that need attention:

  • Calculate the average number of clicks it takes to generate a conversion.
  • Identify search terms with more clicks than that average but zero conversions.

Those terms have had a fair chance to perform and didn’t. They’re strong candidates for negative keywords.
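
As a rough illustration, here’s that heuristic as a short Python sketch (the sample data and field names are placeholders, not Google Ads API output):

```python
# Flag search terms that have spent more than an average conversion's
# worth of clicks without converting once.
search_terms = [
    {"term": "running shoes sale",  "clicks": 120, "conversions": 4},
    {"term": "free shoe repair",    "clicks": 45,  "conversions": 0},
    {"term": "trail running shoes", "clicks": 12,  "conversions": 0},
]

total_clicks = sum(t["clicks"] for t in search_terms)
total_conversions = sum(t["conversions"] for t in search_terms)
avg_clicks_per_conversion = total_clicks / max(total_conversions, 1)

negative_candidates = [
    t["term"]
    for t in search_terms
    if t["clicks"] > avg_clicks_per_conversion and t["conversions"] == 0
]

print(f"Average clicks per conversion: {avg_clicks_per_conversion:.1f}")
print("Negative keyword candidates:", negative_candidates)
```

In this sample, “free shoe repair” has burned through more clicks than it typically takes to convert, with nothing to show for it, so it surfaces as a candidate.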

That said, don’t overcorrect.

Long-tail dynamics mean a search term that doesn’t convert this month may matter next month. You’re also working with a finite set of negative keywords, so use them deliberately and prioritize the highest-impact exclusions.

Modern optimization approaches

It’s not 2018 anymore — you shouldn’t spend hours manually reviewing search terms. Automate the work instead.

Use the API for high-volume accounts, scripts for medium volume, and automated reports from the Report Editor for smaller accounts (though the Report Editor still doesn’t support Performance Max search terms).

Layer in AI for semantic review to flag irrelevant terms based on meaning and intent, then step in only for final approval. Search term reporting can be tedious, but with Google’s prebuilt n-grams and modern AI tools, there’s a smarter way to handle it.

Channels and placements reporting

Channel performance report

The channel performance report — not just for Performance Max — breaks performance out by network, including Discover, Display, Gmail, and more. It’s useful for channel visibility and understanding view-through versus click-through conversions, as well as how feed-based delivery compares to asset-driven performance.

The report includes a Sankey diagram, but it isn’t especially intuitive. The labeling is confusing and takes some decoding:

  • Search Network: Feed-based equals Shopping ads; asset-based equals RSAs and DSAs.
  • Display Network: Feed-based equals dynamic remarketing; asset-based equals responsive display ads.

Google also announced that Search Partner Network data is coming, which should add another layer of useful performance visibility.

Channel and placement controls

Unlike Demand Gen, where you can choose exactly which channels to run on, Performance Max doesn’t give you that control. You can try to influence the channel mix through your ROAS target and budget, but it’s a blunt instrument — and a slippery one at best.

Placement exclusions

The strongest control you have is excluding specific placements. Placement data is now available through the API — limited to impressions and date segments — and can also be reviewed in the Report Editor. Use this data alongside the content suitability view to spot questionable domains and spammy placements.

For YouTube, pay close attention to political and children’s content. If a placement feels irrelevant or unsafe for your brand, there’s a good chance it isn’t driving meaningful performance either.

Tools for placement review

If you run into YouTube videos in languages you don’t speak, use Google Sheets’ built-in GOOGLETRANSLATE function. It’s faster and more reliable than AI for quick translation.
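
For example, with a video title in cell A2, a formula like this returns an English translation (the cell reference is illustrative):

```
=GOOGLETRANSLATE(A2, "auto", "en")
```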

You can also use AI-powered formulas in Sheets to do semantic triage on placements, not just search terms. These tools are just formulas, which means this kind of analysis is accessible to anyone.

Search Partner Network

Unfortunately, there’s no way to opt out of the Search Partner Network in Performance Max. You can exclude individual search partners, but there are limits.

Prioritize exclusions based on how questionable the placement looks and how much volume it’s receiving. Also note that Google-owned properties like YouTube and Gmail can’t be excluded.

Based on Standard Shopping data, the Search Partner Network consistently performs meaningfully worse than the Google Search Network. Excluding poor performers is recommended.

Device reporting and targeting

Creating a device report is easy — just add device as a segment in the “when and where ads showed” view. The tricky part is making decisions.

Device analysis

For deeper insight, dig into item-level performance in the Report Editor. Add device as a segment alongside item ID and product titles to see how individual products behave across devices. Also, compare competitor performance by device — you may spot meaningful differences that inform your strategy.

For example, you may perform far better on desktop than on mobile compared to competitors like Amazon, signaling either an opportunity or a risk.

Device targeting considerations

Device targeting is available in Performance Max and is easy to use, much like channel targeting in Demand Gen. But when you split campaigns by device, you also split your conversion data and volume — and that can hurt results.

Before you separate campaigns by device, consider:

  • How competition differs by device
  • Performance at the item and retail category level
  • The impact on overall data volume

Performance Max performs best with more data. Campaigns with low monthly conversion volume often miss their targets and rarely stay on pace. As more data flows through a campaign, Performance Max gets better at hitting goals and less likely to fall short.

Any gains from splitting by device can disappear if the algorithm doesn’t have enough data to learn. Only split when both resulting campaigns have enough volume to support effective machine learning.

Conclusion

Performance Max has changed dramatically since launch. With search term reporting, negative keywords, channel visibility, placement controls, and device targeting now available, advertisers have far more transparency and control than ever before.

It’s still not perfect — channel targeting limits and data fragmentation remain — but Performance Max is fundamentally different and far more manageable.

Success comes down to knowing what data you have, how to access it efficiently using modern tools like AI and automation, and when to apply controls based on performance insights and data volume needs.

Watch: PMax reporting for ecommerce: What Google is (and isn’t) showing you

Explore how to make smarter use of search terms, channel and placement reports, and device-level performance to improve campaign control.

Why content that ranks can still fail AI retrieval

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations. 

In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.

This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals. 

AI systems don’t. 

They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.

When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings. 

At that point, visibility in search and visibility in AI diverges. The page exists in the index, but its meaning doesn’t survive retrieval.

The visibility gap: Ranking vs. retrieval

Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.

AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.

This difference is where the visibility gap forms. 

A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.

Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.

Dig deeper: What is GEO (generative engine optimization)?

Structural failure 1: When content never reaches AI

One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.

This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.

In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.

How to tell if your content is returned in the initial HTML

The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.

Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.

To do this, open a terminal (Command Prompt on Windows) and run a curl request against the page you want to test.
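
A minimal example, using a placeholder URL and a simplified GPTBot user-agent string:

```
curl -A "GPTBot" https://www.example.com/
```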

Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.

From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.

This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.

If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.

Why heavy code still hurts retrieval, even when content is present

Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.

AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.

This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.
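
One rough way to gauge that ratio is to compare the visible text on a page against the size of the raw HTML it arrives in. Here’s a minimal sketch in Python using requests and BeautifulSoup (the URL is a placeholder, and the ratio is a heuristic, not an official metric):

```python
import requests
from bs4 import BeautifulSoup

# Fetch the raw HTML exactly as a non-rendering crawler would receive it.
html = requests.get("https://www.example.com/", timeout=10).text

soup = BeautifulSoup(html, "html.parser")

# Strip markup that carries no extractable meaning.
for tag in soup(["script", "style", "noscript", "template"]):
    tag.decompose()

visible_text = soup.get_text(separator=" ", strip=True)
ratio = len(visible_text) / max(len(html), 1)

print(f"Visible text:       {len(visible_text):,} chars")
print(f"Raw HTML:           {len(html):,} chars")
print(f"Text-to-HTML ratio: {ratio:.1%}")  # Lower ratios mean more noise around the meaning
```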

What actually fixes retrieval failures

The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time. 

In practice, this can usually be achieved in one of two ways: 

  • Pre-rendering the page.
  • Ensuring clean and complete content delivery in the initial HTML response.

Pre-rendered HTML

Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.

This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.

AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.

The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion. 

This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.

From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
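
At its simplest, that edge logic is a user-agent branch. Here’s a hedged sketch in Python (the crawler tokens, snapshot lookup, and handler names are illustrative assumptions, not any specific CDN’s API):

```python
# Sketch of edge-style routing: known AI crawlers get a pre-rendered
# HTML snapshot; everyone else gets the dynamic application.
AI_CRAWLER_TOKENS = ("gptbot", "claudebot", "perplexitybot", "ccbot")

def handle_request(path: str, user_agent: str) -> str:
    ua = user_agent.lower()
    if any(token in ua for token in AI_CRAWLER_TOKENS):
        return load_prerendered_snapshot(path)  # No JS execution required
    return render_dynamic_app(path)

def load_prerendered_snapshot(path: str) -> str:
    # Placeholder: in production this would read from an edge cache or KV store.
    return f"<html><body><h1>Pre-rendered content for {path}</h1></body></html>"

def render_dynamic_app(path: str) -> str:
    # Placeholder for the normal client-side application shell.
    return f'<html><body><div id="app" data-path="{path}"></div></body></html>'

print(handle_request("/pricing", "Mozilla/5.0 (compatible; GPTBot/1.1)"))
```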

Clean initial content delivery

Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.

Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML. 

Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.

From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether. 

These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.

Structural failure 2: When content is optimized for keywords, not entities

Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.

While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.

AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.

When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse.

The content may rank for a query, but its meaning remains ambiguous at the vector level.

This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.

Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.

Without explicit definition, entity signals weaken and associations fragment.


Structural failure 3: When structure can’t carry meaning

AI systems don’t consume content as complete pages.

Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.

Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.

Detailed header tags

Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.

Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. Weak headers produce weak signals, even when the underlying content is solid.

Dig deeper: The most important HTML tags to use for SEO success

Single-purpose sections

Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.

Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.

Structural failure 4: When conflicting signals dilute meaning

Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.

Common sources include:

Conflicting canonicals

When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions. 

The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.

Inconsistent metadata

Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.

Duplicated or lightly repeated sections

Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.

Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.
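
A quick way to catch the metadata side of this is to diff titles (or descriptions) across URLs that should represent the same topic. A minimal sketch using only Python’s standard library (the URL list and the 0.5 similarity cutoff are illustrative assumptions):

```python
from difflib import SequenceMatcher
from itertools import combinations

# Titles pulled from pages that target the same topic.
pages = {
    "/guide/":         "Complete Guide to Widget Sizing | Acme",
    "/guide/?ref=nav": "Widget Sizing Guide - Acme Inc",
    "/guide/print/":   "Complete Guide to Widget Sizing | Acme",
}

# Near-duplicate, non-identical metadata sends competing signals.
for (url_a, title_a), (url_b, title_b) in combinations(pages.items(), 2):
    similarity = SequenceMatcher(None, title_a, title_b).ratio()
    if title_a != title_b and similarity > 0.5:
        print(f"Inconsistent metadata: {url_a} vs {url_b} ({similarity:.0%} similar)")
```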

Complete visibility requires ranking and retrieval

SEO has always been about visibility, but visibility is no longer a single condition.

Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.

Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.

The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.

Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.

Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

How PR teams can measure real impact with SEO, PPC, and GEO

PR measurement often breaks down in practice.

Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.

That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.

Working together, these teams can help PR do three things that are hard to accomplish alone:

  • Show the connection between media outreach and customer action.
  • Incorporate SEO – and now generative engine optimization (GEO) – into measurement programs.
  • Select tools that match the metrics that actually matter.

This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

Digital communication isn’t linear – and measurement shouldn’t be either

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.

In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.

That’s why measurement must start by defining the response sought, not by counting outputs.

SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.

PR measurement becomes dramatically more actionable when it adopts the same mindset.

Step 1: Show the connection between media outreach and customer action

PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”

The answer usually exists in the data. It’s just spread across systems owned by different teams.

SEO and paid media teams already track:

  • Branded and non-branded search demand.
  • Landing-page behavior.
  • Conversion paths.
  • Assisted conversions across channels.

By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.

Practical examples

  • Spikes in branded search following major media placements.
  • Referral traffic from earned links and how those visitors behave compared to other sources.
  • Increases in conversions or sign-ups after coverage appears in authoritative publications.
  • Assisted conversions where media exposure precedes search or paid clicks.

Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.

This reframes PR from a cost center to a demand-creation channel.

Matt Bailey, a digital marketing author, professor, and instructor, said:

  • “The value of PR has been well-known by SEOs for some time. A great article pickup can influence rankings almost immediately. This was the golden link – high domain popularity, ranking impact, and incoming visitors – of which PR activities were the predominant influence.”

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma


Step 2: Incorporate SEO into PR measurement – then go one step further with GEO

Most communications professionals now accept that SEO matters. 

What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.

Traditional PR metrics focus on:

  • Volume of coverage.
  • Share of voice.
  • Sentiment.

SEO-informed PR adds new outcome-level indicators:

  • Authority of linking domains, not just link counts.
  • Visibility for priority topics, not just brand mentions.
  • Search demand growth tied to campaigns or announcements.

These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”

Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.

Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.

For PR and communications teams, this is a natural extension of credibility building:

  • Is your organization cited by AI systems as an authoritative source?
  • Do AI-generated summaries reflect your key messages accurately?
  • Are competitors shaping the narrative instead?

Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.

The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.

David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:

  • “Real-time content creation has always been an effective way of communicating online. But now, in the age of AI-powered search, it has become even more important. The organizations that monitor continually, act decisively, and publish quickly will become the ones people turn to for clarity. And because AI tools increasingly mediate how people experience the world, those same organizations will also become the voices that artificial intelligence amplifies.”

Dig deeper: A 90-day SEO playbook for AI-driven search visibility

Step 3: Select tools based on the response sought – not on what’s fashionable

One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.

A useful framework is to work backward from the action you want audiences to take.

If the response sought is awareness or understanding:

  • Brand lift studies (from Google, Meta, and Nielsen) measure changes in awareness, favorability, and message association.
  • These tools help PR teams demonstrate impact beyond raw reach.

If the response sought is engagement or behavior:

  • Web and campaign analytics track key events such as downloads, sign-ups, or visits to priority pages.
  • User behavior tools like heatmaps and session recordings reveal whether content actually helps users accomplish tasks.

If the response sought is long-term influence:

  • SEO visibility metrics show whether coverage improves authority and topic ownership.
  • GEO tools reveal whether AI systems recognize and reuse your content.

The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.

Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said: 

  • “If PR professionals want to prove their impact, they need to go beyond tracking SEO to also understand their visibility in GEO as well. Search is where today’s purchasing and other decision making starts, and we’ve known for a while that good (or bad) press coverage drives searches for a brand. Which is why we’ve been advising PR professionals who want to prove their impact on the brand to ‘bake cookies and befriend’ the SEO folks within their companies. Today as more and more people rely on AI search for their answers, the value of traditional blue SEO links is declining faster than the value of a Tesla. As a result, understanding and ultimately quantifying how and where your brand is showing up in AI search (aka GEO) is critical.”

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Why collaboration beats reinvention

PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.

What’s required is shared ownership of outcomes.

When these groups collaborate:

  • PR informs SEO about narrative priorities and upcoming campaigns.
  • SEO provides PR with data on audience demand and search behavior.
  • PPC teams validate messaging by testing what actually drives action.
  • Measurement becomes cumulative, not competitive.

This reduces duplication, saves budget, and produces insights that no single team could generate alone.

Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.

Today, tools are cheaper – or free – but the rule still holds.

The most valuable asset isn’t software. It’s professionals who can:

  • Ask the right questions.
  • Interpret data responsibly.
  • Translate insights into decisions.

Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.

Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.” 

I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Measurement isn’t about proving value – it’s about improving it

The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.

By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.

The theory hasn’t changed.

The opportunity to measure what matters is finally catching up.

Why most B2B buying decisions happen on Day 1 – and what video has to do with it

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:

  • The “viral” top-of-funnel asset that gets views but no leads.
  • The dry bottom-of-funnel product demo that gets leads but no views.

This binary thinking is breaking your pipeline.

In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.

When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving as many as 1.4x more leads.

Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.

The reality: The ‘first impression rose’

The window to influence a deal closes much earlier than most marketers realize.

LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.

Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.

If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.

That requires a three-play strategy.

Play 1: Reach and prime the ‘hidden’ buying committee

The goal: Reach the people who can say ‘no’

Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.

Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign. 

But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”

In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be bought when the entire buying group – not just the user – knows you on Day 1.

The strategic shift: Cut-through creative

To reach that broader group, you can’t just be present. You have to be memorable. You need both reach and recall.

LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:

  • Be bold: Video ads featuring bold, distinctive colors see a 15% increase in engagement.
  • Be process-oriented: Messaging broken down into clear, visual steps drives 13% higher dwell times.
  • The “Goldilocks” length: Short videos between 7-15 seconds are the sweet spot for driving brand lift – outperforming both very short (under 6 seconds) and long-form ads.
  • The “Silent Movie” rule: Design for the eye, not the ear. 79% of LinkedIn’s audience scrolls with sound off. If your video relies on a talking head to explain the value prop in the first 5 seconds, you’ve lost 80% of the room. Use visual hooks and hard-coded captions to earn attention instantly.

Dig deeper: 5 tips to make your B2B content more human

Play 2: Educate and nudge by selling ‘buyability’

The goal: Mitigate personal and professional risk

This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).

When a B2B buyer is shortlisting vendors, they’re navigating career risk. 

Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”

The strategic shift: Market the safety net

To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?

Momentum is safety (the “buzz” effect)

Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”

You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement. 

When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.

Authority builds trust (the “expert” effect)

If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.

Video ads featuring executive experts see 53% higher engagement.

When those experts are filmed on a conference stage, engagement lifts by 70%.

Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”

Consistency is credibility

You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.

Dig deeper: The future of B2B authority building in the AI search era


Play 3: Convert and capture by removing friction

The goal: Stop convincing, start helping

By this stage, the buyer knows you (Play 1) and trusts you (Play 2). 

Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.

Buyers at this stage feel three specific types of risk:

  • Execution risk: “Will this actually work for us?”
  • Decision risk: “What if I’m choosing wrong?”
  • Effort risk: “How much work is implementation?”

That’s why recommendations, relationships, and being relatable help close deals.

The strategic shift: Answer the anxiety

Your creative should directly answer those anxieties.

Scale social proof – kill execution risk

90% of buyers say social proof is influential information. But don’t just post a logo. 

Use video to show the peer. When a buyer sees someone with their exact job title succeeding, decision risk evaporates.

Activate your employees – kill decision risk

People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.

The stat that surprises most leaders. Just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data. 

Show the humans who’ll answer the phone when things break.

The conversion combo – kill effort risk

Don’t leave them hanging with a generic “Learn More” button.

We see 3x higher lead gen open rates when video ads are combined directly with lead gen forms. 

The video explains the value, the form captures the intent instantly.

  • Short sales cycle (under 30 days): Use video and lead gen forms for speed.
  • Long sales cycle: Retarget video viewers with message ads from a thought leader. Don’t ask for a sale; start a conversation.

Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing

It’s a flywheel, not a funnel

If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.

In most organizations, “brand” teams and “demand” teams operate in silos. 

  • Brand owns the top of the funnel (Play 1). 
  • Demand owns the bottom (Play 3). 

They fight over budget and rarely coordinate creative.

This fragmentation kills the multiplier effect.

When you break down those silos and run these plays as a single system, the data changes.

Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.

It creates a flywheel:

  • Your broad reach (Play 1) builds the retargeting pools.
  • Your educational content (Play 2) warms up those audiences, lifting CTRs.
  • Your conversion offers (Play 3) capture demand from buyers who are already sold, lowering your CPL.

The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.

And the ones on that list are the ones that win the revenue.

Google & Bing don’t recommend separate markdown pages for LLMs

Representatives from both the Google Search and Bing Search teams are recommending against creating separate markdown (.md) pages for LLM purposes. The idea behind such pages is to serve one piece of content to the LLM and another piece of content to your users, which technically may be considered a form of cloaking and a violation of Google’s policies.

The question. Lily Ray asked on Bluesky:

  • “Not sure if you can answer, but starting to hear a lot about creating separate markdown / JSON pages for LLMs and serving those URLs to bots.”

Google’s response. John Mueller from Google responded saying:

  • “I’m not aware of anything in that regard. In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

Recently, John Mueller also called the idea stupid, saying:

  • “Converting pages to markdown is such a stupid idea. Did you know LLMs can read images? WHY NOT TURN YOUR WHOLE SITE INTO AN IMAGE?” He was referring, of course, to converting your whole site to an MD file, which is a bit extreme, to say the least.

I collected many of John Mueller’s comments on this topic over here.

Bing’s response. Fabrice Canel from Microsoft Bing responded saying:

  • “Lily: really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”

Why we care. Some of us like to look for shortcuts to perform well on search engines, and now on the new AI search engines and LLMs. Generally, shortcuts, if they work, only work for a limited time. Plus, these shortcuts can have an unexpected negative effect.

As Lily Ray wrote on LinkedIn:

  • “I’ve had concerns the entire time about managing duplicate content and serving different content to crawlers than to humans, which I understand might be useful for AI search but directly violates search engines’ longstanding policies about this (basically cloaking).”

Your local rankings look fine. So why are calls disappearing?

For many local businesses, performance looks healthier than it is.

Rank trackers still show top-three positions. Visibility reports appear steady. Yet calls and website visits from Google Business Profiles are falling — sometimes fast.

This gap is becoming a defining feature of local search today.

Rankings are holding. Visibility and performance aren’t.

The alligator has arrived in local SEO.

The visibility crisis behind stable rankings

Across multiple U.S. industries, traditional local 3-packs are being replaced — or at least supplemented — by AI-powered local packs. These layouts behave differently from the map results we’ve optimized in the past.

Analysis from Sterling Sky, based on 179 Google Business Profiles, reveals a pattern that’s hard to ignore. Clicks-to-call are dropping sharply for Jepto-managed law firms.

When AI-powered packs replace traditional listings, the landscape shifts in four critical ways:

  • Shrinking real estate: AI packs often surface only two businesses instead of three.
  • Missing call buttons: Many AI-generated summaries remove instant click-to-call options, adding friction to the customer journey.
  • Different businesses appear: The businesses shown in AI packs often don’t match those in the traditional 3-pack.
  • Accelerated monetization of local search: When paid ads are present, traditional 3-packs increasingly lose direct call and website buttons, reducing organic conversion opportunities.

A fifth issue compounds the problem:

  • Measurement blind spots: Most rank trackers don’t yet report on AI local packs. A business may rank first in a 3-pack that many users never see.

AI local packs surfaced only 32% as many unique businesses as traditional map packs in 2026, according to Sterling Sky. In 88% of the 322 markets analyzed, the total number of visible businesses declined.

At the same time, paid ads continue to take over space once reserved for organic results, signaling a clear shift toward a pay-to-play local landscape.

What Google Business Profile data shows

The same pattern appears, especially in the U.S., where Google is aggressively testing new local formats, according to GMBapi.com data. Traditional local 3-pack impressions are increasingly displaced by:

  • AI-powered local packs.
  • Paid placements inside traditional map packs: Sponsored listings now appear alongside or within the map pack, pushing organic results lower and stripping listings of call and website buttons. This breaks organic customer journeys.
  • Expanded Google Ads units: Including Local Services Ads that consume space once reserved for organic visibility.

Impression trends still fluctuate due to seasonality, market differences, and occasional API anomalies. But a much clearer signal emerges when you look at GBP actions rather than impressions.

Mentions inside AI-generated results are still counted as impressions — even when they no longer drive calls, clicks, or visits.

Some fluctuations are driven by external factors. For example, the June drop ties back to a known Google API issue. Mobile Maps impressions also appear heavily influenced by large advertisers ramping up Google Ads later in the year.

There’s no way to segment these impressions by Google Ads, organic results, or AI Mode.

Even so, user behavior is clearly changing. Interaction rates are declining, with fewer direct actions taken from local listings.

Year-on-year comparisons in the U.S. suggest that while impression losses remain moderate and partially seasonal, GBP actions are disproportionately impacted.

As a point of comparison, data from the Dutch market — where SERP experimentation remains limited — shows far more stable action trends.

The pattern is clear. AI-driven SERP changes, expanding Google Ads, and the removal of call and website buttons from the Map Pack are shrinking organic real estate. Even when visibility looks intact, businesses have fewer chances to earn real user actions.

Local SEO is becoming an eligibility problem

Historically, local optimization centered on familiar ranking factors: proximity, relevance, prominence, reviews, citations, and engagement.

Today, another layer sits above all of them: eligibility.

Many businesses fail to appear in AI-powered local results not because they lack authority, but because Google’s systems decide they aren’t an appropriate match for the specific query context. Research from Yext and insights from practitioners like Claudia Tomina highlight the importance of alignment across three core signals:

  • Business name
  • Primary category
  • Real-world services and positioning

When these fundamentals are misaligned, businesses can be excluded from entire result types — no matter how well optimized the Google Business Profile itself may be.

How to future-proof local visibility

Surviving today’s zero-click reality means moving beyond reliance on a single, perfectly optimized Google Business Profile. Here’s your new local SEO playbook.

The eligibility gatekeeper

Failure to appear in local packs is now driven more by perceived relevance and classification than by links or review volume.

Hyper-local entity authority

AI systems cross-reference Reddit, social platforms, forums, and local directories to judge whether a business is legitimate and active. Inconsistent signals across these ecosystems quietly erode visibility.

Visual trust signals

High-quality, frequently updated photos (and, increasingly, video) are no longer optional. Google’s AI analyzes visual content to infer services, intent, and categorization.

Embrace the pay-to-play reality

It’s a hard truth, but Google Ads — especially Local Services Ads — are now critical to retaining prominent call buttons that organic listings are losing. A hybrid strategy that blends local SEO with paid search isn’t optional. It’s the baseline.

What this means for local search now

Local SEO is no longer a static directory exercise. Google Business Profiles still anchor local discoverability, but they now operate inside a much broader ecosystem shaped by AI validation, constant SERP experimentation, and Google’s accelerating push to monetize local search.

Discovery no longer hinges on where your GBP ranks against nearby competitors. Search systems — including Google’s AI-driven SERP features and large language models like ChatGPT and Gemini — are increasingly trying to understand what a business actually does, not just where it’s listed.

Success is no longer about being the most “optimized” profile. It’s about being widely verified, consistently active, and contextually relevant across the AI-visible ecosystem.

Our observations show little correlation between businesses that rank well in the traditional Map Pack and those favored by Google’s AI-generated local answers that are beginning to replace it. That gap creates a real opportunity for businesses willing to adapt.

In practice, this means pairing local input with central oversight.

Authentic engagement across multiple platforms, locally differentiated content, and real community signals must coexist with brand governance, data consistency, and operational scale. For single-location businesses with deep community roots, this is an advantage. Being genuinely discussed, recommended, and referenced in your local area — online and offline — gets you halfway there.

For agencies and multi-location brands, the challenge is to balance control with local nuance and ensure trusted signals extend beyond Google (e.g., Apple Maps, Tripadvisor, Yelp, Reddit, and other relevant review ecosystems). The real test is producing locally relevant content and citations at scale without losing authenticity.

Rankings may look stable. But performance increasingly lives somewhere else.

The full data. Local SEO in 2026: Why Your Rankings are Steady but Your Calls are Vanishing

Google releases February 2026 Discover core update

Google has released the February 2026 Discover core update, which focuses specifically on how content is surfaced in Google Discover.

  • “This is a broad update to our systems that surface articles in Discover,” Google wrote.

Google said the update is rolling out first to English-language users in the U.S. and will expand to all countries and languages in the coming months. The rollout may take up to two weeks to complete, Google added.

What is expected. Google said the Discover core update will improve the “experience in a few key ways,” including:

  • Showing users more locally relevant content from websites based in their country.
  • Reducing sensational content and clickbait.
  • Highlighting more in-depth, original, and timely content from sites with demonstrated expertise in a given area, based on Google’s understanding of a site’s content.

Because the update prioritizes locally relevant content, it may reduce traffic for non-U.S. websites that publish news for a U.S. audience. That impact may lessen or disappear as the update expands globally.

More details. Google added that many sites demonstrate deep knowledge across a wide range of subjects, and its systems are built to identify expertise on a topic-by-topic basis. As a result, any site can appear in Discover, whether it covers multiple areas or focuses deeply on a single topic. Google shared an example:

  • “A local news site with a dedicated gardening section could have established expertise in gardening, even though it covers other topics. In contrast, a movie review site that wrote a single article about gardening would likely not.”

Google said it will continue to “show content that’s personalized based on people’s creator and source preferences.”

During testing, Google found that “people find the Discover experience more useful and worthwhile with this update.”

Expect fluctuations. With this Discover core update, expect fluctuations in traffic from Google Discover.

  • “Some sites might see increases or decreases; many sites may see no change at all,” Google said.

Rollout. Google said it is “releasing this update to English language users in the US, and will expand it to all countries and languages in the months ahead.”

Why we care. If you get traffic from Google Discover, you may notice changes in that traffic in the coming days. If you need guidance, Google said its “general guidance about core updates applies, as does our Get on Discover help page.”

Google Ads no longer runs on keywords. It runs on intent.

Why Google Ads auctions now run on intent, not keywords

Most PPC teams still build campaigns the same way: pull a keyword list, set match types, and organize ad groups around search terms. It’s muscle memory.

But Google’s auction no longer works that way.

Search now behaves more like a conversation than a lookup. In AI Mode, users ask follow-up questions and refine what they’re trying to solve. AI Overviews reason through an answer first, then determine which ads support that answer.

In Google Ads, the auction isn’t triggered by a keyword anymore – it’s triggered by inferred intent.

If you’re still structuring campaigns around exact and phrase match, you’re planning for a system that no longer exists. The new foundation is intent: not the words people type, but the goals behind them.

An intent-first approach gives you a more durable way to design campaigns, creative, and measurement as Google introduces new AI-driven formats.

Keywords aren’t dead, but they’re no longer the blueprint.

The mechanics under the hood have changed

Here’s what’s actually happening when someone searches now.

Google’s AI uses a technique called “query fan out,” splitting a complex question into subtopics and running multiple concurrent searches to build a comprehensive response.
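
To make the mechanics concrete, here is a minimal sketch of the fan-out pattern in Python. It is purely illustrative: generate_subqueries and run_search are hypothetical stand-ins for the decomposition and retrieval steps Google runs internally at far greater scale.

```python
import asyncio

def generate_subqueries(question: str) -> list[str]:
    """Hypothetical decomposition step: split one complex
    question into narrower subtopics."""
    return [
        f"{question}: common causes",
        f"{question}: how to fix",
        f"products that help with: {question}",
    ]

async def run_search(subquery: str) -> dict:
    """Hypothetical stand-in for a single retrieval call."""
    await asyncio.sleep(0.1)  # simulate network latency
    return {"query": subquery, "results": [f"top result for {subquery!r}"]}

async def fan_out(question: str) -> list[dict]:
    # Run every subquery concurrently, then merge the pieces
    # into one composite response.
    return await asyncio.gather(
        *(run_search(q) for q in generate_subqueries(question))
    )

if __name__ == "__main__":
    for answer in asyncio.run(fan_out("why is my pool green")):
        print(answer["query"], "->", answer["results"][0])
```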

The auction happens before the user even finishes typing.

And crucially, the AI infers commercial intent from purely informational queries.

For instance, someone asks, “Why is my pool green?” They’re not shopping. They’re troubleshooting.

But Google’s reasoning layer detects a problem that products can solve and serves ads for pool-cleaning supplies alongside the explanation. While the user didn’t search for a product, the AI knew they would need one.

This auction logic is fundamentally different from what we’re accustomed to. It’s not matching your keyword to the query. It’s matching your offering to the user’s inferred need state, based on conversational context. 

If your campaign structure still assumes people search in isolated, transactional moments, you’re missing the journey entirely.

(Image: anatomy of a Google AI search query)

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

What ‘intent-first’ actually means

An intent-first strategy doesn’t mean you stop doing keyword research. It means you stop treating keywords as the organizing principle.

Instead, you map campaigns to the why behind the search.

  • What problem is the user trying to solve?
  • What stage of decision-making are they in?
  • What job are they hiring your product to do?

The same intent can surface through dozens of different queries, and the same query can reflect multiple intents depending on context.

“Best CRM” could mean either “I need feature comparisons” or “I’m ready to buy and want validation.” Google’s AI now reads that difference, and your campaign structure should, too.

This is more of a mental model shift than a tactical one.

You’re still building keyword lists, but you’re grouping them by intent state rather than match type.

You’re still writing ad copy, but you’re speaking to user goals instead of echoing search terms back at them.

What changes in practice

Once campaigns are organized around intent instead of keywords, the downstream implications show up quickly – in eligibility, landing pages, and how the system learns.

Campaign eligibility

If you want to show up inside AI Overviews or AI Mode, you need broad match keywords, Performance Max, or the newer AI Max for Search campaigns.

Exact and phrase match still work for brand defense and high-visibility placements above the AI summaries, but they won’t get you into the conversational layer where exploration happens.

Landing page evolution

It’s not enough to list product features anymore. If your page explains why and how someone should use your product (not just what it is), you’re more likely to win the auction.

Google’s reasoning layer rewards contextual alignment. If the AI built an answer about solving a problem, and your page directly addresses that problem, you’re in.

Asset volume and training data

The algorithm prioritizes rich metadata, multiple high-quality images, and optimized shopping feeds with every relevant attribute filled in.

Using Customer Match lists to feed the system first-party data teaches the AI which user segments represent the highest value.

That training affects how aggressively it bids for similar users.
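
For illustration, here is a minimal sketch of the preprocessing step for a Customer Match email list. Google's documentation requires member emails to be trimmed, lowercased, and SHA-256 hashed before upload; the customers.csv layout is an assumption, and the upload step itself is omitted.

```python
import csv
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim and lowercase the address, then SHA-256 hash it --
    the preprocessing Google documents for Customer Match uploads."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def build_upload_list(path: str) -> list[str]:
    # Assumes a one-column CSV of customer emails (hypothetical layout).
    with open(path, newline="") as f:
        return [normalize_and_hash(row[0]) for row in csv.reader(f) if row]

if __name__ == "__main__":
    print(normalize_and_hash("  Jane.Doe@Example.com "))
```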

Dig deeper: In Google Ads automation, everything is a signal in 2026

The gaps worth knowing about

Even as intent-first campaigns unlock new reach, there are still blind spots in reporting, budget constraints, and performance expectations you need to plan around.

No reporting segmentation

Google doesn’t provide visibility into how ads perform specifically in AI Mode versus traditional search.

You’re monitoring overall cost-per-conversion and hoping high-funnel clicks convert downstream, but you can’t isolate which placements are actually driving results.

The budget barrier

AI-powered campaigns like Performance Max and AI Max need meaningful conversion volume to scale effectively, often 30 conversions in 30 days at a minimum.

Smaller advertisers with limited budgets or longer sales cycles face what some call a “scissors gap,” in which they lack the data needed to train algorithms and compete in automated auctions.

Funnel position matters

AI Mode attracts exploratory, high-funnel behavior. Conversion rates won’t match bottom-of-the-funnel branded searches. That’s expected if you’re planning for it.

It becomes a problem when you’re chasing immediate ROAS without adjusting how you define success for these placements.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

Where to start

You don’t need to rebuild everything overnight.

Pick one campaign where you suspect intent is more complex than the keywords suggest. Map it to user goal states instead of search term buckets.

Test broad match in a limited way. Rewrite one landing page to answer the “why” instead of just listing specs.

The shift to intent-first is not a tactic – it’s a lens. And it’s the most durable way to plan as Google keeps introducing new AI-driven formats.

Google says AI search is driving an ‘expansionary moment’

Google Search is entering an “expansionary moment,” fueled by longer queries, more follow-up questions, and rising use of voice and images. That’s according to Alphabet’s executives who spoke on last night’s Q4 earnings call.

  • In other words: Google Search is shifting toward AI-driven experiences, with more conversations happening inside Google’s own interfaces.

Why we care. AI in Google Search is no longer an experiment. It’s a structural shift that’s changing how people search and reshaping discovery, visibility, and traffic across the web.

By the numbers. Alphabet’s Q4 advertising revenue totaled $82.284 billion, up 13.5% from $72.461 billion in 2024:

  • Google Search & other: $63.073 billion (up 16.7%)
  • YouTube: $11.383 billion (up 8.7%)
  • Google Network: $7.828 billion (down 1.5%)

Alphabet’s 2025 fiscal year advertising revenue totaled $294.691 billion, up 11.4% from $264.590 billion in 2024:

  • Google Search & other: $224.532 billion (up 13.4%)
  • YouTube: $40.367 billion (up 11.7%)
  • Google Network: $29.792 billion (down 1.9%)

AI Overviews and AI Mode are now core to Search. Alphabet CEO Sundar Pichai said Google pushed aggressively on AI-powered search features in Q4, highlighting how central they’ve become to the product.

  • “We shipped over 250 product launches, within AI mode and AI overviews just last quarter,” Pichai said.

This includes Google upgrading AI Overviews to its Gemini 3 model. He said the company has tightly linked AI Overviews with conversational search.

  • “We have also made the search experience more cohesive, ensuring the transition from an AI Overview to a conversation in AI Mode is completely seamless,” Pichai said.

AI is driving more Google Search usage. Executives repeatedly described AI-driven search as additive, saying it boosts overall usage rather than replacing traditional queries.

  • “Search saw more usage in Q4 than ever before, as AI continues to drive an expansionary moment,” Pichai said.

Engagement rises once users interact with AI-powered features, Google said.

  • “Once people start using these new experiences, they use them more,” Pichai said.

Changing search behavior. Google shared new data points showing how AI Mode is changing search behavior — making queries longer, more conversational, and increasingly multimodal.

  • “Queries in AI Mode are three times longer than traditional searches,” Pichai said.

Sessions are also becoming more conversational.

  • “We are also seeing sessions become more conversational, with a significant portion of queries in AI Mode, now leading to a follow-up question,” he said.

AI Mode is also expanding beyond text.

  • “Nearly one in six AI mode queries are now non-text using voice or images,” Pichai said.

Google highlighted continued distribution of visual search capabilities, noting that:

  • “Circle to Search is now available on over 580 million Android devices,” Pichai said.

Gemini isn’t cannibalizing Search. As the Gemini app continues to grow, Google says it hasn’t seen signs that users are abandoning Search.

  • “We haven’t seen any evidence of cannibalization,” Pichai said.

Instead, Google said users move fluidly between Search, AI Overviews, AI Mode, and the Gemini app.

  • “The combination of all of that, I think, creates an expansionary moment,” Pichai said.

How AI is reshaping local search and what enterprises must do now

Local search in the AI-first era: From rankings to recommendations in 2026

AI is no longer an experimental layer in search. It’s actively mediating how customers discover, evaluate, and choose local businesses, increasingly without a traditional search interaction. 

The real risk is data stagnation. As AI systems act on local data for users, brands that fail to adapt risk declining visibility, data inconsistencies, and loss of control over how locations are represented across AI surfaces.

Learn how AI is changing local search and what you can do to stay visible in this new landscape. 

How AI search is different from traditional search

(Image: traditional vs. AI search)

We are experiencing a platform shift where machine inference, not database retrieval, drives decisions. At the same time, AI is moving beyond screens into real-world execution.

AI now powers navigation systems, in-car assistants, logistics platforms, and autonomous decision-making.

In this environment, incorrect or fragmented location data does not just degrade search.

It leads to missed turns, failed deliveries, inaccurate recommendations, and lost revenue. Brands don’t simply lose visibility. They get bypassed.

Business implications in an AI-first, zero-click decision layer 

Local search has become an AI-first, zero-click decision layer.

Multi-location brands now win or lose based on whether AI systems can confidently recommend a location as the safest, most relevant answer.

That confidence is driven by structured data quality, Google Business Profile excellence, reviews, engagement, and real-world signals such as availability and proximity.

For 2026, the enterprise risk is not experimentation. It’s inertia.

Brands that fail to industrialize and centralize local data, content, and reputation operations will see declining AI visibility, fragmented brand representation, and lost conversion opportunities without knowing why.

Paradigm shifts to understand 

Here are four key ways the growth in AI search is changing the local journey:

  • AI answers are the new front door: Local discovery increasingly starts and ends inside AI answers and Google surfaces, where users select a business directly.
  • Context beats rankings: AI weighs conversation history, user intent, location context, citations, and engagement signals, not just position.
  • Zero-click journeys dominate: Most local actions now happen on-SERP (GBP, AI Overviews, service features), making on-platform optimization mission-critical.
  • Local search in 2026 is about being chosen, not clicked: Enterprises that combine entity intelligence, operational rigor by centralizing data and creating consistency, and on-SERP conversion discipline will remain visible and preferred as AI becomes the primary decision-maker.

Businesses that don’t grasp these changes quickly won’t fall behind quietly. They’ll be algorithmically bypassed.

Dig deeper: The enterprise blueprint for winning visibility in AI search

How AI composes local results (and why it matters)

AI systems build memory through entity and context graphs. Brands with clean, connected location, service, and review data become default answers.

Local queries increasingly fall into two intent categories: objective and subjective. 

  • Objective queries focus on verifiable facts:
    • “Is the downtown branch open right now?”
    • “Do you offer same-day service?”
    • “Is this product in stock nearby?”
  • Subjective queries rely on interpretation and sentiment:
    • “Best Italian restaurant near me”
    • “Top-rated bank in Denver”
    • “Most family-friendly hotel”

This distinction matters because AI systems treat risk differently depending on intent.

For objective queries, AI models prioritize first-party sources and structured data to reduce hallucination risk. These answers often drive direct actions like calls, visits, and bookings without a traditional website visit ever occurring.

For subjective queries, AI relies more heavily on reviews, third-party commentary, and editorial consensus. This data normally comes from various other channels, such as UGC sites.  

Dig deeper: How to deploy advanced schema at scale

Source authority matters

Industry research has shown that for objective local queries, brand websites and location-level pages act as primary “truth anchors.”

When an AI system needs to confirm hours, services, amenities, or availability, it prioritizes explicit, structured core data over inferred mentions.

Consider a simple example. If a user asks, “Find a coffee shop near me that serves oat milk and is open until 9,” the AI must reason across location, inventory, and hours simultaneously.

If those facts are not clearly linked and machine-readable, the brand cannot be confidently recommended.

This is why freshness, relevance, and machine clarity, powered by entity-rich structured data, help AI systems interpret the right response. 
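
As a sketch of what “clearly linked and machine-readable” can look like, here is the coffee shop example expressed as schema.org JSON-LD generated from Python. The business details are invented, and the exact properties you use should be validated against schema.org and Google’s structured data guidelines.

```python
import json

# Invented example; property names come from the schema.org vocabulary.
coffee_shop = {
    "@context": "https://schema.org",
    "@type": "CafeOrCoffeeShop",
    "name": "Example Coffee Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Denver",
        "addressRegion": "CO",
    },
    # Machine-readable hours let an AI verify "open until 9" directly.
    "openingHours": "Mo-Su 07:00-21:00",
    # An explicit offer links the location to the "oat milk" fact.
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {"@type": "Product", "name": "Oat milk latte"},
    },
}

print(json.dumps(coffee_shop, indent=2))
```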

Set yourself up for success

Ensure your data is fresh, relevant, and clear with these tips:

  • Build a centralized entity and context graph and syndicate it consistently across GBP, listings, schema, and content.
  • Industrialize local data and entities by developing one source of truth for locations, services, attributes, inventory – continuously audited and AI-normalized.
  • Make content AI-readable and hyper-local with structured FAQs, services, and how-to content by location, optimized for conversational and multimodal queries.
  • Treat GBP as a product surface with standardized photos, services, offers, and attributes — localized and continuously optimized.
  • Operationalize reviews and reputation by implementing always-on review generation, AI-assisted responses, and sentiment intelligence feeding CX and operations.
  • Adopt AI-first measurement and governance to track AI visibility, local answer share, and on-SERP conversions — not just rankings and traffic.

Dig deeper: From search to answer engines: How to optimize for the next era of discovery

The evolution of local search from listings management to an enterprise local journey

Historically, local search was managed as a collection of disconnected tactics: listings accuracy, review monitoring, and periodic updates to location pages.

That operating model is increasingly misaligned with how local discovery now works.

Local discovery has evolved into an end-to-end enterprise journey – one that spans data integrity, experience delivery, governance, and measurement across AI-driven surfaces.

Listings, location pages, structured data, reviews, and operational workflows now work together to determine whether a brand is trusted, cited, and repeatedly surfaced by AI systems.

Introducing local 4.0

Local 4.0 is a practical operating model for AI-first local discovery at enterprise scale. The focus of this framework is to ensure your brand is understandable, verifiable, and safe for AI systems to recommend.

To understand why this matters, it helps to look at how local has evolved:

The evolution of local
  • Local 1.0 – Listings and basic NAP consistency: The goal was presence – being indexed and included.
  • Local 2.0 – Map pack optimization and reviews: Visibility was driven by proximity, profile completeness, and reputation.
  • Local 3.0 – Location pages, content, and ROI: Local became a traffic and conversion driver tied to websites.
  • Local 4.0 – AI-mediated discovery and recommendation: Local becomes decision infrastructure, not a channel.

In practice, that means a brand must be:

  • Understandable by AI systems (clean, structured, connected data).
  • Verifiable across platforms (consistent facts, citations, reviews).
  • Safe to recommend in real-world decision contexts.

In an AI-mediated environment, brands are no longer merely present. They are selected, reused, or ignored – often without a click. This is the core transformation enterprise leaders must internalize as they plan for 2026.

Dig deeper: AI and local search: The new rules of visibility and ROI

The local 4.0 journey for enterprise brands

(Image: the four-step enterprise local journey)

Step 1: Discovery, consistency, and control

Discovery in an AI-driven environment is fundamentally about trust. When data is inconsistent or noisy, AI systems treat it as a risk signal and deprioritize it.

Core elements include:

  • Consistency across websites, profiles, directories, and attributes.
  • Listings as verification infrastructure.
  • Location pages as primary AI data sources.
  • Structured data and indexing as the machine clarity layer.

(Image: ensuring consistency across owned channels)

Why ‘legacy’ sources still matter

Listings act as verification infrastructure. Interestingly, research suggests that LLMs often cross-reference data against highly structured legacy directories (such as MapQuest or the Yellow Pages).

While human traffic to these sites has waned, AI systems utilize them as “truth anchors” because their data is rigidly structured and verified.

If your hours are wrong on MapQuest, an AI agent may downgrade its confidence in your Google Business Profile, viewing the discrepancy as a risk.

Discovery is no longer about being crawled. It’s about being trusted and reused. Governance matters because ownership, workflows, and data quality now directly affect brand risk.

Dig deeper: 4 pillars of an effective enterprise AI strategy 

Step 2: Engagement and freshness 

AI systems increasingly reward data that is current, efficiently crawled, and easy to validate.

Stale content is no longer neutral. When an AI system encounters outdated information – such as incorrect hours, closed locations, or unavailable services – it may deprioritize or avoid that entity in future recommendations.

For enterprises, freshness must be operationalized, not managed manually. This requires tightly connecting the CMS with protocols like IndexNow, so updates are discovered and reflected by AI systems in near real time.
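
As a concrete example, here is a minimal IndexNow submission a CMS publish hook could fire. The endpoint and payload shape follow the public IndexNow protocol (a JSON POST to the shared api.indexnow.org endpoint); the host, key, and URLs are placeholders.

```python
import requests

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def submit_updated_urls(host: str, key: str, urls: list[str]) -> int:
    """Notify IndexNow-participating search engines that these URLs
    changed. The key is a site-verification token you host at
    https://<host>/<key>.txt (placeholder values below)."""
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }
    response = requests.post(INDEXNOW_ENDPOINT, json=payload, timeout=10)
    return response.status_code  # 200 or 202 means the submission was accepted

if __name__ == "__main__":
    status = submit_updated_urls(
        host="www.example.com",
        key="your-indexnow-key",
        urls=["https://www.example.com/locations/denver"],
    )
    print("IndexNow response:", status)
```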

Beyond updates, enterprises must deliberately design for local-level engagement and signal velocity. Fresh, locally relevant content – such as events, offers, service updates, and community activity – should be surfaced on location pages, structured with schema, and distributed across platforms.

In an AI-first environment, freshness is trust, and trust determines whether a location is surfaced, reused, or skipped entirely.

Unlocking ‘trapped’ data

A major challenge for enterprise brands is “trapped” data: vital information locked inside PDFs, menu images, or static event calendars.

For example, a restaurant group may upload a PDF of their monthly live music schedule. To a human, this is visible. To a search crawler, it’s often opaque. In an AI-first era, this data must be extracted and structured.

If an agent cannot read the text inside the PDF, it cannot answer the query: “Find a bar with live jazz tonight.”
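
The extraction step itself is not the hard part, as this sketch with the open-source pypdf library shows. The filename is hypothetical, and real schedules usually need more robust parsing (or OCR when the PDF is just an image).

```python
from pypdf import PdfReader  # pip install pypdf

def extract_pdf_text(path: str) -> str:
    """Pull the raw text layer out of a PDF. Image-only pages
    yield empty strings and would need OCR instead."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

if __name__ == "__main__":
    # Hypothetical file: a venue's monthly live-music schedule.
    text = extract_pdf_text("live-music-schedule.pdf")
    # From here, parse dates and acts and republish them as
    # schema.org Event markup so agents can answer queries like
    # "find a bar with live jazz tonight."
    print(text[:500])
```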

Key focus areas include:

  • Continuous content freshness.
  • Efficient indexing and crawl pathways.
  • Dynamic local updates such as events, availability, and offerings.

At enterprise scale, manual workflows break. Freshness is no longer tactical. It’s a competitive requirement.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Step 3: Experience and local relevance

AI does not select the best brand. It selects the location that best resolves intent.

Generic brand messaging consistently loses out to locally curated content. AI retrieval is context-driven and prioritizes specific attributes such as parking availability, accessibility, accepted insurance, or local services.

This exposes a structural problem for many enterprises: information is fragmented across systems and teams.

Solving AI-driven relevance requires organizing data as a context graph. This means connecting services, attributes, FAQs, policies, and location details into a coherent, machine-readable system that maps to customer intent rather than departmental ownership.
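
Here is a minimal sketch of what one node in such a context graph might look like in Python. The fields and example values are invented; the point is that services, attributes, FAQs, and policies hang off the location entity in a single machine-readable structure instead of living in separate departmental systems.

```python
from dataclasses import dataclass, field

@dataclass
class LocationNode:
    """One entity in a hypothetical context graph: every fact an AI
    might need to resolve intent is attached to the location itself."""
    location_id: str
    services: list[str] = field(default_factory=list)
    attributes: dict[str, str] = field(default_factory=dict)  # parking, accessibility, etc.
    faqs: dict[str, str] = field(default_factory=dict)        # question -> answer
    policies: dict[str, str] = field(default_factory=dict)

branch = LocationNode(
    location_id="denver-downtown",
    services=["notary", "same-day appointments"],
    attributes={"parking": "free lot", "wheelchair_accessible": "yes"},
    faqs={"Do you accept walk-ins?": "Yes, before 4 p.m."},
    policies={"insurance": "Accepts Aetna and Cigna"},
)

# Intent-driven lookup: answer "is there parking?" from the graph,
# not from a department-owned spreadsheet.
print(branch.attributes.get("parking"))
```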

Enterprises should also consider omnichannel marketing approaches to achieve consistency.   

Dig deeper: Integrating SEO into omnichannel marketing for seamless engagement

Step 4: Measurement that executives can trust

As AI-driven and zero-click journeys increase, traditional SEO metrics lose relevance. Attribution becomes fragmented across search, maps, AI interfaces, and third-party platforms.

Precision tracking gives way to directional confidence.

Executive-level KPIs should focus on:

  • AI visibility and recommendation presence.
  • Citation accuracy and consistency.
  • Location-level actions (calls, directions, bookings).
  • Incremental revenue or lead quality lift.

The goal is not perfect attribution. It’s confidence that local discovery is working and revenue risk is being mitigated.

Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026

Why local 4.0 needs to be the enterprise response

Fragmentation is a material revenue risk. When local data is inconsistent or disconnected, AI systems have lower confidence in it and are less likely to reuse or recommend those locations.

Treating local data as a living, governed asset and establishing a single, authoritative source of truth early prevents incorrect information from propagating across AI-driven ecosystems and avoids the costly remediation required to fix issues after they scale.

Dig deeper: How to select a CMS that powers SEO, personalization and growth

Local 4.0 is integral to the localized AI discovery flywheel

(Image: the AI discovery flywheel)

AI-mediated discovery is becoming the default interface between customers and local brands.

Local 4.0 provides a framework for control, confidence, and competitiveness in that environment. It aligns data, experience, and governance around how AI systems actually operate through reasoning, verification, and reuse.

This is not about chasing AI trends. It’s about ensuring your brand is correctly represented and confidently recommended wherever customers discover you next.

Why SEO teams need to ask ‘should we use AI?’ not just ‘can we?’

(Image: human judgment vs. machine output)

Right now, it’s hard to find a marketing conversation that doesn’t include two letters: AI.

SEOs, strategists, and marketing leaders everywhere are asking the same question in different ways:

  • How do we use AI to cut manpower, streamline work, move faster, and boost efficiency?

Much of that thinking makes sense. If you run a business, you can’t ignore a tool that turns hours of grunt work into minutes. You’d be foolish to try.

But we’re spending too much time asking, “Can AI do this?” and not enough time asking, “Should AI do this?”

Once the initial excitement fades, some uncomfortable questions show up.

  • If every title tag, meta description, landing page, and blog post comes from AI, where does differentiation come from?
  • If every outreach email, proposal, and report is machine-generated, what happens to trust?
  • If AI agents start talking to other AI agents on our behalf, what happens to judgment, creativity, and the human side of business?

This isn’t anti-AI. I use AI. My team uses AI. You probably do, too.

This is about using AI well, using it intentionally, and not automating so much that you accidentally automate away the things that make you valuable.

What ‘automating too much’ looks like in SEO

The slippery part of automation? It rarely starts with big decisions. It starts with small ones that feel harmless.

First, you automate the boring admin. Then the repetitive writing. Then the analysis. Then client communication. Then, quietly, decision-making.

In SEO, “too much” often looks like this:

  • Meta titles and descriptions generated at scale, with barely any review.
  • Content briefs created by AI from SERP summaries, then passed straight to an AI writer for drafting.
  • On-page changes rolled out across templates because “the model recommended it.”
  • Link building outreach written by AI, sent at volume, and ignored at volume.
  • Reporting that is technically accurate but disconnected from what the business actually cares about.

If this sounds harsh, that’s because it happens fast.

The promise is always “we’ll save time.” What usually happens is you save time and lose something else. Most often, you lose the sense that your marketing has a brain behind it.

The sameness problem: if everyone uses the same tools, who wins?

This is the question I keep coming back to.

If everyone uses AI to create everything, the web fills up with content that looks and sounds the same. It might be polished. It might even be technically “good.” But it becomes interchangeable.

That creates two problems:

  • Users get bored. They read one page, then another, and it’s the same advice dressed up with slightly different words. You might win a click. You’ll struggle to build a relationship.
  • Search engines and language models still need ways to tell you apart. When content converges, the real differentiators become things like:
    • Brand recognition.
    • Original data or firsthand experience.
    • Clear expertise and accountability.
    • Signals that other people trust you.
    • Distinct angles and opinions.

The irony?

Heavy automation often strips those things out. It produces “fine” content quickly, but it also produces content that could have come from anyone.

If your goal is authority, being indistinguishable isn’t neutral. It’s a liability.

When AI starts quoting AI, reality gets blurry

This is where things start to get strange.

We’re already heading into a world where AI tools summarize content, other tools re-summarize those summaries, and someone publishes the result as if it’s new insight. It becomes a loop.

If you’ve ever asked a tool to write a blog post and it felt familiar but hard to place, that’s usually why. It isn’t creating knowledge from scratch. It’s remixing patterns.

Now imagine that happening at scale. Search engines crawl pages. Models summarize them. Businesses publish new pages based on those summaries. Agents use those pages to answer questions. Repeat.

Remove humans from the loop for too long, and you risk an internet that feels like it’s talking to itself. Plenty of words. Very little substance.

From an SEO perspective, that’s a serious problem. When the web floods with similar information, value shifts away from “who wrote the neatest explanation” and toward “who has something real to add.”

That’s why I keep coming back to the same point. The question isn’t “can AI do this?” It’s “should we use AI here, or should a human own this?”

The creativity and judgment problem

There’s a quieter risk we don’t talk about enough.

If you let AI write every proposal, every contract, every strategy deck, and every content plan, you start outsourcing judgment.

You may still be the one who clicks “generate” and “send,” but the thinking has moved somewhere else.

Over time, you lose the habit of critical thinking. Not because you can’t think, but because you stop practicing. It’s the same way GPS makes you worse at directions. You can still drive, but you stop building the skill.

In SEO, judgment is one of our most valuable assets. Knowing:

  • What to prioritize.
  • What to ignore.
  • When a dip is normal and when it is a warning sign.
  • When the data is lying because the tracking is broken.

AI can support decisions, but it can’t own them. If you automate that away, you risk becoming a delivery machine instead of a strategist. And authority doesn’t come from delivery.

The trust problem: clients do not just buy outputs

Here’s a reality check agency owners feel in their bones.

Clients don’t stay because you can do the work. They stay because they:

  • Trust you.
  • Feel looked after.
  • Believe you have their best interests at heart.
  • Like working with you.

It’s business, but it’s still human.

When you automate too much of the client experience, your service can start to feel cheap. Not in price, but in care.

  • If every email sounds generated, clients notice.
  • If every report is a generic summary with no opinion, clients notice.
  • If every deliverable looks like it came straight from a tool, clients start asking why they are paying you instead of the tool.

The same thing happens in-house. Stakeholders want confidence. They want interpretation. They want someone to say, “This is what matters, and this is what we should do next.”

AI is excellent at producing outputs. It isn’t good at reassurance, context, or accountability. Those are human services, even when the work is digital.

The accuracy and responsibility problem

If you automate content production without proper oversight, eventually you’ll publish something wrong.

Sometimes it’s small. A definition that is slightly off. A stat that is outdated. A recommendation that doesn’t fit the situation.

Sometimes it’s serious. Incorrect medical advice. Legal misinformation. Financial guidance that should never have gone live.

Even in low-risk niches, accuracy matters. When your content is wrong, trust erodes. When it’s wrong with confidence, trust disappears faster.

The more you scale AI output, the harder quality control becomes. That is where automation turns dangerous. You can produce content at speed, but you may not spot the decay until performance drops or, worse, a customer calls it out publicly.

Authority is fragile. It takes time to build and seconds to lose. Automation increases that risk because mistakes don’t stay small. They scale.

The confidentiality problem that nobody wants to admit

This is the part that often gets brushed aside in the rush to “implement AI.”

SEO and marketing work regularly involves sensitive information—sales data, customer feedback, conversion rates, pricing strategies, internal documents, and product roadmaps. Paste that into an AI tool without thinking, and you create risk.

Sometimes that risk is contractual. Sometimes it’s regulatory. Sometimes it’s reputational.

Even if your AI tools are configured securely, you still need an internal policy. Nothing fancy. Just clear rules on what can and can’t be shared, who can approve it, and how outputs are reviewed.

If you’re building authority as a brand, the last thing you want is to lose trust because you treated sensitive information casually in the name of efficiency.

The window of opportunity, and why it will not last forever

Right now, there’s a window. Most businesses are still learning how to use AI well. That gives brands that move carefully a real edge.

That window won’t stay open.

In a few years, the market will be flooded with AI-generated content and AI-assisted services. The tools will be cheaper and more accessible. The baseline will rise.

When that happens, “we use AI” won’t be a differentiator anymore. It’ll sound like saying, “we use email.”

The real differentiator will be how you use it.

Do you use AI to churn out more of the same?

Or do you use it to buy back time so you can create things others can’t?

That’s the opportunity. AI can strip out the grunt work and give you time back. What you do with that time is where authority is built.

Where SEO fits in: less doing, more directing

I suspect the SEO role is shifting.

Not away from execution entirely, but away from being valued purely for output. When a tool can generate a content draft, the value shifts to the person who can judge whether it’s the right draft — for the right audience, with the right angle, on the right page, at the right time.

In other words, the SEO becomes a director, not just a doer.

That looks like this:

  • Knowing which content is worth creating—and which isn’t.
  • Understanding the user journey and where search fits into it.
  • Building content strategies anchored in real business value.
  • Designing workflows that protect quality while increasing speed.
  • Helping teams use AI responsibly without removing human judgment.

If you’re trying to build authority, this shift is good news. It rewards expertise and judgment. It rewards people who can see the bigger picture and make decisions that go beyond “more content.”

The upside: take away the grunt work, keep the thinking

AI is excellent at certain jobs. And if we’re honest, a lot of SEO work is repetitive and draining. That’s where AI shines.

AI can help you:

  • Summarize and cluster keyword research faster.
  • Create first drafts of meta descriptions that a human then edits properly.
  • Turn messy notes into a structure you can actually work with.
  • Generate alternative title options quickly so you can choose the strongest one.
  • Create scripts for short videos or webinars from existing material.
  • Analyze patterns in performance data and flag areas worth investigating.
  • Speed up technical tasks like regex, formulas, documentation, and QA checklists.

This is the sweet spot. Use AI to reduce friction and strip out the boring work. Then spend your time on the things that actually create differentiation.

In my experience, the best use of AI in SEO isn’t replacing humans. It’s giving humans more time to do the human parts properly.

Personalization: The dream and the risk

There’s a lot of talk about personalized results. A future where each person gets answers tailored to their preferences, context, history, and intent.

That future may arrive. In some ways, it’s already here. Search results and recommendations aren’t neutral. They’re shaped by behavior and patterns.

Personalization could be great for users. It also raises the bar for brands.

If every user sees a slightly different answer, it gets harder to compete with generic content. Generic content fades into the background because it isn’t specific enough to be chosen.

That brings us back to the same truth: unique value wins. Real expertise wins. Original experience wins. Trust wins.

Automation can help you scale personalization — but only if the thinking behind it is solid. Automate personalization badly, and all you get is faster irrelevance.

A practical way to decide what should be automated

So how do we move from “can AI do this?” to “should AI do this?”

The better approach is to decide what must stay human, what can be assisted, and what can be automated safely.

These are the questions I use when making that call:

  • What happens if this is wrong? If the cost of being wrong is high, a human needs to own it.
  • Is this customer-facing? The more visible it is, the more it should sound like you and reflect your judgment.
  • Does this require empathy or nuance? If yes, automate less.
  • Does this require your unique perspective? If yes, automate less.
  • Is this reversible? If it’s easy to undo, you can afford to experiment.
  • Does it involve sensitive information? If yes, tighten control.
  • Will automation make us look like everyone else? If yes, be cautious. You may be trading speed for differentiation.

These questions are simple, but they lead to far better decisions than, “the tool can do it, so let’s do it.”

What I would and would not automate in SEO

To make this practical, here’s where I’d draw the line for most teams.

I’d happily automate or heavily assist:

  • Early-stage research, like summarizing competitors, clustering topics, and extracting themes from customer feedback.
  • Drafting tasks that a human will edit, such as meta descriptions, outlines, and first drafts of support content.
  • Repetitive admin work, including documentation, tagging, and reporting templates.
  • Technical helper tasks, like formulas, regex, and scripts—as long as a human reviews the output.

I would not fully automate:

  • Strategy: Deciding what matters and why.
  • Positioning: The angle that gives your brand a clear point of view.
  • Final customer-facing messaging: Especially anything that represents your voice and level of care.
  • Claims that require evidence: If you can’t prove it, don’t publish it.
  • Client relationships: The conversations, reassurance, and trust-building that keep people with you.

If you automate those, you may increase output, but you’ll often decrease loyalty. And loyalty is a form of authority.

The real risk is not AI. It is thoughtlessness.

The biggest risk isn’t that AI will take your job. It’s that you use it in a way that makes you replaceable.

If your brand turns into a machine that churns out generic output, it becomes hard to care.

  • Hard for search engines to prioritize.
  • Hard for language models to cite.
  • Hard for clients to justify paying for.

If you want to build authority, you have to protect what makes you different. Your judgment. Your experience. Your voice. Your evidence. Your relationships.

AI can help if you use it to create space for better thinking. It can hurt if you use it to avoid thinking altogether.

Human involvement

It’s easy to get excited about AI doing everything. Saving on headcount. Producing output 24/7. Removing bottlenecks.

But the more important question is what you lose when you remove too much human involvement. Do you lose:

  • Differentiation?
  • Trust?
  • The ability to think critically?
  • The relationships that keep clients loyal?

For most of us, the goal isn’t more marketing. The goal is marketing that works — for people we actually want to work with — in a way we can be proud of.

So yes, ask, “Can AI do this?” It’s a useful question.

Then ask, “Should AI do this?” That’s the one that protects your authority.

And if you’re unsure, start small. Automate the grunt work. Keep the thinking. Keep the voice. Keep the care.

That’s how you get the best of AI without automating away what makes you valuable.

How first-party data drives better outcomes in AI-powered advertising

As AI-driven bidding and automation transform paid media, first-party data has become the most powerful lever advertisers control.

In this conversation with Search Engine Land, Julie Warneke, founder and CEO of Found Search Marketing, explained why first-party data now underpins profitable advertising — no matter how Google’s position on third-party cookies evolves.

What first-party data really is — and isn’t

First-party data is customer information that an advertiser owns directly, usually housed in a CRM. It includes:

  • Lead details.
  • Purchase history.
  • Revenue.
  • Customer value collected through websites, forms, or physical locations.

It doesn’t include platform-owned or browser-based data that advertisers can’t fully control.

Why first-party data matters more than ever

Digital advertising has moved from paying for impressions, to clicks, to actions — and now to outcomes. The real goal is no longer conversions alone, but profitable conversions, according to Warneke.

As AI systems process far more signals than humans can handle, advertisers who supply high-quality customer data gain a clear advantage.

CPCs may rise — but profitability can too

Rising cost-per-clicks are a fact of paid media. First-party data doesn’t always reduce CPCs, but it improves what matters more: conversion quality, revenue, and return on ad spend.

By optimizing for downstream business outcomes instead of surface-level metrics, advertisers can justify higher costs with stronger results.

How first-party data improves ROAS

When advertisers feed Google data tied to revenue and customer value, AI bidding systems can prioritize users who resemble high-value customers — often using signals far beyond demographics or geography.

The result is traffic that converts better, even if advertisers never see or control the underlying signals.

Performance Max leads the way

Among campaign types, Performance Max (PMax) currently benefits the most from first-party data activation.

PMax performs best when advertisers move away from manual optimizations and instead focus on supplying accurate, consistent data, then let the system learn, Warneke noted.

SMBs aren’t locked out — but they need the right setup

Small and mid-sized businesses aren’t disadvantaged by limited first-party data volume. Warneke shared examples of success with customer lists as small as 100 records.

The real hurdle for SMBs is infrastructure — specifically proper tracking, consent management, and reliable data pipelines.

The biggest mistakes advertisers are making

Two issues stand out:

  • Weak data capture: Many brands still depend on browser-side tracking, which increasingly fails — especially on iOS.
  • Broken feedback loops: Others upload CRM data sporadically instead of building continuous data flows that let AI systems learn and improve over time.

What marketers should do next

Warneke’s advice: Step back and audit how data is captured, stored, and sent back to platforms, then improve it incrementally.

There’s no need to overhaul everything at once or risk the entire budget. Even testing with 5–7% of spend can create a learning roadmap that delivers long-term gains.

Bottom line

AI optimizes toward the signals it receives — good or bad. Advertisers who own and refine their first-party data can shape outcomes in their favor, while those who don’t risk being optimized into inefficiency.


Google Ads tightens access control with multi-party approval

Google Ads introduced multi-party approval, a security feature that requires a second administrator to approve high-risk account actions. These actions include adding or removing users and changing user roles.

Why we care. As ad accounts grow in size and value, access control becomes a serious risk. One unauthorized, malicious, or accidental change can disrupt campaigns, permissions, or billing in minutes. Multi-party approval reduces that risk by requiring a second admin to approve high-impact actions. It adds strong protection without slowing daily work. For agencies and large teams, it prevents costly mistakes and significantly improves account security.

How it works. When an admin initiates a sensitive change, Google Ads automatically creates an approval request. Other eligible admins receive an in-product notification. One of them must approve or deny the request within 20 days. If no one responds, the request expires, and the change is blocked.

Status tracking. Each request is clearly labeled as Complete, Denied, or Expired. This makes it easy to see what was approved and what didn’t go through.

Where to find it. You can view and manage approval requests from Access and security within the Admin menu.

The bigger picture. The update reflects growing concern around account security, especially for agencies and large advertisers managing multiple users, partners, and permissions. With advertisers recently reporting costly hacks, this is a welcome update.

The Google Ads help doc. About Multi-party approval for Google Ads

In Google Ads automation, everything is a signal in 2026

In 2015, PPC was a game of direct control. You told Google exactly which keywords to target, set manual bids at the keyword level, and capped spend with a daily budget. If you were good with spreadsheets and understood match types, you could build and manage 30,000-keyword accounts all day long.

Those days are gone.

In 2026, platform automation is no longer a helpful assistant. It’s the primary driver of performance. Fighting that reality is a losing battle. 

Automation has leveled the playing field and, in many cases, given PPC marketers back their time. But staying effective now requires a different skill set: understanding how automated systems learn and how your data shapes their decisions.

This article breaks down how signals actually work inside Google Ads, how to identify and protect high-quality signals, and how to prevent automation from drifting into the wrong pockets of performance.

Automation runs on signals, not settings

Google’s automation isn’t a black box where you drop in a budget and hope for the best. It’s a learning system that gets smarter based on the signals you provide. 

Feed it strong, accurate signals, and it will outperform any manual approach.

Feed it poor or misleading data, and it will efficiently automate failure.

That’s the real dividing line in modern PPC. AI and automation run on signals. If a system can observe, measure, or infer something, it can use it to guide bidding and targeting.

Google’s official documentation still frames “audience signals” primarily as the segments advertisers manually add to products like Performance Max or Demand Gen. 

That definition isn’t wrong, but it’s incomplete. It reflects a legacy, surface-level view of inputs and not how automation actually learns at scale.

Dig deeper: Google Ads PMax: The truth about audience signals and search themes

What actually qualifies as a signal?

In practice, every element inside a Google Ads account functions as a signal. 

Structure, assets, budgets, pacing, conversion quality, landing page behavior, feed health, and real-time query patterns all shape how the AI interprets intent and decides where your money goes. 

Nothing is neutral. Everything contributes to the model’s understanding of who you want, who you don’t, and what outcomes you value.

So when we talk about “signals,” we’re not just talking about first-party data or demographic targeting. 

We’re talking about the full ecosystem of behavioral, structural, and quality indicators that guide the algorithm’s decision-making.

Here’s what actually matters:

  • Conversion actions and values: These are 100% necessary. They tell Google Ads what defines success for your specific business and which outcomes carry the most weight for your bottom line.
  • Keyword signals: These indicate search intent. Based on research shared by Brad Geddes at a recent Paid Search Association webinar, even “low-volume” keywords serve as vital signals. They help the system understand the semantic neighborhood of your target audience.
  • Ad creative signals: This goes beyond RSA word choice. I believe the platform now analyzes the environment within your images. If you show a luxury kitchen, the algorithm identifies those visual cues to find high-end customers. I base this hypothesis on my experience running a YouTube channel. I’ve watched how the algorithm serves content based on visual environments, not just metadata.
  • Landing page signals: Beyond copy, elements like color palettes, imagery, and engagement metrics signal how well your destination aligns with the user’s initial intent. This creates a feedback loop that tells Google whether the promise of the ad was kept.
  • Bid strategies and budgets: Your bidding strategy is another core signal for the AI. It tells the system whether you’re prioritizing efficiency, volume, or raw profit. Your budget signals your level of market commitment. It tells the system how much permission it has to explore and test.

In 2026, we’ve moved beyond the daily cap mindset. With the expansion of campaign total budgets to Search and Shopping, we are now signaling a total commitment window to Google.

In the announcement, UK retailer Escentual.com used this approach to signal a fixed promotional budget, resulting in a 16% traffic lift because the AI was given permission to pace spend based on real-time demand rather than arbitrary 24-hour cycles.

All of these elements function as signals because they actively shape the ad account’s learning environment.

Anything the ad platform can observe, measure, or infer becomes part of how it predicts intent, evaluates quality, and allocates budget. 

If a component influences who sees your ads, how they behave, or what outcomes the algorithm optimizes toward, it functions as a signal.

The auction-time reality: Finding the pockets

To understand why signal quality has become critical, you need to understand what’s actually happening every time someone searches.

Google’s auction-time bidding doesn’t set one bid for “mobile users in New York.” 

It calculates a unique bid for every single auction based on billions of signal combinations at that precise millisecond. This considers the user, not simply the keyword.

We are no longer looking for “black-and-white” performance.

We are finding pockets of performance: users who are predicted to complete the outcomes we define as goals in the platform.

The AI evaluates the specific intersection of a user on iOS 17, using Chrome, in London, at 8 p.m., who previously visited your pricing page. 

Because the bidding algorithm cross-references these attributes, it generates a precise bid. This level of granularity is impossible for humans to replicate. 

But this is also the “garbage in, garbage out” reality. Without quality signals, the system is forced to guess.
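
To make the “pockets” idea concrete, here is a toy sketch of what per-auction bid logic looks like conceptually. This is purely illustrative and not Google's actual model: the signal names, multipliers, baseline conversion rate, and target CPA are all hypothetical.

```python
# Illustrative only: a toy model of auction-time bidding, not Google's
# actual system. All signal names and weights below are hypothetical.

# One auction is described by a specific combination of observable signals.
auction_signals = {
    "device": "mobile",
    "location": "London",
    "hour": 20,
    "visited_pricing_page": True,
}

# Hypothetical learned multipliers applied to a baseline conversion rate.
SIGNAL_MULTIPLIERS = {
    ("device", "mobile"): 0.9,
    ("location", "London"): 1.2,
    ("hour", 20): 1.1,
    ("visited_pricing_page", True): 2.5,
}

def predicted_cvr(signals: dict, baseline: float = 0.02) -> float:
    """Combine per-signal multipliers into a per-auction conversion estimate."""
    cvr = baseline
    for key, value in signals.items():
        cvr *= SIGNAL_MULTIPLIERS.get((key, value), 1.0)
    return cvr

def auction_time_bid(signals: dict, target_cpa: float = 80.0) -> float:
    """Bid up to the expected value of this click at your target CPA."""
    return predicted_cvr(signals) * target_cpa

print(f"Bid for this exact auction: ${auction_time_bid(auction_signals):.2f}")
```

The point of the toy model: remove the pricing-page visit and the bid drops by more than half, which is exactly the kind of per-auction discrimination no manual bid table can replicate.
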

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

The signal hierarchy: What Google actually listens to

If every element in a Google Ads account functions as a signal, we also have to acknowledge that not all signals carry equal weight.

Some signals shape the core of the model’s learning. Others simply refine it.

Based on my experience managing accounts spending six and seven figures monthly, this is the hierarchy that actually matters.

Conversion signals reign supreme

Your tracking is the most important data point. The algorithm needs a baseline of 30 to 50 conversions per month to recognize patterns. For B2B advertisers, this often requires shifting from high-funnel form fills to down-funnel CRM data.

As Andrea Cruz noted in her deep dive on Performance Max for B2B, optimizing for a “qualified lead” or “appointment booked” is the only way to ensure the AI doesn’t just chase cheap, irrelevant clicks.

Enhanced conversions and first-party data

We are witnessing a “death by a thousand cuts,” where browser restrictions from Safari and Firefox, coupled with aggressive global regulations, have dismantled the third-party cookie. 

Without enhanced conversions or server-side tracking, you are essentially flying blind, because the invisible trackers of the past are being replaced by a model where data must be earned through transparent value exchanges.

First-party audience signals

Your customer lists tell Google, “Here is who converted. Now go find more people like this.” 

Quality trumps quantity here. A stale or tiny list won’t be as effective as a list that is updated in real time.

Custom segments provide context

Using keywords and URLs to build segments creates a digital footprint of your ideal customer. 

This is especially critical in niche industries where Google’s prebuilt audiences are too broad or too generic.

These segments help the system understand the neighborhood your best prospects live in online.

To simplify this hierarchy, I’ve mapped out the most common signals used in 2026 by their actual weight in the bidding engine:

| Signal category | Specific input (the “what”) | Weight/impact | Why it matters in 2026 |
| --- | --- | --- | --- |
| Primary (Truth) | Offline conversion imports (CRM) | Critical | Trains the AI on profit, not just “leads.” |
| Primary (Truth) | Value-based bidding (tROAS) | Critical | Signals which products actually drive margin. |
| Secondary (Context) | First-party customer match lists | High | Provides a “Seed Audience” for the AI to model. |
| Secondary (Context) | Visual environment (images/video) | High | AI scans images to infer user “lifestyle” and price tier. |
| Tertiary (Intent) | Low-volume/long-tail keywords | Medium | Defines the “semantic neighborhood” of the search. |
| Tertiary (Intent) | Landing page color and speed | Medium | Signals trust and relevance feedback loops. |
| Pollutant (Noise) | “Soft” conversions (scrolls/clicks) | Negative | Dilutes intent. Trains AI to find “cheap clickers.” |

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Beware of signal pollution

Signal pollution occurs when low-quality, conflicting, or misleading signals contaminate the data Google’s AI uses to learn. 

It’s what happens when the system receives signals that don’t accurately represent your ideal client, your real conversion quality, or the true intent you want to attract in your ad campaigns.

Signal pollution doesn’t just “confuse” the bidding algorithm. It actively trains it in the wrong direction. 

It dilutes your high-value signals, expands your reach into low-intent audiences, and forces the model to optimize toward outcomes you don’t actually want.

Common sources include:

  • Bad conversion data, including junk leads, unqualified form fills, and misfires.
  • Overly broad structures that blend high- and low-intent traffic.
  • Creative that attracts the wrong people.
  • Landing page behavior that signals low relevance or low trust.
  • Budget or pacing patterns that imply you’re willing to pay for volume over quality.
  • Feed issues that distort product relevance.
  • Audience segments that don’t match your real buyer.

These sources create the initial pollution. But when marketers try to compensate for underperformance by feeding the machine more data, the root cause never gets addressed. 

That’s when soft conversions like scrolls or downloads get added as primary signals, and none of them correlate to revenue.

Like humans, algorithms focus on the metrics they are fed.

If you mix soft signals with high-intent revenue data, you dilute the profile of your ideal customer. 

You end up winning thousands of cheap, low-value auctions that look great in a report but fail to move the needle on the P&L. 

Your job is to be the gatekeeper, ensuring only the most profitable signals reach the bidding engine.

When signal pollution takes hold, the algorithm doesn’t just underperform. The ads start drifting toward the wrong users, and performance begins to decline. 

Before you can build a strong signal strategy, you have to understand how to spot that drift early and correct it before it compounds.

How to detect and correct algorithm drift

Algorithm drift happens when Google’s automation starts optimizing toward the wrong outcomes because the signals it’s receiving no longer match your real advertising goals. 

Drift doesn’t show up as a dramatic crash. It shows up as a slow shift in who you reach, what queries you win, and which conversions the system prioritizes. It looks like a gradual deterioration of lead quality.

To stay in control, you need a simple way to spot drift early and correct it before the machine locks in the wrong pattern.

Early warning signs of drift include:

  • A sudden rise in cheap conversions that don’t correlate with revenue.
  • A shift in search terms toward lower-intent or irrelevant queries.
  • A drop in average order value or lead quality.
  • A spike in new-user volume with no matching lift in sales.
  • A campaign that looks healthy in-platform but feels wrong in the CRM or P&L.

These are all indicators that the system is optimizing toward the wrong signals.
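
One practical way to catch drift early is to line platform-reported conversions up against CRM revenue week over week. Here is a minimal sketch, assuming a joined export; the column names and the 15%/10% thresholds are hypothetical and should be tuned to your account.

```python
import pandas as pd

# Hypothetical weekly export joining platform stats with CRM revenue.
df = pd.DataFrame({
    "week":        ["W1", "W2", "W3", "W4"],
    "conversions": [120, 150, 210, 290],            # platform-reported
    "revenue":     [24000, 25000, 24500, 23800],    # from the CRM
})

df["revenue_per_conversion"] = df["revenue"] / df["conversions"]

# Drift pattern: conversion volume climbing while revenue per
# conversion falls -- cheap conversions that don't map to money.
conv_growth = df["conversions"].pct_change()
rpc_change = df["revenue_per_conversion"].pct_change()
df["drift_flag"] = (conv_growth > 0.15) & (rpc_change < -0.10)

print(df[["week", "conversions", "revenue_per_conversion", "drift_flag"]])
```

In this fabricated example, weeks two through four all flag: the account looks healthier in-platform every week while each conversion is quietly worth less.
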

To correct drift without resetting learning:

  • Tighten your conversion signals: Remove soft conversions, misfires, or anything that doesn’t map to revenue. The machine can’t unlearn bad data, but you can stop feeding it.
  • Reinforce the right audience patterns: Upload fresh customer lists, refresh custom segments, and remove stale data. Drift often comes from outdated or diluted audience signals.
  • Adjust structure to isolate intent: If a campaign blends high- and low-intent traffic, split it. Give the ad platform a cleaner environment to relearn the right patterns.
  • Refresh creative to repel the wrong users: Creative is a signal. If the wrong people are clicking, your ads are attracting them. Update imagery, language, and value props to realign intent.
  • Let the system stabilize before making another change: After a correction, give the campaign 5-10 days to settle. Overcorrecting creates more drift.

Your job isn’t to fight automation in Google Ads. It’s to guide it.

Drift happens when the machine is left unsupervised with weak or conflicting signals. Strong signal hygiene keeps the system aligned with your real business outcomes.

Once you can detect drift and correct it quickly, you’re finally in a position to build a signal strategy that compounds over time instead of constantly resetting.

The next step is structuring your ad account so every signal reinforces the outcomes you actually want.

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

Building a strategy that actually works in 2026 with signals

If you want to build a signal strategy that becomes a competitive advantage, you have to start with the foundations.

For lead gen

Implement offline conversion imports. The difference between optimizing for a “form fill” and a “$50K closed deal” is the difference between wasting budget and growing a business. 

When “journey-aware bidding” eventually rolls out, it will be a game-changer because we can feed more data about the individual steps of a sale.
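
As a rough illustration of what an offline conversion import looks like in practice, here's a minimal Python sketch that turns a CRM export into a click-based upload file. The headers follow Google's click-based import template, but verify them against the current help documentation before uploading; the CRM field names, conversion name, and GCLID values here are hypothetical placeholders.

```python
import csv

# Hypothetical CRM export: each closed deal retained the GCLID that was
# captured when the lead first clicked the ad.
closed_deals = [
    {"gclid": "EAIaIQobChMI...", "closed_at": "2026-01-15 14:32:00", "value": 50000},
    {"gclid": "Cj0KCQiA...",     "closed_at": "2026-01-16 09:05:00", "value": 12000},
]

# Headers follow Google's click-based offline conversion template;
# double-check the current help doc before uploading.
with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for deal in closed_deals:
        writer.writerow([deal["gclid"], "Closed Deal",
                         deal["closed_at"], deal["value"], "USD"])
```

Uploading closed-deal values like this, instead of counting every form fill equally, is what trains the system toward the $50K deal rather than the cheap lead.
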

For ecommerce

Use value-based bidding. Don’t just count conversions. Differentiate between a customer buying a $20 accessory and one buying a $500 hero product.

Segment your data

Don’t just dump everyone into one list. A list of 5,000 recent purchasers is worth far more than 50,000 people who visited your homepage two years ago. 

Stale data hurts performance by teaching the algorithm to find people who matched your business 18 months ago, not today.
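
A minimal sketch of that hygiene step, assuming a hypothetical customer export: keep only recent purchasers, then normalize and SHA-256 hash the emails the way Customer Match expects (confirm the current formatting rules in Google's documentation before uploading). The 180-day window is an arbitrary assumption; choose one that matches your sales cycle.

```python
import hashlib
import pandas as pd

# Hypothetical export of the full customer file.
customers = pd.DataFrame({
    "email": ["Jane.Doe@example.com ", "old.buyer@example.com"],
    "last_purchase": pd.to_datetime(["2026-01-20", "2024-06-02"]),
})

# Keep only recent purchasers so the list teaches the algorithm who
# matches your business today, not 18 months ago.
cutoff = pd.Timestamp.now() - pd.Timedelta(days=180)  # assumed window
recent = customers[customers["last_purchase"] >= cutoff].copy()

# Customer Match expects normalized (trimmed, lowercased) emails,
# hashed with SHA-256.
recent["hashed_email"] = (
    recent["email"].str.strip().str.lower()
    .map(lambda e: hashlib.sha256(e.encode()).hexdigest())
)

recent[["hashed_email"]].to_csv("customer_match_upload.csv", index=False)
```
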

Separate brand and nonbrand campaigns

Brand traffic carries radically different intent and conversion rates than nonbrand. 

Mixing these campaigns forces the algorithm to average two incompatible behaviors, which muddies your signals and inflates your ROAS expectations. 

Brand should be isolated so it doesn’t subsidize poor nonbrand performance or distort bidding decisions in the ad platform.

Don’t mix high-ticket and low-ticket products under one ROAS target

A $600 product and a $20 product do not behave the same in auction-time bidding. 

When you put them in the same campaign with a single 4x ROAS target, the algorithm will get confused. 

This trains the system away from your hero products and toward low-value volume.

Centralize campaigns for data density, but only when the data belongs together

Google’s automation performs best when it has enough consistent, high-quality data to recognize patterns. That means fewer, stronger campaigns are better, as long as the signals inside them are aligned.

Centralize campaigns when products share similar price points, margins, audiences, and intent. Decentralize campaigns when mixing them would pollute the signal pool.

The competitive advantage of 2026

When everyone has access to the same automation, the only real advantage left is the quality of the signals you feed it. 

Your job is to protect those signals, diagnose pollution early, and correct drift before the system locks onto the wrong patterns.

Once you build a deliberate signal strategy, Google’s automation stops being a constraint and becomes leverage. You stay in the loop, and the machine does the heavy lifting.

Anthropic says Claude will remain ad-free as ChatGPT tests ads

Anthropic is drawing the line against advertising in AI chatbots. Claude will remain ad-free, the company said, even as rival AI platforms experiment with sponsored messages and branded placements inside conversations.

  • Ads inside AI chats would erode trust, warp incentives, and clash with how people actually use assistants like Claude (for work, problem-solving, and sensitive topics), Anthropic said in a new blog post.

Why we care. Anthropic’s position removes Claude, and its user base of 30 million, from the AI advertising equation. Brands shouldn’t expect sponsored links, conversations, or responses inside Claude. Meanwhile, ChatGPT is about to give brands the opportunity to reach an estimated 800 million weekly users.

What’s happening. AI conversations are fundamentally different from search results or social feeds, where users expect a mix of organic and paid content, Anthropic said:

  • Many Claude interactions involve personal issues, complex technical work, or high-stakes thinking. Dropping ads into those moments would feel intrusive and could quietly influence responses in ways users can’t easily detect.
  • Ad incentives tend to expand over time, gradually optimizing for engagement rather than genuine usefulness.

Incentives matter. This is a business-model decision, not just a product preference, Anthropic said:

  • An ad-free assistant can focus entirely on what helps the user — even if that means a short exchange or no follow-up at all.
  • An ad-supported model, by contrast, creates pressure to surface monetizable moments or keep users engaged longer than necessary.
  • Once ads enter the system, users may start questioning whether recommendations are driven by help or by commerce.

Anthropic isn’t rejecting commerce. Claude will still help users research, compare, and buy products when they ask. The company is also exploring “agentic commerce,” where the AI completes tasks like bookings or purchases on a user’s behalf.

  • Commerce should be triggered by the user, not by advertisers, Anthropic said.
  • The same rule applies to third-party integrations like Figma or Asana. These tools will remain user-directed, not sponsored.

Super Bowl ad. Anthropic is making the argument publicly and aggressively. In a Super Bowl debut, the company mocked intrusive AI advertising by inserting fake product pitches into personal conversations. The ad closed with a clear message: “Ads are coming to AI. But not to Claude.”

  • The campaign appears to be a direct shot at OpenAI, which has announced plans to introduce ads into ChatGPT.
  • Here’s the ad:

Claude’s blog post. Claude is a space to think

OpenAI responds. OpenAI CEO Sam Altman posted some thoughts on X. Some of the highlights:

  • “…I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that.
  • “I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it.
  • “Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.
  • “We will continue to work hard to make even more intelligence available for lower and lower prices to our users.”

DOJ and states appeal Google search antitrust remedies ruling

The U.S. Justice Department and a coalition of states plan to appeal a federal judge’s remedies ruling in the Google search antitrust case.

The appeal challenges a decision that found Google illegally monopolized search but stopped short of imposing major structural changes, such as forcing a divestiture of Chrome or banning default search deals outright.

What’s happening. The DOJ and state attorneys general filed notices of appeal yesterday, challenging U.S. District Judge Amit Mehta’s September remedies ruling, Bloomberg and Reuters reported.

Why we care. The appeal means we still don’t know how much Google will keep controlling where search gets placed. And that control basically decides who wins traffic. If stricter fixes happen, it could change default search settings, open the door to rival search engines, and shift how people use search across devices.

Yes, but. The DOJ and states haven’t detailed their legal arguments. Court filings didn’t specify which parts of the ruling they will challenge, though attention is expected to focus on Chrome and Google’s default search deal with Apple.

What to watch. The U.S. Court of Appeals for the D.C. Circuit is expected to hear the case later this year. For now, it’s business as usual for Google — though its most important contracts now face annual review, and the risk of tougher remedies remains firmly on the table.

What they’re saying. David Segal, Yelp’s vice president of public policy, welcomed the appeal. In a statement shared with Search Engine Land, Yelp said the trial court’s remedies do not go far enough to restore real competition in search:

  • “Unfortunately, the measures put forth in the trial court’s remedy decision are unlikely to restore competition — for instance, it allows for Google to continue to pay third parties for default placement in browsers and devices, which was the primary mechanism by which Google unlawfully foreclosed competition to begin with.
  • “Internet users, online advertisers and others who rely on and seek to compete in the industry deserve a level playing field with more, higher quality, and fairer search options — and the need for a more competitive space is all the more clear as Google seeks to leverage its vast power over the web, especially search indexing and ranking, to come to dominate the GenAI space.”

How Google Ads quality score really affects your CPCs

If your CPCs keep climbing, the cause may not be your bid strategy, your budget, or even your competitors.

You might be suffering from low ad quality. 

Let’s break down the most foundational — and most misunderstood — metric in your Google Ads account. If you want to stop overpaying Google and start winning auctions on merit, you need to understand how the 1-to-10 Quality Score actually works.

The difference between Quality Score, Ad Strength, and Optimization Score

Before we dive in, let’s clear up the confusion. Google shows a lot of “scores” and “diagnostics,” and you can safely ignore most of them. Quality Score is the exception.

  • Ad strength is an ad-level diagnostic. It checks whether your responsive ad follows best practices, like having enough headlines and descriptions. It has zero impact on auction performance.
  • Optimization score is a sales metric. It measures how many Google recommendations you’ve reviewed. It does not reflect real campaign performance.
  • Quality Score is different. It’s foundational. This keyword-level diagnostic summarizes the quality of your ads. Along with your bid, it determines Ad Rank. Ad Rank determines whether your ad appears at all, where it appears on the SERP, and how much you pay per click.
    • The formula is simple: Ad Rank = price × quality. The 1–10 score you see is only a summary, but it reflects the real-time quality calculation Google runs on every single search. (The sketch below walks through this arithmetic.)
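
Here's that arithmetic in a short sketch, using the commonly cited simplified auction model. Real auctions factor in more than this (ad formats, context, and thresholds), and every number below is hypothetical.

```python
# Simplified model: Ad Rank = bid x Quality Score, and you pay just
# enough to beat the ad rank below you. All numbers are hypothetical.
advertisers = [
    {"name": "You",          "bid": 2.00, "quality_score": 8},
    {"name": "Competitor A", "bid": 4.00, "quality_score": 3},
    {"name": "Competitor B", "bid": 2.50, "quality_score": 4},
]

for a in advertisers:
    a["ad_rank"] = a["bid"] * a["quality_score"]

ranked = sorted(advertisers, key=lambda a: a["ad_rank"], reverse=True)

for pos, a in enumerate(ranked):
    if pos + 1 < len(ranked):
        # Classic simplified CPC: next ad rank / your Quality Score + $0.01.
        a["cpc"] = ranked[pos + 1]["ad_rank"] / a["quality_score"] + 0.01
    else:
        a["cpc"] = a["bid"]  # simplification for the last position
    print(f"{pos + 1}. {a['name']}: Ad Rank {a['ad_rank']:.0f}, "
          f"pays about ${a['cpc']:.2f} per click")
```

In this toy auction, you bid half of what Competitor A bids, win the top spot anyway, and pay roughly $1.51 per click while they pay over $3. That is Quality Score affecting your CPCs.
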

Setting up your dashboard: How to find your Quality Score

You can’t fix what you can’t see. To get started, go to your Keywords report in Google Ads and add these four columns:

  • Quality Score
  • Exp. CTR
  • Ad Relevance
  • Landing Page Exp.

When you analyze Quality Score, don’t judge keywords in isolation. You’ll drive yourself crazy. Look for patterns at the ad group level instead.

If most keywords have a Quality Score of 7 or higher, you’re in good shape. If most are at 5 or below, that’s your cue to roll up your sleeves and improve ad quality.
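
If you'd rather not eyeball the report, a small script can surface those ad group patterns for you. A minimal sketch, assuming you've downloaded the Keywords report as a CSV; the file and column names are hypothetical and may differ from your actual export.

```python
import pandas as pd

# Hypothetical export of the Keywords report with the columns added above.
df = pd.read_csv("keywords_report.csv")  # e.g., Ad group, Keyword, Quality Score

# Exports often contain "--" for keywords without a score; drop those rows.
df["Quality Score"] = pd.to_numeric(df["Quality Score"], errors="coerce")
df = df.dropna(subset=["Quality Score"])

summary = (
    df.groupby("Ad group")["Quality Score"]
      .agg(keywords="count", median_qs="median")
      .reset_index()
)

# Ad groups where the typical keyword sits at 5 or below need work first.
needs_work = summary[summary["median_qs"] <= 5]
print(needs_work.sort_values("median_qs"))
```
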

The three core components of Quality Score and how to fix them

1. Ad Relevance: The “message match”

This is the only part of Quality Score fully within your control. It asks one simple question:

  • Does the keyword match the ad and the landing page?

If your ad relevance is generally “Below average,” the fastest fix is Dynamic Keyword Insertion, which automatically inserts your keywords into the ad text (for example, a headline written as {KeyWord:Chocolate Gifts} is replaced by the keyword that triggered the ad). If you prefer a manual approach, make sure the keywords in the ad group actually appear in both the ad copy and the landing page.

2. Landing Page Experience: The “Delivery”

When Google sends users to your site, do they find what they’re looking for? Or do they bounce after two seconds and head back to Google for a better result?

If your landing page experience score is low, start with the PageSpeed Insights tool. A “Below average” rating often points to slow load times, a poor mobile experience, generic content, weak navigation, or all of the above.

3. Expected CTR: The “Popularity Contest”

Google only makes money when users click, so it favors ads people are most likely to click.

If your expected CTR is lagging, start with competitive research:

  • Check Auction Insights to see who you’re competing against.
  • A “Below average” expected CTR means their ads are earning higher click-through rates than yours.

Next, visit the Google Ads Transparency Center and review your competitors’ ads.

  • Are their offers more enticing?
  • Is their copy more clickable?
  • Borrow what works and update your own ads.

If your ads are great but CTR is still low, review the Search terms report. You may be showing for irrelevant queries, which explains why users aren’t clicking on an otherwise awesome ad.

What’s a realistic Quality Score goal?

I’ll be honest: chasing a 10/10 Quality Score everywhere is a waste of time. It’s unrealistic and usually unnecessary.

Instead, do a quick check-up every few months. Find one or two ad groups with lower Quality Scores, identify the most “Below Average” component, and fix that first.

Improving ad quality takes more effort than raising budgets or bids. But it pays off with more clicks at the same — or even lower — cost.

This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.

Google may be cracking down on self-promotional ‘best of’ listicles

Google may finally be starting to address a popular SEO and AI visibility “tactic”: self-promotional “best of” listicles. That’s according to new research by Lily Ray, vice president, SEO strategy and research at Amsive.

Across several SaaS brands hit hard in January, a pattern emerged. Many relied heavily on review-style content that ranked their own product as the No. 1 “best” in its category, often updated with the current year to trigger recency signals.

What’s happening. After the December 2025 core update, Google search results showed increased volatility throughout January, according to Barry Schwartz. Google hasn’t announced or confirmed any updates this year, but the timing aligns with steep visibility losses at several well-known SaaS and B2B brands. According to Ray:

  • In multiple cases, organic visibility dropped 30% to 50% within weeks. The losses were not domain-wide. They were concentrated in blog, guide, and tutorial subfolders.
  • Those sections often contained dozens or hundreds of self-promotional listicles targeting “best” queries. In most cases, the publisher ranked itself first. Many of the articles were lightly refreshed with “2026” in the title, with little evidence of meaningful updates.
  • “Presumably, these drops in Google organic results will also impact visibility across other LLMs that leverage Google’s search results, which extends beyond Google’s ecosystem of AI search products like Gemini and AI Mode [and AI Overviews], but is also likely to include ChatGPT,” Ray wrote.

Why we care. Self-promotional listicles have been a shortcut for influencing rankings and AI-generated answers. If Google is now reevaluating how it treats this content, any strategies built around “best” queries are in danger of imploding.

The gray area. Ranking yourself as the “best” without independent testing, clear methodology, or third-party validation has been considered (by most) to be a sketchy SEO tactic. It isn’t explicitly banned, but it definitely conflicts with Google’s guidance on reviews and trust.

  • Google has repeatedly said that high-quality reviews should show first-hand experience, originality, and evidence of evaluation. Self-promotional listicles often fall short, especially when bias is not disclosed.

Yes, but. Self-promotional listicles likely weren’t the only factor impacting organic visibility. Many affected sites also showed signs of rapid content scaling, automation, aggressive year-based refreshes, and other tactics tied to algorithmic risk.

  • That said, the consistency of self-ranking “best” content among the hardest-hit sites suggests this signal could now carry more weight, especially when used at scale.

What to watch. Whether self-promotional listicles earn citations and organic visibility. Google rarely applies changes evenly or instantly.

  • If this volatility reflects updates to Google’s reviews system, the direction is clear. Content designed primarily to influence rankings, rather than to provide credible and independent evaluation, is becoming a liability.
  • For brands chasing visibility in search and AI, the lesson is familiar: SEO shortcuts work until they don’t.

The analysis. Is Google Finally Cracking Down on Self-Promotional Listicles?

What higher ed data shows about SEO visibility and AI search

AI search hasn’t killed SEO.

Now you have to win twice: the ranking and the citation.

Search Google for almost anything today, and there’s a good chance you’ll see an AI Overview before the organic results, sometimes even before the ads.

That summary frames the query, shortlists sources, and shapes which brands get considered.

(Screenshot: Google AI Overview for “how to measure lead quality”)

AI Overviews now appear for about 21% of all keywords, according to Ahrefs. And 99.9% are triggered by informational intent.

Search rankings still matter. But AI summaries increasingly determine who wins early consideration.

Here’s what we’re seeing: brands aren’t losing visibility because they dropped from position three to seven. They’re losing it because they were never cited in the AI answer at all.

This article draws on research conducted by Search Influence and the online and professional education association UPCEA, which examined how people use AI-assisted search and how organizations are adapting. (Disclosure: I am the CEO of Search Influence.)

Key takeaways

  • AI citations are becoming a trust signal: Being cited by AI influences credibility and early consideration – before users ever compare sources directly.
  • AI visibility is cumulative: AI systems pull from your website, YouTube, LinkedIn, and third-party publishers to assemble answers. Your URL isn’t the only thing that matters.
  • Authority doesn’t guarantee inclusion: Even established brands get sidelined when their content doesn’t match how users ask questions.
  • Most organizations know AI search matters but lack a plan: The gap isn’t awareness – it’s ownership, prioritization, and repeatable process.
  • Content structure affects whether you get cited: Pages built for retrieval, comparison, and decision-making outperform narrative or brand-led content.

Examining both sides of the search equation

To understand what’s happening, we need to look at two sides of the same equation – how people are searching today and how organizations are responding (or aren’t).

“AI Search in Higher Education: How Prospects Search in 2025” surveyed 760 prospective adult learners in March 2025. It examined:

  • Where online discovery happens.
  • How AI tools are used alongside traditional search.
  • Which sources people trust during early research.

While the study focused on professional and continuing education, these behaviors mirror what we’re seeing across industries: more AI-assisted discovery, earlier opinion formation, and trust signals shifting.

A separate snap poll of 30 UPCEA member institutions in October 2025 looked at the other side:

  • AI search strategy adoption.
  • Barriers slowing progress.
  • How visibility in AI-generated results gets tracked.

Together, these datasets show a widening gap between how people search and how organizations have adapted.

So what does the data actually tell us?

The search patterns worth paying attention to

The research highlights several search behaviors that consistently influence how people discover and evaluate options today.

AI tools and AI summaries are influencing trust early

The data makes one thing clear: AI-driven search has moved from the margins into the mainstream.

  • 50% of prospective students use AI tools at least weekly.
  • 79% read Google’s AI Overviews when they appear.
  • 1 in 3 trust AI tools as a source for program research.
  • 56% are more likely to trust a brand cited by AI.

Trust is forming earlier now, often before users compare sources directly.

If you’ve been putting off your AI search strategy because “people don’t trust AI,” the data says otherwise. AI citations are becoming a credibility signal – a trust shortcut before deeper research begins.

Search behavior is diversified

Search doesn’t happen in one place or follow one clean path anymore.

  • 84% of prospective students use traditional search engines during research.
  • 61% use YouTube.
  • 50% use AI tools.

These behaviors aren’t sequential. Users move between surfaces, carrying context with them.

What they see in an AI summary influences how they read a search result. A YouTube video can establish trust before a website ever earns a click.

This is where many strategies fall out of sync. Teams optimize one channel at a time – usually their website – and treat everything else as optional.

But AI search engines pull from everywhere your brand has a presence:

  • Your website.
  • Your YouTube channel.
  • Your LinkedIn content.
  • Third-party and publisher sites.

Your AI credibility is cumulative. It’s built anywhere your brand shows up, not just where you own the URL.

Search engines and brand-owned websites still matter

The rise of AI search doesn’t mean the end of traditional search. It raises the bar for it.

Even as AI summaries reshape early trust, people still rely heavily on first-party sources and organic results when they evaluate options:

  • 63% rely on brand-owned websites during research.
  • 77% trust university-owned websites more than other sources.
  • 82% are more likely to consider options that appear on the first page of search results.

AI engines prioritize content that search engines can already crawl, interpret, and trust.

If your core content isn’t clearly structured, accessible, and eligible to rank in traditional search, it’s far less likely to be pulled into AI-generated answers.

Dig deeper: Your website still matters in the age of AI

Organizational readiness lags behind

Most organizations recognize that AI search is reshaping discovery. Far fewer have translated that awareness into coordinated action.

AI search strategy adoption remains uneven

Most institutions sit somewhere between curiosity and commitment:

  • 60% are in the early stages of exploring AI search.
  • 30% have a formal AI search strategy in place.
  • 10% haven’t started or believe AI search will have limited impact.

The majority of teams know something important is happening. But ownership, process, and prioritization remain unresolved.

What’s slowing progress

When asked what’s holding them back, institutions cited execution constraints:

  • 70% report limited bandwidth or competing priorities.
  • 37% report a lack of in-house expertise or training.
  • 27% report unclear ROI, leadership buy-in, or uncertainty around how AI search works.

For many organizations, AI search has entered the roadmap conversation. It just hasn’t earned consistent operational focus yet. (Sound familiar?)

Dig deeper: Why most SEO failures are organizational, not technical

What teams say they’re prioritizing

When teams do take action, their priorities cluster around two themes:

  • 59% focus on the accuracy of AI-generated information about their offerings.
  • 48% focus on improving visibility and competitive positioning.

Those goals are linked. Clear, structured information makes it easier for AI systems to represent a brand. Visibility follows clarity. When that clarity is missing, AI fills in the blanks using third-party sources and competitor content.

Tracking AI visibility remains inconsistent

AI visibility tracking varies widely:

  • 57% know their institution appears in AI-generated answers.
  • 27% have seen their brand referenced occasionally but don’t actively monitor it.
  • 13% are unsure whether they appear in AI-generated responses at all.

Among teams that do track AI visibility:

  • 64% use dedicated tools or formal tracking methods.
  • 29% rely on informal checks or don’t track consistently.

This creates a familiar blind spot. Teams feel the impact of AI search anecdotally but lack consistent visibility into where, how, and why their brand appears.

(Chart: UPCEA snap poll, October 2025)

Dig deeper: How to track visibility across AI platforms

Why higher ed is a useful lens

Universities bring everything search engines are supposed to reward:

  • High domain authority.
  • Deep, long-standing content libraries.
  • Strong brand recognition.

Yet in AI-generated answers, those advantages often don’t translate. When AI systems generate answers, they cite content that already matches the way users ask questions. That often means:

  • Comparisons.
  • “Top tools,” “top programs,” or “top options” lists.
  • Third-party explainers written about brands.

Those formats are dominated by aggregators and publishers – not the institutions themselves.

(Screenshot: Google AI Overview for “online MBA programs”)

AI doesn’t look for the biggest brand. It looks for the best answer. Higher education shows what happens when brands rely on authority alone and why every industry needs to rethink how it publishes.

So what do you do about it?

1. Get your foundations in order before chasing AI visibility

The most common question right now: “How do we show up in AI results?”

In many cases, I think the honest answer is to fix what’s already broken.

AI systems rely on the same signals that traditional search does: crawlability, structure, clarity. If your pages are blocked, poorly organized, or weighed down by technical debt, they won’t surface cleanly anywhere.

We’ve seen teams invest energy in AI conversations while core pages still struggle with:

  • Indexing issues.
  • Bloated or unclear page structures.
  • Content written for storytelling, not retrieval.

Start with your traditional SEO foundation. AI systems can only work with what’s structurally sound.

Dig deeper: AI search is growing, but SEO fundamentals still drive most traffic

2. Optimize content for retrieval, not just reading

AI search engines favor content that can be lifted cleanly and reused without interpretation. The job of content shifts from “telling a complete story” to “delivering clear, extractable answers.”

Many brand pages technically contain the right information, but it’s buried in long-form prose or brand language that requires context to understand.

Content that performs well in AI answers tends to:

  • Lead with direct answers, not setup.
  • Use headings that map to search intent.
  • Separate ideas into self-contained sections.
  • Avoid forcing readers (or machines) to infer meaning.

This isn’t about shortening content. It’s about sharpening it. When intent is obvious, AI knows exactly what to pull and when to cite you.

3. Compete on format, not just authority

If AI keeps citing comparisons, lists, and explainers – and it does – brands probably need to own those formats themselves.

AI systems pull from content that already reflects how people evaluate options. When those pages don’t exist on your site, AI cites the aggregators and publishers instead.

To compete, brands need to publish:

  • Comparison pages that reflect real decision criteria.
  • “Best for X” content tied to specific use cases.
  • Standalone explainers that help buyers choose.

Put simply: publish what AI actually wants to cite.

Dig deeper: How to create answer-first content that AI models actually cite

4. Prioritize third-party platforms

Your website shouldn’t be doing all the work.

AI answers routinely pull from a mix of sources:

  • YouTube videos.
  • LinkedIn posts.
  • Instagram content.
  • Reddit threads (when relevant).
  • Brand content published on third-party platforms.

In some cases, being cited from a third-party platform matters more than where your site ranks.

We’ve seen AI Overviews where a brand’s YouTube video is cited alongside their webpage and third-party sources – all shaping the same answer. That blended source set is becoming the norm.

(Screenshot: Google AI Overview for “virtual data room”)

If your content strategy only prioritizes on-site publishing, you’re narrowing your chances of earning AI visibility.

Dig deeper: YouTube is no longer optional for SEO in the age of AI Overviews

Where things stand

AI search is moving faster than most SEO strategies are built to respond.

  • Discovery is happening earlier.
  • Trust is being assigned sooner.
  • Visibility is being decided before rankings ever come into play.

The question isn’t whether AI search will matter to your industry.

It’s whether you’ll be cited, overlooked, or summarized by someone else.

The brands that adapt now – not later – will be the ones that win.

Google lists Googlebot file limits for crawling

Google has updated two of its help documents to explain Googlebot’s crawl limits: specifically, how much content Googlebot will consume by file type and format.

The limits. The limits, some of which were documented already and are not new, include:

  • 15MB for web pages: Google wrote, “By default, Google’s crawlers and fetchers only crawl the first 15MB of a file.”
  • 64MB for PDF files: Google wrote, “When crawling for Google Search, Googlebot crawls the first 2MB of a supported file type, and the first 64MB of a PDF file.”
  • 2MB for supported file types: Google wrote, “When crawling for Google Search, Googlebot crawls the first 2MB of a supported file type, and the first 64MB of a PDF file.”

Note: these limits are large, and the vast majority of websites never come close to them.

Full text. Here is what Google posted fully in its help documents:

  • “By default, Google’s crawlers and fetchers only crawl the first 15MB of a file. Any content beyond this limit is ignored. Individual projects may set different limits for their crawlers and fetchers, and also for different file types. For example, a Google crawler may set a larger file size limit for a PDF than for HTML.”
  • “When crawling for Google Search, Googlebot crawls the first 2MB of a supported file type, and the first 64MB of a PDF file. From a rendering perspective, each resource referenced in the HTML (such as CSS and JavaScript) is fetched separately, and each resource fetch is bound by the same file size limit that applies to other files (except PDF files). Once the cutoff limit is reached, Googlebot stops the fetch and only sends the already downloaded part of the file for indexing consideration. The file size limit is applied on the uncompressed data. Other Google crawlers, for example Googlebot Video and Googlebot Image, may have different limits.”

Why we care. It is important to know these limits exist, but again, most sites will likely never come close to them. That said, these are the documented limits of Googlebot’s crawling.
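
If you still want to sanity-check an unusually heavy page, a quick script can show how close its uncompressed HTML gets to the 15MB ceiling. A minimal sketch, assuming the requests library; requests transparently decompresses gzip and deflate responses, so the measured size approximates the uncompressed data the limit applies to.

```python
import requests

GOOGLEBOT_HTML_LIMIT = 15 * 1024 * 1024  # 15MB, applied to uncompressed data

def check_page_size(url: str) -> bool:
    resp = requests.get(url, timeout=30)
    size = len(resp.content)  # decompressed body for gzip/deflate responses
    pct = size / GOOGLEBOT_HTML_LIMIT * 100
    print(f"{url}: {size / 1024:.0f} KB ({pct:.2f}% of the 15MB limit)")
    return size <= GOOGLEBOT_HTML_LIMIT

check_page_size("https://example.com/")
```
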

Why Google’s Performance Max advice often fails new advertisers

When Google reps push Performance Max before your account is ready

One of the biggest reasons new advertisers end up in underperforming Performance Max campaigns is simple: they followed Google’s advice.

Google Ads reps are often well-meaning and, in many cases, genuinely helpful at a surface level. 

But it’s critical for advertisers – especially new ones – to understand who those reps work for, how they’re incentivized, and what their recommendations are actually optimized for.

Before defaulting to Google’s newest recommendation, it’s worth taking a step back to understand why the “shiny new toy” isn’t always the right move – and how advertisers can better advocate for strategies that serve their business, not just the platform.

Google reps are not strategic consultants

Google Ads reps play a specific role, and that role is frequently misunderstood.

They do not:

  • Manage your account long term.
  • Know your margins, cash flow, or true break-even ROAS.
  • Understand your internal goals, inventory constraints, or seasonality.
  • Get penalized when your ads lose money.

Their responsibility is not to build a sustainable acquisition strategy for your business. Instead, their primary objectives are to:

  • Increase platform and feature adoption.
  • Drive spend into newer campaign types.
  • Push automation, broad targeting, and machine learning.

That distinction matters.

Performance Max is Google’s flagship campaign type. It uses more inventory, more placements, and more automation across the entire Google ecosystem. 

From Google’s perspective, it’s efficient, scalable, and profitable. From a new advertiser’s perspective, however, it’s often premature and misaligned with early-stage needs.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

Performance Max benefits Google before it benefits you

Performance Max often benefits Google before it benefits the advertiser. 

Because it automatically spends across Search, Shopping, Display, YouTube, Discover, and Gmail, Google is given near-total discretion over where your budget is allocated. In exchange, advertisers receive limited visibility into what’s actually driving results.

For Google, this model is ideal. It monetizes more surfaces, accelerates adoption of automated bidding and targeting, and increases overall ad spend across the board. For advertisers – particularly those with new or low-data accounts – the reality looks different.

New accounts often end up paying for upper-funnel impressions before meaningful conversion data is available. 

Budgets are diluted across lower-intent placements, CPCs can spike unpredictably, and when performance declines, there’s very little insight into what to fix or optimize. 

You’re often left guessing whether the issue is creative, targeting, bidding, tracking, or placement.

This misalignment is exactly why Google reps so often recommend Performance Max even when an account lacks the data foundation required for it to succeed.

‘Best practice’ doesn’t mean best strategy for your business

What Google defines as “best practice” does not automatically translate into the best strategy for your business.

Google reps operate from generalized, platform-wide guidance rather than a custom account strategy. 

Their recommendations are typically driven by aggregated averages, internal adoption goals, and the products Google is actively promoting next – not by the unique realities of your business.

They are not built around your specific business model, your customer acquisition cost tolerance, your testing and learning roadmap, or your need for early clarity and control. 

As a result, strategies that may work well at scale for mature, data-rich accounts often fail to deliver the same results for new or growing advertisers.

What’s optimal for Google at scale isn’t always optimal for an advertiser who is still validating demand, pricing, and profitability.

Dig deeper: Google Ads best practices: The good, the bad and the balancing act

Smart advertisers earn automation – they don’t start with it

Smart advertisers understand that automation is something you earn, not something you start with.

Even today, Google Shopping Ads remain one of the most effective tools for new ad accounts because they are controlled, intent-driven, and rooted in real purchase behavior.

Shopping campaigns rely far less on historical conversion volume and far more on product feed relevance, pricing, and search intent.

That makes them uniquely well-suited for advertisers who are still learning what works, what converts, and what deserves more budget.

To understand how this difference plays out in practice, consider what happened to a small chocolatier that came to me after implementing Performance Max based on guidance from their dedicated Google Ads rep.

A real-world example: When Performance Max goes wrong

The challenge was straightforward: The retailer’s Google Ads account was new, and Performance Max was positioned as the golden ticket to quickly building nationwide demand.

The result was disastrous.

  • Over $3,000 was spent with a return of just one purchase.
  • Traffic to the website and YouTube channel remained low despite the spend.
  • CPCs climbed as high as $50 per click.
  • ROAS was effectively nonexistent. 

To make matters worse, conversion tracking had not been set up correctly, causing Google to report inflated and inaccurate sales numbers that didn’t align with Shopify at all.

Understandably, the retailer lost confidence – not just in Performance Max, but in paid advertising as a whole. Before walking away entirely, they reached out to me.

Recognizing that this was a new account with no reliable data, I immediately reverse-engineered the setup into a standard Google Shopping campaign. 

We properly connected Google Ads and Google Merchant Center to Shopify to ensure clean, accurate tracking.

From there, the campaign was segmented by product groups, allowing for intentional bidding and clearer performance signals.

Within two weeks, real sales started coming through.

By the end of the month, the brand had acquired 56 new customers at a $53 cost per lead, with an average order value ranging from $115 to $200. 

More importantly, the account now had clean data, clear winners, and a foundation that could actually support automation in the future.

Dig deeper: The truth about Google Ads recommendations (and auto-apply)

Why Shopping ads still work – and still matter

By starting with Shopping campaigns, advertisers can validate products, pricing, and conversion tracking while building clean, reliable data at the product and SKU level.

This early-stage performance proves demand, highlights top-performing items, and trains Google’s algorithm with meaningful purchase behavior.

Shopping Ads also offer a higher level of control and transparency than Performance Max. 

Advertisers can segment by product category, brand, margin, or performance tier, apply negative keywords, and intentionally allocate budget to what’s actually profitable. 

When something underperforms, it’s clear why – and when something works, it’s easy to scale.

This level of insight is invaluable early on, when every dollar spent should be contributing to learning, not just impressions.

The case for a hybrid approach

Standard Shopping consistently outperforms Performance Max for accounts that require granular control over product groups and bidding – especially when margins vary significantly across SKUs and precise budget allocation matters. 

It allows advertisers to double down on proven winners with exact targeting, intentional bids, and full visibility into performance.

That said, once a Shopping campaign has been running long enough to establish clear performance patterns, a hybrid approach can be extremely effective.

Performance Max can play a complementary role for discovery, particularly for advertisers managing broad product catalogs or limited optimization bandwidth. 

Used selectively, it can help test new products, reach new audiences, and expand beyond existing demand – without sacrificing the stability of core revenue drivers.

While Performance Max reduces transparency and control, pairing it with Standard Shopping for established performers creates a balanced strategy that prioritizes profitability while still allowing room for scalable growth.

Dig deeper: 7 ways to segment Performance Max and Shopping campaigns

Control first, scale second

Google reps are trained to recommend what benefits the platform first, not what’s safest or most efficient for a new advertiser learning their market. 

While Performance Max can be powerful, it only works well when it’s fueled by strong, reliable data – something most new accounts simply don’t have yet.

Advertisers who prioritize predictable performance, cleaner insights, and sustainable growth are better served by starting with Google Shopping Ads, where intent is high, control is stronger, and optimization is transparent. 

By using Shopping campaigns to validate products, understand true acquisition costs, and build confidence in what actually converts, businesses create a solid foundation for automation.

From there, Performance Max can be layered in deliberately and profitably – used as a tool to scale proven success rather than a shortcut that drains budget. 

That approach isn’t anti-Google. It’s disciplined, strategic advertising designed to protect spend and drive long-term results.

Microsoft launches Publisher Content Marketplace for AI licensing

Microsoft Advertising today launched the Publisher Content Marketplace (PCM), a system that lets publishers license premium content to AI products and get paid based on how that content is used.

How it works. PCM creates a direct value exchange. Publishers set licensing and usage terms, while AI builders discover and license content for specific grounding scenarios. The marketplace also includes usage-based reporting, giving publishers visibility into how their content performs and where it creates the most value.

Designed to scale. PCM is designed to avoid one-off licensing deals between individual publishers and AI providers. Participation is voluntary, ownership remains with publishers, and editorial independence stays intact. The marketplace supports everyone from global publishers to smaller, specialized outlets.

Why we care. As AI systems shift from answering questions to making decisions, content quality matters more than ever. As agents increasingly guide purchases, finance, and healthcare choices, ads and sponsored messages will sit alongside — or draw from — premium content rather than generic web signals. That raises the bar for credibility and points to a future where brand alignment with trusted publishers and AI ecosystems directly impacts performance.

Early traction. Microsoft Advertising co-designed PCM with major U.S. publishers, including Business Insider, Condé Nast, Hearst, The Associated Press, USA TODAY, and Vox Media. Early pilots grounded Microsoft Copilot responses in licensed content, with Yahoo among the first demand partners now onboarding.

What’s next. Microsoft plans to expand the pilot to more publishers and AI builders that share a core belief: as the AI web evolves, high-quality content should be respected, governed, and paid for.

The big picture. In an agentic web, AI tools increasingly summarize, reason, and recommend through conversation. Whether the topic is medical safety, financial eligibility, or a major purchase, outcomes depend on access to trusted, authoritative sources — many of which sit behind paywalls or in proprietary archives.

The tension. The traditional web bargain was simple: publishers shared content, and platforms sent traffic back. That model breaks down when AI delivers answers directly, cutting clicks while still depending on premium content to perform well.

Bottom line. If AI is going to make better decisions, it needs better inputs — and PCM is Microsoft’s bet that a sustainable content economy can power the next phase of the agentic web.

Microsoft’s announcement. Building Toward a Sustainable Content Economy for the Agentic Web

Inspiring examples of responsible and realistic vibe coding for SEO

Vibe coding is a new way to create software using AI tools such as ChatGPT, Cursor, Replit, and Gemini. It works by describing to the tool what you want in plain language and receiving written code in return. You can then simply paste the code into an environment (such as Google Colab), run it, and test the results, all without ever actually programming a single line of code.

Collins Dictionary named “vibe coding” word of the year in 2025, defining it as “the use of artificial intelligence prompted by natural language to write computer code.”

In this guide, you’ll understand how to start vibe coding, learn its limitations and risks, and see examples of great tools created by SEOs to inspire you to vibe code your own projects.

Vibe coding variations

While “vibe coding” is used as an umbrella term, there are several subsets of AI-supported coding, including the following:

| Type | Description | Tools |
| --- | --- | --- |
| AI-assisted coding | AI helps write, refactor, explain, or debug code. Used by actual developers or engineers to support their complex work. | GitHub Copilot, Cursor, Claude, Google AI Studio |
| Vibe coding | Platforms that handle everything except the prompt/idea. AI does most of the work. | ChatGPT, Replit, Gemini, Google AI Studio |
| No-code platforms | Platforms that handle everything you ask (“drag and drop” visual updates while the code happens in the background). They tend to use AI but existed long before AI became mainstream. | Notion, Zapier, Wix |

We’ll focus exclusively on vibe coding in this guide. 

With vibe coding, while there’s a bit of manual work to be done, the barrier is still low — you basically need a ChatGPT account (free or paid) and a Google account (free). Depending on your use case, you might also need API access or SEO tool subscriptions such as Semrush or Screaming Frog.

To set expectations: by the end of this guide, you’ll know how to run a small program in the cloud. If you expect to build a SaaS product or software to sell, AI-assisted coding is the more reasonable option, though it involves costs and deeper coding knowledge.

Vibe coding use cases

Vibe coding is great when you’re trying to find outcomes for specific buckets of data, such as finding related links, adding pre-selected tags to articles, or doing something fun where the outcome doesn’t need to be exact.

For example, I built an app that creates a daily drawing for my daughter. I type a phrase about something she told me about her day (e.g., “I had carrot cake at daycare”). The app has some examples of drawing styles I like and some pictures of her. The outputs (drawings) are used as they come from the AI.

When I ask for specific changes, however, the program tends to make things worse, redrawing elements I didn’t ask it to touch. I once asked it to remove a mustache, and it recolored the image instead.

If my daughter were a client who’d scrutinize the output and require very specific changes, I’d need someone who knows Photoshop or similar tools to make specific improvements. In this case, though, the results are good enough. 

Building commercial applications solely on vibe coding may eventually require a company to hire developers to clean up the vibe-coded work. However, for a demo, MVP (minimum viable product), or internal application, vibe coding can be a useful, effective shortcut.

How to create your SEO tools with vibe coding

Using vibe coding to create your own SEO tools requires three steps:

  1. Write a prompt describing your code
  2. Paste the code into a tool such as Google Colab
  3. Run the code and analyze the results

Here’s a prompt example for a tool I built to map related links at scale. After crawling a website using Screaming Frog and extracting vector embeddings (using the crawler’s integration with OpenAI), I vibe coded a tool that would compare the topical distance between the vectors in each URL.

This is exactly what I wrote in ChatGPT:

I need a Google Colab code that will use OpenAI to:

Check the vector embeddings existing in column C. Use cosine similarity to match with two suggestions from each locale (locale identified in Column A). 

The goal is to find which pages from each locale are the most similar to each other, so we can add hreflang between these pages.

I’ll upload a CSV with these columns and expect a CSV in return with the answers.

Then I pasted the code ChatGPT created into Google Colab, a free Jupyter Notebook environment that allows users to write and execute Python code in a web browser. It’s important to run your program by clicking on “Run all” in Google Colab to test whether the output does what you expected.
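
The exact code ChatGPT returns varies from run to run, but a minimal sketch of the core logic looks something like this (the column layout and the two-suggestions-per-locale rule follow the prompt above; the file names and header handling are my assumptions):

    import ast
    import pandas as pd
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    # Load the crawl export; assumes columns A-C are locale, URL, embedding
    df = pd.read_csv("embeddings.csv")
    df.columns = ["locale", "url", "embedding"]

    # Embeddings arrive as strings like "[0.01, -0.23, ...]"; parse into vectors
    vectors = np.array([ast.literal_eval(v) for v in df["embedding"]])
    scores = cosine_similarity(vectors)

    rows = []
    for i, row in df.iterrows():
        for locale in df["locale"].unique():
            if locale == row["locale"]:
                continue  # only match pages across locales, not within one
            candidates = df.index[df["locale"] == locale]
            # Keep the two most topically similar pages from this locale
            top2 = sorted(candidates, key=lambda j: scores[i, j], reverse=True)[:2]
            for j in top2:
                rows.append({"url": row["url"], "suggested_locale": locale,
                             "suggested_url": df.loc[j, "url"],
                             "similarity": scores[i, j]})

    pd.DataFrame(rows).to_csv("hreflang_suggestions.csv", index=False)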

This is how the process works on paper. Like everything in AI, the output may look perfect without actually functioning the way you want.

You’ll likely encounter issues along the way — luckily, they’re simple to troubleshoot.

First, be explicit about the platform you’re using in your prompt. If it’s Google Colab, say the code is for Google Colab. 

You might still end up with code that requires packages that aren’t installed. In this case, just paste the error into ChatGPT and it’ll likely regenerate the code or find an alternative. You don’t even need to know what the package is; just show the error and use the new code. Alternatively, you can ask Gemini directly in Google Colab to fix the issue and update your code in place.
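
For example, a missing package in Colab usually surfaces as an error like the one below; installing it in a new cell (Colab lets you run shell commands by prefixing them with !) and re-running typically resolves it:

    ModuleNotFoundError: No module named 'openai'

    # Run this in a new Colab cell, then click "Run all" again
    !pip install openai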

AI tends to sound confident about everything, even when it returns completely made-up outputs. One time I forgot to say the source data would come from a CSV file, so it simply created fake URLs, traffic, and graphs. Always check and recheck the output, because “it looks good” can sometimes be wrong.

If you’re connecting to an API, especially a paid API (e.g., from Semrush, OpenAI, Google Cloud, or other tools), you’ll need to request your own API key and keep in mind usage costs. 

Should you want an even lower execution barrier than Google Colab, you can try using Replit. 

Simply type your request, and the software creates the code and design and lets you test, all on the same screen. This means a lower chance of coding errors, no copying and pasting, and a URL you can share right away so anyone can see your project, built with a nice design. (You should still check for poor outputs and iterate with prompts until your final app is built.)

Keep in mind that while Google Colab is free (you’ll only spend if you use API keys), Replit charges a monthly subscription and per-usage fee on APIs. So the more you use an app, the more expensive it gets.

Inspiring examples of SEO vibe-coded tools

While Google Colab is the most basic (and easy) way to vibe code a small program, some SEOs are taking vibe coding even further by creating programs that are turned into Chrome extensions, Google Sheets automation, and even browser games.

The goal behind highlighting these tools is not only to showcase great work by the community, but also to inspire you to build and adapt them to your specific needs. Do you wish any of these tools had different features? Perhaps you can build them for yourself — or for the world.

GBP Reviews Sentiment Analyzer (Celeste Gonzalez)

After vibe coding some SEO tools on Google Colab, Celeste Gonzalez, Director of SEO Testing at RicketyRoo Inc, took her vibing skills a step further and created a Chrome extension. “I realized that I don’t need to build something big, just something useful,” she explained.

Her browser extension, the GBP Reviews Sentiment Analyzer, summarizes sentiment analysis for reviews over the last 30 days and review velocity. It also allows the information to be exported into a CSV. The extension works on Google Maps and Google Business Profile pages.

Instead of ChatGPT, Celeste used a combination of Claude (to create high-quality prompts) and Cursor (to paste the created prompts and generate the code).

AI tools used: Claude (Sonnet 4.5 model) and Cursor 

APIs used: Google Business Profile API (free)

Platform hosting: Chrome Extension

Knowledge Panel Tracker (Gus Pelogia)

I became obsessed with the Knowledge Graph in 2022, when I learned how to create and manage my own knowledge panel. Since then, I’ve learned that Google has a Knowledge Graph Search API that lets you check the confidence score for any entity.

This vibe-coded tool checks the score for your entities daily (or at any frequency you want) and returns it in a sheet. You can track multiple entities at once and just add new ones to the list at any time.

The Knowledge Panel Tracker runs completely on Google Sheets, and the Knowledge Graph Search API is free to use. This guide shows how to create and run it in your own Google account, or you can see the spreadsheet here and just update the API key under Extensions > Apps Script. 
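
The spreadsheet itself uses Apps Script, but the underlying call is a single HTTP request, so a minimal Python sketch of the same check looks like this (the entity name and placeholder API key here are my assumptions):

    import requests

    API_KEY = "YOUR_API_KEY"  # created in the Google Cloud console
    entity = "Search Engine Land"  # any entity you want to track

    resp = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": entity, "key": API_KEY, "limit": 1},
    )
    resp.raise_for_status()

    for item in resp.json().get("itemListElement", []):
        # resultScore is the confidence score the tracker logs over time
        print(item["result"]["name"], item["resultScore"])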

AI models used: ChatGPT 5.1

APIs used: Google Knowledge Graph API (free)

Platform hosting: Google Sheets

Inbox Hero Game (Vince Nero)

How about vibe coding a link building asset? That’s what Vince Nero from BuzzStream did when creating the Inbox Hero Game. It requires you to use your keyboard to accept or reject a pitch within seconds. The game is over if you accept too many bad pitches.

Inbox Hero Game is certainly more complex than running a piece of code on Google Colab, and it took Vince about 20 hours to build it all from scratch. “I learned you have to build things in pieces. Design the guy first, then the backgrounds, then one aspect of the game mechanics, etc.,” he said.

The game was coded in HTML, CSS, and JavaScript. “I uploaded the files to GitHub to make it work. ChatGPT walked me through everything,” Vince explained.

According to him, the longer the prompt continued, the less effective ChatGPT became, “to the point where [he’d] have to restart in a new chat.” 

This issue was one of the hardest and most frustrating parts of creating the game. Vince would add a new feature (e.g., score), and ChatGPT would “guarantee” it found the error, update the file, but still return with the same error. 

In the end, Inbox Hero Game is a fun game that demonstrates it’s possible to create a simple game without coding knowledge, though perfecting it would be more feasible with a developer.

AI models used: ChatGPT

APIs used: None

Platform hosting: Webpage

Vibe coding with intent

Vibe coding won’t replace developers, and it shouldn’t. But as these examples show, it can responsibly unlock new ways for SEOs to prototype ideas, automate repetitive tasks, and explore creative experiments without heavy technical lift. 

The key is realism: Use vibe coding where precision isn’t mission-critical, validate outputs carefully, and understand when a project has outgrown “good enough” and needs additional resources and human intervention.

When approached thoughtfully, vibe coding becomes less about shipping perfect software and more about expanding what’s possible — faster testing, sharper insights, and more room for experimentation. Whether you’re building an internal tool, a proof of concept, or a fun SEO side project, the best results come from pairing curiosity with restraint.

LinkedIn: AI-powered search cut traffic by up to 60%

AEO playbook

AI-powered search gutted LinkedIn’s B2B awareness traffic. Across a subset of topics, non-brand organic visits fell by as much as 60% even while rankings stayed stable, the company said.

  • LinkedIn is moving past the old “search, click, website” model and adopting a new framework: “Be seen, be mentioned, be considered, be chosen.”

By the numbers. In a new article, LinkedIn said its B2B organic growth team started researching Google’s Search Generative Experience (SGE) in early 2024. By early 2025, when SGE evolved into AI Overviews, the impact became significant.

  • Non-brand, awareness-driven traffic declined by up to 60% across a subset of B2B topics.
  • Rankings stayed stable, but click-through rates fell (by an undisclosed amount).

Yes, but. LinkedIn’s “new learnings” are more like a rehash of established SEO/AEO best practices. Here’s what LinkedIn’s content-level guidance consists of:

  • Use strong headings and a clear information hierarchy.
  • Improve semantic structure and content accessibility.
  • Publish authoritative, fresh content written by experts.
  • Move fast, because early movers get an edge.

Why we care. These tactics should all sound familiar. These are technical SEO and content-quality fundamentals. LinkedIn’s article offers little new in terms of tactics. It’s just updated packaging for modern SEO/AEO and AI visibility.

Dig deeper. How to optimize for AI search: 12 proven LLM visibility tactics

Measurement is broken. LinkedIn said its big challenge is the “dark” funnel. It can’t quantify how visibility in LLM answers impacts the bottom line, especially when discovery happens without a click.

  • LinkedIn said its B2B marketing websites saw triple-digit growth in LLM-driven traffic and that it can track conversions from those visits.
    • Yes, but: Many websites are seeing triple-digit (or greater) growth in LLM-driven traffic because it’s an emerging channel. That said, this is still a tiny share of overall traffic right now (1% or less for most sites).

What LinkedIn is doing. LinkedIn created an AI Search Taskforce spanning SEO, PR, editorial, product marketing, product, paid media, social, and brand. Key actions included:

  • Correcting misinformation that showed up in AI responses.
  • Publishing new owned content optimized for generative visibility.
  • Testing LinkedIn (social) content to validate its strength in AI discovery.

Is it working? LinkedIn said early tests produced a meaningful lift in visibility and citations, especially from owned content. At least one external datapoint (Semrush, Nov. 10, 2025) suggested that LinkedIn has a structural advantage in AI search:

  • Google AI Mode cited LinkedIn in roughly 15% of responses.
  • LinkedIn was the #2 most-cited domain in that dataset, behind YouTube.

Incomplete story. LinkedIn’s article is an interesting read, but it’s light on specifics. Missing details include:

  • The exact topic set behind the “up to 60%” decline.
  • Exactly how much click-through rates “softened.”
  • Sample size and timeframe.
  • How “industry-wide” comparisons were calculated.
  • What tests were run, what moved citation share, and by how much.

Bottom line. LinkedIn is right that visibility is the new currency. However, it hasn’t shown enough detail to prove its new playbook is meaningfully different from doing some SEO (yes, SEO) fundamentals.

LinkedIn’s article. How LinkedIn Marketing Is Adapting to AI-Led Discovery

Are we ready for the agentic web?

Innovations are coming at marketers and consumers faster than before, raising the question: Are we actually ready for the agentic web?

To answer that question, it’s important to unpack a few supporting ones:

  • What’s the agentic web?
  • How can the agentic web be used?
  • What are the pros and cons of the agentic web?

It’s important to note that this article isn’t a mandate for AI skeptics to abandon the rational questions they have about the agentic web. 

Nor is it intended to place any judgment on how you, as a consumer or professional, engage with the agentic web.

LinkedIn poll on Copilot Checkout

With thoughts and feelings so divided on the agentic web, this article aims to provide clear insight into how to think about it in earnest, without the branding or marketing fluff.

Disclosure: I am a Microsoft employee and believe in the path Microsoft’s taking with the agentic web. However, this article will attempt to be as platform-agnostic as possible.

What’s the agentic web? 

The agentic web refers to sophisticated tools, or agents, trained on our preferences that act with our consent to accomplish time-consuming tasks.

In simple terms, when I use one-click checkout, I allow my saved payment information to be passed to the merchant’s accounts receivable systems. 

Neither the merchant nor I must write down all the details or be involved beyond consenting to send and receive payment.

For fun, I put this question to four different AI models, and the responses are telling: 

  • Copilot: “The agentic web is an emerging layer of the internet where AI agents can reason and take actions on behalf of people or organizations, turning expressed intent into outcomes while preserving user choice, transparency, and control.” 
  • Gemini: “The agentic web is an evolution of the internet where autonomous AI agents – rather than just human users – interact with websites, APIs, and each other to perform complex tasks and make decisions on our behalf.”
  • Perplexity: “The agentic web is an emerging phase of the internet where autonomous AI agents act as the primary interface, navigating, interpreting, and taking actions across websites and services on behalf of users with minimal human involvement.” 
  • Claude: “The agentic web is an evolution of the internet where AI agents autonomously navigate, interact with, and complete tasks across websites and services on behalf of users.” 

I begin with how different models answer the question because it’s important to understand that each one is trained on different information, and outcomes will inevitably vary.

It’s worth noting that with the same prompt (define the agentic web in one sentence), three out of four models focus on diminishing the human role in navigating the web, while one makes a point of emphasizing the significance of human involvement: preserving user choice, transparency, and control.

Two out of four refer to the agentic web as a layer or phase rather than an outright evolution of the web. 

This is likely where the sentiment divide on the agentic web stems from.

Some see it as a consent-driven layer designed to make life easier, while others see it as a behemoth that consumes content, critical thinking, and choice.

It’s noteworthy that one model, Gemini, calls out APIs as a means of communication in the agentic web. APIs are essentially structured interfaces that let one system request information or actions from another, based on the task you are attempting to accomplish. 

This matters because APIs will become increasingly relevant in the agentic web, as saved preferences must be organized in ways that are easily understood and acted upon.

Defining the agentic web requires spending some time digging into two important protocols – ACP and UCP.

Dig deeper: AI agents in SEO: What you need to know

Agentic Commerce Protocol: Optimized for action inside conversational AI 

The Agentic Commerce Protocol, or ACP, is designed around a specific moment: when a user has already expressed intent and wants the AI to act.

The core idea behind ACP is simple. If a user tells an AI assistant to buy something, the assistant should be able to do so safely, transparently, and without forcing the user to leave the conversation to complete the transaction.

ACP enables this by standardizing how an AI agent can:

  • Access merchant product data.
  • Confirm availability and price.
  • Initiate checkout using delegated, revocable payment authorization.
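
The published details of ACP are still evolving, so the sketch below is purely illustrative (none of these class or field names come from the actual ACP spec), but it captures the delegated, revocable-authorization idea from the list above in runnable Python:

    from dataclasses import dataclass

    # Purely illustrative; not taken from the ACP specification
    @dataclass
    class Offer:
        price: float
        in_stock: bool

    @dataclass
    class PaymentToken:
        max_amount: float
        revocable: bool = True  # the user can withdraw authorization at any time

    def agent_checkout(offer: Offer, user_confirms: bool) -> str:
        if not offer.in_stock:
            return "out of stock"   # confirm availability before acting
        if not user_confirms:
            return "cancelled"      # nothing happens without explicit consent
        token = PaymentToken(max_amount=offer.price)  # scoped, delegated auth
        # The merchant charges against the token and fulfills the order
        return f"order placed (authorized up to ${token.max_amount:.2f})"

    print(agent_checkout(Offer(price=129.99, in_stock=True), user_confirms=True))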

The experience is intentionally streamlined. The user stays in the conversation. The AI handles the mechanics. The merchant still fulfills the order.

This approach is tightly aligned with conversational AI platforms, particularly environments where users are already asking questions, refining preferences, and making decisions in real time. It prioritizes speed, clarity, and minimal friction.

Universal Commerce Protocol: Built for discovery, comparison, and lifecycle commerce 

The Universal Commerce Protocol, or UCP, takes a broader view of agentic commerce.

Rather than focusing solely on checkout, UCP is designed to support the entire shopping journey on the agentic web, from discovery through post-purchase interactions. It provides a common language that allows AI agents to interact with commerce systems across different platforms, surfaces, and payment providers. 

That includes: 

  • Product discovery and comparison.
  • Cart creation and updates.
  • Checkout and payment handling.
  • Order tracking and support workflows.

UCP is designed with scale and interoperability in mind. It assumes users will encounter agentic shopping experiences in many places, not just within a single assistant, and that merchants will want to participate without locking themselves into a single AI platform.

It’s tempting to frame ACP and UCP as competing solutions. In practice, they address different moments of the same user journey.

ACP is typically strongest when intent is explicit and the user wants something done now. UCP is generally strongest when intent is still forming and discovery, comparison, and context matter.

So what’s the agentic web? Is it an army of autonomous bots acting on past preferences to shape future needs? Is it the web as we know it, with fewer steps driven by consent-based signals? Or is it something else entirely?

The frustrating answer is that the agentic web is still being defined by human behavior, so there’s no clear answer yet. However, we have the power to determine what form the agentic web takes. To better understand how to participate, we now move to how the agentic web can be used, along with the pros and cons.

Dig deeper: The Great Decoupling of search and the birth of the agentic web

How can the agentic web be used? 

Working from the common theme across all definitions, autonomous action, we can move to applications.

Elmer Boutin has written a thoughtful technical view on how schema will impact agentic web compatibility. Benjamin Wenner has explored how PPC management might evolve in a fully agentic web. Both are worth reading.

Here, I want to focus on consumer-facing applications of the agentic web and how to think about them in relation to the tasks you already perform today.

Here are five applications of the agentic web that are live today or in active development.

1. Intent-driven commerce  

A user states a goal, such as “Find me the best running shoes under $150,” and an agent handles discovery, comparison, and checkout without requiring the user to manually browse multiple sites. 

How it works 

Rather than returning a list of links, the agent interprets user intent, including budget, category, and preferences. 

It pulls structured product information from participating merchants, applies reasoning logic to compare options, and moves toward checkout only after explicit user confirmation. 

The agent operates on approved product data and defined rules, with clear handoffs that keep the user in control. 

Implications for consumers and professionals 

Reducing decision fatigue without removing choice is a clear benefit for consumers. For brands, this turns discovery into high-intent engagement rather than anonymous clicks with unclear attribution. 

Strategically, it shifts competition away from who shouts the loudest toward who provides the clearest and most trusted product signals to agents. These agents can act as trusted guides, offering consumers third-party verification that a merchant is as reliable as it claims to be.

2. Brand-owned AI assistants 

A brand deploys its own AI agent to answer questions, recommend products, and support customers using the brand’s data, tone, and business rules.

How it works 

The agent uses first-party information, such as product catalogs, policies, and FAQs. 

Guardrails define what it can say or do, preventing inferences that could lead to hallucinations. 

Responses are generated by retrieving and reasoning over approved context within the prompt.

Implications for consumers and professionals 

Customers get faster and more consistent responses. Brands retain voice, accountability, and ownership of the experience. 

Strategically, this allows companies to participate in the agentic web without ceding their identity to a platform or intermediary. It also enables participation in global commerce without relying on native speakers to verify language.

3. Autonomous task completion 

Users delegate outcomes rather than steps, such as “Prepare a weekly performance summary” or “Reorder inventory when stock is low.” 

How it works 

The agent breaks the goal into subtasks, determines which systems or tools are needed, and executes actions sequentially. It pauses when permissions or human approvals are required. 

These can be provided in bulk upfront or step by step. How this works ultimately depends on how the agent is built. 
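
To make that concrete, here is a toy, purely illustrative Python loop (not modeled on any specific agent framework) showing the difference between bulk upfront approval and step-by-step consent:

    # Illustrative only: a minimal outcome-delegation loop
    def run_goal(subtasks, approved_upfront=False):
        results = []
        for task in subtasks:
            if task["needs_approval"] and not approved_upfront:
                answer = input(f"Approve '{task['name']}'? [y/n] ")  # step-by-step consent
                if answer.lower() != "y":
                    continue  # skip anything the human declines
            results.append(task["action"]())  # execute the subtask
        return results

    run_goal([
        {"name": "draft summary", "needs_approval": False,
         "action": lambda: "summary drafted"},
        {"name": "send email", "needs_approval": True,
         "action": lambda: "email sent"},
    ])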

Implications for consumers and marketers 

We’re used to treating AI like interns, relying on micromanaged task lists and detailed prompts. As agents become more sophisticated, it becomes possible to treat them more like senior employees, oriented around outcomes and process improvement. 

That makes it reasonable to ask an agent to identify action items in email or send templates in your voice when active engagement isn’t required. Human choice comes down to how much you delegate to agents versus how much you ask them to assist.

Dig deeper: The future of search visibility: What 6 SEO leaders predict for 2026

4. Agent-to-agent coordination and negotiation 

Agents communicate with other agents on behalf of people or organizations, such as a buyer agent comparing offers with multiple seller agents. 

How it works 

Agents exchange structured information, including pricing, availability, and constraints. 

They apply predefined rules, such as budgets or policies, and surface recommended outcomes for human approval. 

Implications for consumers and marketers 

Consumers may see faster and more transparent comparisons without needing to manually negotiate or cross-check options. 

For professionals, this introduces new efficiencies in areas like procurement, media buying, or logistics, where structured negotiation can occur at scale while humans retain oversight.

5. Continuous optimization over time 

Agents don’t just act once. They improve as they observe outcomes.

How it works 

After each action, the agent evaluates what happened, such as engagement, conversion, or satisfaction. It updates its internal weighting and applies those learnings to future decisions.

Implications for consumers and professionals 

Consumers experience increasingly relevant interactions over time without repeatedly restating preferences. 

Professionals gain systems that improve continuously, shifting optimization from one-off efforts to long-term, adaptive performance. 

What are the pros and cons of the agentic web? 

Life is a series of choices, and leaning into or away from the agentic web comes with clear pros and cons.

Pros of leaning into the agentic web 

The strongest argument for leaning into the agentic web is behavioral. People have already been trained to prioritize convenience over process. 

Saved payment methods, password managers, autofill, and one-click checkout normalized the idea that software can complete tasks on your behalf once trust is established.

Agentic experiences follow the same trajectory. Rather than requiring users to manually navigate systems, they interpret intent and reduce the number of steps needed to reach an outcome. 

Cons of leaning into the agentic web 

Many brands will need to rethink how their content, data, and experiences are structured so they can be interpreted by automated systems and humans. What works for visual scanning or brand storytelling doesn’t always map cleanly to machine-readable signals.

There’s also a legitimate risk of overoptimization. Designing primarily for AI ingestion can unintentionally degrade human usability or accessibility if not handled carefully. 

Dig deeper: The enterprise blueprint for winning visibility in AI search

Pros of leaning away from the agentic web 

Choosing to lean away from the agentic web can offer clarity of stance. There’s a visible segment of users skeptical of AI-mediated experiences, whether due to privacy concerns, automation fatigue, or a loss of human control. 

Aligning with that perspective can strengthen trust with audiences who value deliberate, hands-on interaction.

Cons of leaning away from the agentic web 

If agentic interfaces become a primary way people discover information, compare options, or complete tasks, opting out entirely may limit visibility or participation. 

The longer an organization waits to adapt, the more expensive and disruptive that transition can become.

What’s notable across the ecosystem is that agentic systems are increasingly designed to sit on top of existing infrastructure rather than replace it outright. 

Avoiding engagement with these patterns may not be sustainable over time. If interaction norms shift and systems aren’t prepared, the combination of technical debt and lost opportunity may be harder to overcome later.

Where the agentic web stands today

The agentic web is still taking form, shaped largely by how people choose to use it. Some organizations are already applying agentic systems to reduce friction and improve outcomes. Others are waiting for stronger trust signals and clearer consent models.

Either approach is valid. What matters is understanding how agentic systems work, where they add value, and how emerging protocols are shaping participation. That understanding is the foundation for deciding when, where, and how to engage with the agentic web.

7 digital PR secrets behind strong SEO performance

Digital PR is about to matter more than ever. Not because it’s fashionable, or because agencies have rebranded link building with a shinier label, but because the mechanics of search and discovery are changing. 

Brand mentions, earned media, and the wider PR ecosystem are now shaping how both search engines and large language models understand brands. That shift has serious implications for how SEO professionals should think about visibility, authority, and revenue.

At the same time, informational search traffic is shrinking. Fewer people are clicking through long blog posts written to target top-of-funnel keywords. 

The commercial value in search is consolidating around high-intent queries and the pages that serve them: product pages, category pages, and service pages. Digital PR sits right at the intersection of these changes.

What follows are seven practical, experience-led secrets that explain how digital PR actually works when it’s done well, and why it’s becoming one of the most important tools in the SEO toolkit.

Secret 1: Digital PR can be a direct sales activation channel

Digital PR is usually described as a link tactic, a brand play or, more recently, as a way to influence generative search and AI outputs.

All of that’s true. What’s often overlooked is that digital PR can also drive revenue directly.

When a brand appears in a relevant media publication, it’s effectively placing itself in front of buyers while they are already consuming related information.

This is not passive awareness. It’s targeted exposure during a moment of consideration.

Platforms like Google are exceptionally good at understanding user intent, interests and recency. Anyone who has looked at their Discover feed after researching a product category has seen this in action. 

Digital PR taps into the same behavioral reality. You are not broadcasting randomly. You are appearing where buyers already are.

Two things tend to happen when this is executed well.

  • If your site already ranks for a range of relevant queries, your brand gains additional recognition in nontransactional contexts. Readers see your name attached to a credible story or insight. That familiarity matters.
  • More importantly, that exposure drives brand search and direct clicks. Some readers click straight through from the article. Others search for your brand shortly after. In both cases, they enter your marketing funnel with a level of trust that generic search traffic rarely has.

This effect is driven by basic behavioral principles such as recency and familiarity. While it’s difficult to attribute cleanly in analytics, the commercial impact is very real. 

We see this most clearly in direct-to-consumer, finance, and health markets, where purchase cycles are active and intent is high.

Digital PR is not just about supporting sales. In the right conditions, it’s part of the sales engine.

Dig deeper: Discoverability in 2026: How digital PR and social search work together

Secret 2: The mere exposure effect is one of digital PR’s biggest advantages

One of the most consistent patterns in successful digital PR campaigns is repetition.

When a brand appears again and again in relevant media coverage, tied to the same themes, categories, or areas of expertise, it builds familiarity. 

That familiarity turns into trust, and trust turns into preference. This is known as the mere exposure effect, and it’s fundamental to how brands grow.

In practice, this often happens through syndicated coverage. A strong story picked up by regional or vertical publications can lead to dozens of mentions across different outlets. 

Historically, many SEOs undervalued this type of coverage because the links were not always unique or powerful on their own.

That misses the point.

What this repetition creates is a dense web of co-occurrences. Your brand name repeatedly appears alongside specific topics, products, or problems. This influences how people perceive you, but it also influences how machines understand you.

For search engines and large language models alike, frequency and consistency of association matter. 

An always-on digital PR approach, rather than sporadic big hits, is one of the fastest ways to increase both human and algorithmic familiarity with a brand.

Secret 3: Big campaigns come with big risk, so diversification matters

Large, creative digital PR campaigns are attractive. They are impressive, they generate internal excitement, and they often win industry praise. The problem is that they also concentrate risk.

A single large campaign can succeed spectacularly, or it can fail quietly. From an SEO perspective, many widely celebrated campaigns underperform because they do not generate the links or mentions that actually move rankings.

This happens for a simple reason. What marketers like is not always what journalists need.

Journalists are under pressure to publish quickly, attract attention, and stay relevant to their audience. 

If a campaign is clever but difficult to translate into a story, it will struggle. If all your budget’s tied up in one idea, you have no fallback.

A diversified digital PR strategy spreads investment across multiple smaller campaigns, reactive opportunities, and steady background activity. 

This increases the likelihood of consistent coverage and reduces dependence on any single idea working perfectly.

In digital PR, reliability often beats brilliance.

Dig deeper: How to build search visibility before demand exists

Secret 4: The journalist is the customer

One of the most common mistakes in digital PR is forgetting who the gatekeeper is.

From a brand’s perspective, the goal might be links, mentions, or authority. 

From a journalist’s perspective, the goal is to write a story that interests readers and performs well. These goals overlap, but they are not the same.

The journalist decides whether your pitch lives or dies. In that sense, they are the customer.

Effective digital PR starts by understanding what makes a journalist’s job easier. 

That means providing clear angles, credible data, timely insights, and fast responses. Think about relevance before thinking about links.

When you help journalists do their job well, they reward you with exposure. 

That exposure carries weight in search engines and in the training data that informs AI systems. The exchange is simple: value for value.

Treat journalists as partners, not as distribution channels.

Secret 5: Product and category page links are where SEO value is created

Not all links are equal.

From an SEO standpoint, links to product, category, and core service pages are often far more valuable than links to blog content. Unfortunately, they are also the hardest links to acquire through traditional outreach.

This is where digital PR excels.

Because PR coverage is contextual and editorial, it allows links to be placed naturally within discussions of products, services, or markets. When done correctly, this directs authority to the pages that actually generate revenue.

As informational content becomes less central to organic traffic growth, this matters even more.

Ranking improvements on high-intent pages can have a disproportionate commercial impact.

A relatively small number of high-quality, relevant links can outperform a much larger volume of generic links pointed at top-of-funnel content.

Digital PR should be planned with these target pages in mind from the outset.

Dig deeper: How to make ecommerce product pages work in an AI-first world

Secret 6: Entity lifting is now a core outcome of digital PR

Search engines have long made it clear that context matters. The text surrounding a link, and the way a brand is described, help define what that brand represents.

This has become even more important with the rise of large language models. These systems process information in chunks, extracting meaning from surrounding text rather than relying solely on links.

When your brand is mentioned repeatedly in connection with specific topics, products, or expertise, it strengthens your position as an entity in that space. This is what’s often referred to as entity lifting.

The effect goes beyond individual pages. Brands see ranking improvements for terms and categories that were not directly targeted, simply because their overall authority has increased. 

At the same time, AI systems are more likely to reference and summarize brands that are consistently described as relevant sources.

Digital PR is one of the most scalable ways to build this kind of contextual understanding around a brand.

Secret 7: Authority comes from relevant sources and relevant sections

Former Google engineer Jun Wu discusses this in his book “The Beauty of Mathematics in Computer Science,” explaining that authority emerges from being recognized as a source within specific informational hubs. 

In practical terms, this means that where you are mentioned matters as much as how big the site is.

A link or mention from a highly relevant section of a large publication can be more valuable than a generic mention on the homepage. For example, a targeted subfolder on a major media site can carry strong authority, even if the domain as a whole covers many subjects.

Effective digital PR focuses on two things: 

  • Publications that are closely aligned with your industry and sections.
  • Subfolders that are tightly connected to the topic you want to be known for.

This is how authority is built in a way that search engines and AI systems both recognize.

Dig deeper: The new SEO imperative: Building your brand

Where digital PR now fits in SEO

Digital PR is no longer a supporting act to SEO. It’s becoming central to how brands are discovered, understood, and trusted.

As informational traffic declines and high-intent competition intensifies, the brands that win will be those that combine relevance, repetition, and authority across earned media. 

Digital PR, done properly, delivers all three.

Google: 75% of crawling issues come from two common URL mistakes

Google discussed its 2025 year-end report on crawling and indexing challenges for Google Search. The biggest issues were faceted navigation and action parameters, which accounted for about 75% of the problems, according to Google’s Gary Illyes. He shared this on the latest Search Off the Record podcast, published this morning.

What is the issue. Crawling issues can slow your site to a crawl, overload your server, and make your website unusable or inaccessible. If a bot gets stuck in an infinite crawling loop, recovery can take time.

  • “Once it discovers a set of URLs, it cannot make a decision about whether that URL space is good or not unless it crawled a large chunk of that URL space,” Illyes said. By then it is too late and your site has slowed to a halt.

The biggest crawling challenges. Based on the report, these are the main issues Google sees:

  • 50% come from faceted navigation. This is common on ecommerce sites, where endless filters for size, color, price, and similar options create near-infinite URL combinations.
  • 25% come from action parameters. These are URL parameters that trigger actions instead of meaningfully changing page content.
  • 10% come from irrelevant parameters. This includes session IDs, UTM tags, and other tracking parameters added to URLs.
  • 5% come from plugins or widgets. Some plugins and widgets generate problematic URLs that confuse crawlers.
  • 2% come from other “weird stuff.” This catch-all category includes issues such as double-encoded URLs and related edge cases.

Why we care. A clean URL structure without bot traps is essential to keep your server healthy, ensure fast page loads, and prevent search engines from getting confused about your canonical URLs.
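
For example, here is a minimal robots.txt sketch that keeps crawlers out of faceted and action URLs. The parameter names below are common conventions, not universal; audit your own URLs before blocking anything:

    User-agent: *
    # Faceted navigation: block filter combinations, keep category pages crawlable
    Disallow: /*?*color=
    Disallow: /*?*size=
    Disallow: /*?*price=
    # Action parameters: URLs that trigger actions rather than change content
    Disallow: /*?*add-to-cart=
    Disallow: /*?*wishlist=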

The episode. Crawling Challenges: What the 2025 Year-End Report Tells Us.

Microsoft rolls out multi-turn search in Bing

Microsoft today rolled out multi-turn search globally in Bing. As you scroll down the search results page, a Copilot search box now dynamically appears at the bottom.

About multi-turn search. This type of search experience lets a user continue the conversation from the Bing search results page. Instead of starting over, the searcher types a follow-up question into the Copilot search box at the bottom of the results, allowing the search to build on the previous query.


What Microsoft said. Jordi Ribas, CVP, Head of Search at Microsoft, posted this news on X:

  • “After shipping in the US last year, multi-turn search in Bing is now available worldwide.
  • “Bing users don’t need to scroll up to do the next query, and the next turn will keep context when appropriate. We have seen gains in engagement and sessions per user in our online metrics, which reflect the positive user value of this approach.”

Why we care. Search engines like Google and Bing are pushing harder to move users into their AI experiences. Google is blending AI Overviews more deeply into AI Mode, even as many publishers object to how it handles their content. Bing has now followed suit, fully rolling out the Copilot search box at the bottom of search results after several months of testing.

Why most SEO failures are organizational, not technical

I’ve spent over 20 years in companies where SEO sat in different corners of the organization – sometimes as a full-time role, other times as a consultant called in to “find what’s wrong.” Across those roles, the same pattern kept showing up.

The technical fix was rarely what unlocked performance. It revealed symptoms, but it almost never explained why progress stalled.

No governance

The real constraints showed up earlier, long before anyone read my weekly SEO reports. They lived in reporting lines, decision rights, hiring choices, and in what teams were allowed to change without asking permission. 

When SEO struggled, it was usually because nobody clearly owned the CMS templates, priorities conflicted across departments, or changes were made without anyone considering how they affected discoverability.

I did not have a word for the core problem at the time, but now I do – it’s governance, usually manifested by its absence.

Two workplaces in my career had the conditions that allowed SEO to work as intended. Ownership was clear.

Release pathways were predictable. Leaders understood that visibility was something you managed deliberately, not something you reacted to when traffic dipped.

Everywhere else, metadata and schema were not the limiting factor. Organizational behavior was.

Dig deeper: How to build an SEO-forward culture in enterprise organizations

Beware of drift

Once sales pressures dominate each quarter, even technically strong sites undergo small, reasonable changes:

  • Navigation renamed by a new UX hire.
  • Wording adjusted by a new hire on the content team.
  • Templates adjusted for a marketing campaign.
  • Titles “cleaned up” by someone outside the SEO loop.

None of these changes looks dangerous in isolation – provided you know about them before they occur.

Over time, they add up. Performance slides, and nobody can point to a single release or decision where things went wrong.

This is the part of SEO most industry commentary skips. Technical fixes are tangible and teachable. Organizational friction is not. Yet that friction is where SEO outcomes are decided, usually months before any visible decline.

SEO loses power when it lives in the wrong place

I’ve seen this drift hurt rankings, with SEO taking the blame. In one workplace, leadership brought in an agency to “fix” the problem, only for it to confirm what I’d already found: a lack of governance caused the decline.

Where SEO sits on the org chart determines whether you see decisions early or discover them after launch. It dictates whether changes ship in weeks or sit in the backlog for quarters.

I have worked with SEO embedded under marketing, product, IT, and broader omnichannel teams. Each placement created a different set of constraints.

When SEO sits too low, decisions that reshape visibility ship first and get reviewed later — if they are reviewed at all.

  • Engineering adjusted components to support a new security feature. In one workplace, a new firewall meant to stop scraping also blocked our own SEO crawling tools.
  • Product reorganized navigation to “simplify” the user journey. No one asked SEO how it would affect internal PageRank.
  • Marketing “refreshed” content to match a campaign. Each change shifted page purpose, internal linking, and consistency — the exact signals search engines and AI systems use to understand what a site is about.

Dig deeper: SEO stakeholders: Align teams and prove ROI like a pro

Positioning the SEO function

Without a seat at the right table, SEO becomes a cleanup function.

When one operational unit owns SEO, the work starts to reflect that unit’s incentives.

  • Under marketing, it becomes campaign-driven and short-term.
  • Under IT, it competes with infrastructure work and release stability.
  • Under product, it gets squeezed into roadmaps that prioritize features over discoverability.

The healthiest performance I’ve seen came from environments where SEO sat close enough to leadership to see decisions early, yet broad enough to coordinate with content, engineering, analytics, UX, and legal.

In one case, I was a high-priced consultant, and every recommendation was implemented. I haven’t repeated that experience since, but it made one thing clear: VP-level endorsement was critical. That client doubled organic traffic in eight months and tripled it over three years.

Unfortunately, the in-house SEO team is just another team that might not get the chance to excel. Placement is not everything, but it is the difference between influencing the decision and fixing the outcome.

Hiring mistakes

The second pattern that keeps showing up is hiring – and it surfaces long before any technical review.

Many SEO programs fail because organizations staff strategically important roles for execution, when what they really need is judgment and influence. This isn’t a talent shortage. It’s a screening problem.

The SEO manager often wears multiple hats, with SEO as a minor one. When they don’t understand SEO requirements, they become a liability, and the C-suite rarely sees it.

Across many engagements, I watched seasoned professionals passed over for younger candidates who interviewed well, knew the tool names, and sounded confident.

HR teams defaulted to “team fit” because it was easier to assess than a candidate’s ability to handle ambiguity, challenge bad decisions, or influence work across departments.

SEO excellence depends on lived experience. Not years on a résumé, but having seen the failure modes up close:

  • Migrations that wiped out templates.
  • Restructures that deleted category pages.
  • “Small” navigation changes that collapsed internal linking.

Those experiences build judgment. Judgment is what prevents repeat mistakes. Often, that expertise is hard to put in a résumé.

Without SEO domain literacy, hiring becomes theater. But we can’t blame HR, which has to hire people for all parts of the business. Its only expertise is HR.

Governance needs to step in.

One of the most reliable ways to improve recruitment outcomes is simple: let the SEO leader control the shortlist.

Fit still matters. Competence matters first. When the person accountable for results shapes the hiring funnel, the best candidates are chosen.

SEO roles require the ability to change decisions, not just diagnose problems. That skill does not show up in a résumé keyword scan.

Dig deeper: The top 5 strategic SEO mistakes enterprises make (and how to avoid them)

When priorities pull in different directions

Every department in a large organization has legitimate goals.

  • Product wants momentum.
  • Engineering wants predictable releases.
  • Marketing wants campaign impact.
  • Legal wants risk reduction.

Each team can justify its decisions – and SEO still absorbs the cost.

I have seen simple structural improvements delayed because engineering was focused on a different initiative.

At one workplace, I was asked how much sales would increase if my changes were implemented.

I have seen content refreshed for branding reasons that weakened high-converting pages. Each decision made sense locally. Collectively, they reshaped the site in ways nobody fully anticipated.

Today, we face an added risk: AI systems now evaluate content for synthesis. When content changes materially, an LLM may stop citing us as an authority on that topic.

Strong visibility governance can prevent that.

The organizations that struggled most weren’t the ones with conflict. They were the ones that failed to make trade-offs explicit.

What are we giving up in visibility to gain speed, consistency, or safety? When that question is never asked, SEO degrades quietly.

What improved outcomes was not a tool. It was governance: shared expectations and decision rights.

When teams understood how their work affected discoverability, alignment followed naturally. SEO stopped being the team that said “no” and became the function that clarified consequences.

International SEO improves when teams stop shipping locally good changes that are globally damaging. Local SEO improves when there is a single source of location truth.

Ownership gaps

Many SEO problems trace back to ownership gaps that only become visible once performance declines.

  • Who owns the CMS templates?
  • Who defines metadata standards?
  • Who maintains structured data? Who approves content changes?

When these questions have no clear answer, decisions stall or happen inconsistently. The site evolves through convenience rather than intent.

In contrast, the healthiest organizations I worked with shared one trait: clarity.

People knew which decisions they owned and which ones required coordination. They did not rely on committees or heavy documentation because escalation paths were already understood.

When ownership is clear, decisions move. When ownership is fragmented, even straightforward SEO work becomes difficult.

Dig deeper: How to win SEO allies and influence the brand guardians

Healthy environments for SEO to succeed

Across my career, the strongest results came from environments where SEO had:

  • Early involvement in upcoming changes.
  • Predictable collaboration with engineering.
  • Visibility into product goals.
  • Clear authority over content standards.
  • Stable templates and definitions.
  • A reliable escalation path when priorities conflicted.
  • Leaders who understood visibility as a long-term asset.

These organizations were not perfect. They were coherent.

People understood why consistency mattered. SEO was not a reactive service. It was part of the infrastructure.

What leaders can do now

If you lead SEO inside a complex organization, the most effective improvements come from small, deliberate shifts in how decisions get made:

  • Place SEO where it can see and influence decisions early.
  • Let SEO leaders – not HR – shape candidate shortlists.
  • Hire for judgment and influence, not presentation.
  • Create predictable access to product, engineering, content, analytics, and legal.
  • Stabilize page purpose and structural definitions.
  • Make the impact of changes visible before they ship.

These shifts do not require new software. They require decision clarity, discipline, and follow-through.

Visibility is an organizational outcome

SEO succeeds when an organization can make and enforce consistent decisions about how it presents itself. Technical work matters, but it can’t offset structures pulling in different directions.

The strongest SEO results I’ve seen came from teams that focused less on isolated optimizations and more on creating conditions where good decisions could survive change. That’s visibility governance.

When SEO performance falters, the most durable fixes usually start inside the organization.

Dig deeper: What 15 years in enterprise SEO taught me about people, power, and progress

Google Ads API update cracks open Performance Max by channel

As part of the v23 Ads API launch, Performance Max campaigns can now be reported by channel, including Search, YouTube, Display, Discover, Gmail, Maps, and Search Partners. Previously, performance data was largely grouped into a single mixed category.

The change under the hood. Earlier API versions typically returned a MIXED value for the ad_network_type segment in Performance Max campaigns. With v23, those responses now break out into specific channel enums — a meaningful shift for reporting and optimization.

Why we care. Google Ads API v23 doesn’t just add features — it changes how advertisers understand Performance Max. The update introduces channel-level reporting, giving marketers long-requested visibility into where PMax ads actually run.

How advertisers can use it. Channel-level data is available at the campaign, asset group, and asset level, allowing teams to see how individual creatives perform across Google properties. When combined with v22 segments like ad_using_video and ad_using_product_data, advertisers can isolate results such as video performance on YouTube or Shopping ads on Search.
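
For instance, here is a minimal sketch using the google-ads Python client library (assuming v23, credentials in a google-ads.yaml file, and a placeholder customer ID) that pulls Performance Max impressions and conversions by channel:

    from google.ads.googleads.client import GoogleAdsClient

    client = GoogleAdsClient.load_from_storage("google-ads.yaml")
    ga_service = client.get_service("GoogleAdsService")

    query = """
        SELECT
          campaign.name,
          segments.ad_network_type,
          metrics.impressions,
          metrics.conversions
        FROM campaign
        WHERE campaign.advertising_channel_type = 'PERFORMANCE_MAX'
          AND segments.date >= '2025-06-01'
    """

    # Each row now carries a specific channel enum instead of the legacy MIXED value
    for batch in ga_service.search_stream(customer_id="1234567890", query=query):
        for row in batch.results:
            print(row.campaign.name, row.segments.ad_network_type,
                  row.metrics.impressions, row.metrics.conversions)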

For developers. Upgrading to v23 will surface more detailed reporting than before. Reporting systems that relied on the legacy MIXED value will need to be updated to handle the new channel enums.

What to watch:

  • Channel data is only available for dates starting June 1, 2025.
  • Asset group–level channel reporting remains API-only and won’t appear in the Google Ads UI.

Bottom line. The latest Google Ads API release quietly delivers one of the biggest Performance Max updates yet — turning a black-box campaign type into something advertisers can finally analyze by channel.

How to build a modern Google Ads targeting strategy like a pro

Search marketing is still as powerful as ever. Google parent Alphabet recently surpassed $100 billion in revenue in a single quarter, with more than half coming from search. But search alone can no longer deliver the results most businesses expect.

As Google Ads Coach Jyll Saskin Gales showed at SMX Next, real performance now comes from going beyond traditional search and using it to strengthen a broader PPC strategy.

The challenge with traditional search marketing

As search marketers, we’re great at reaching people who are actively searching for what we sell. But we often miss people who fit our ideal audience and aren’t searching yet.

The real opportunity sits at the intersection of intent and audience fit.

Take the search [vacation packages]. That query could come from a family with young kids, a honeymooning couple, or a group of retirees. The keyword is the same, but each audience needs a different message and a different offer.

Understanding targeting capabilities in Google Ads

There are two main types of targeting:

  • Content targeting shows ads in specific places.
  • Audience targeting shows ads to specific types of people.

For example, targeting [flights to Paris] is content targeting. Targeting people who are “in-market for trips to Paris” is audience targeting. Google builds in-market audiences by analyzing behavior across multiple signals, including searches, browsing activity, and location.

The three types of content targeting

  • Keyword targeting: Reach people when they search on Google, including through dynamic ad groups and Performance Max.
  • Topic targeting: Show ads alongside content related to specific topics in display and video campaigns.
  • Placement targeting: Put ads on specific websites, apps, YouTube channels, or videos where your ideal customers already spend time.

The four types of audience targeting

  • Google’s data: Prebuilt segments include detailed demographics (such as parents of toddlers vs. teens), affinity segments (interests like vegetarianism), in-market segments (people actively researching purchases), and life events (graduating or retiring). Any advertiser can use these across most campaign types.
  • Your data: Target website visitors, app users, people who engaged with your Google content (YouTube viewers or search clickers), and customer lists through Customer Match. Note that remarketing is restricted for sensitive interest categories.
  • Custom segments: Turn content targeting into audience targeting by building segments based on what people search for, their interests, and the websites or apps they use. These go by different names depending on campaign type—“custom segments” in most campaigns and “custom search terms” in video campaigns.
  • Automated targeting: This includes optimized targeting (finding people similar to your converters), audience expansion in video campaigns, audience signals and search themes in Performance Max, and lookalike segments that model new users from your seed lists.

Building your targeting strategy

To build a modern targeting strategy, you need to answer two questions:

  • How can I sell my offer with Google Ads?
  • How can I reach a specific kind of person with Google Ads?

For example, to reach Google Ads practitioners for lead gen software, you could build custom segments that target people who use the Google Ads app, visit industry sites like searchengineland.com, or search for Google Ads–specific terms such as “Performance Max” or “Smart Bidding.”

You can also layer in content targeting, like YouTube placements on industry educator channels and topic targeting around search marketing.

Strategies for sensitive interest categories

If you work in a restricted category such as legal or healthcare and can’t use custom segments or remarketing, use non-linear targeting. Ignore the offer and focus on the audience. Choose any Google data audience with potential overlap, even if it’s imperfect, and let your creative do the filtering.

Use industry-specific jargon, abbreviations, and imagery that only your target audience will recognize and value. Everyone else will scroll past.

Remember: High CPCs aren’t the enemy

Low-quality traffic is the real problem. You’re better off paying $10 per click with a 10% conversion rate (a $100 cost per conversion) than $1 per click with a 0.02% conversion rate ($5,000 per conversion).

When evaluating targeting strategies, focus on conversion rate and cost per acquisition, not just cost per click.

Search alone can’t deliver the results you’re used to

By expanding beyond traditional search keywords and using content and audience targeting, you can reach the right people and keep driving strong results.

Watch: How to build a modern targeting strategy like a pro + Live Q&A
