
HubSpot rebrands its flagship conference from Inbound to Unbound

8 April 2026 at 21:49

If you shelved your inbound strategy this past year, you can shelve your Inbound conference mugs and swag with it.

HubSpot renamed its annual Inbound conference in Boston this September to Unbound. A note on the event site explains the thinking:

  • “This evolution is our response to that reality. INBOUND is becoming UNBOUND because growth no longer fits within a single framework or function. Today, it covers marketing, sales, service, and operations across the full customer journey in an AI-driven environment. UNBOUND reflects that expanded reality and the mindset required to lead through it.” 

Inbound is outbound. HubSpot pioneered inbound marketing, which uses content and search rankings to attract visitors, then convert them on-site.

Recent Google core updates appeared to hurt the HubSpot blog, possibly because its content drifted from core topics like CRM, sales, and marketing into broader business areas like interview tips.

Inbound strategy has declined as search shifts from platforms like Google to LLMs like ChatGPT, which drive fewer clicks to websites.

From inbound to loop marketing. In 2025, HubSpot introduced its Loop marketing strategy to replace inbound. Loop focuses on educating consumers in an AI-driven world.

The conference rebrand acknowledges that no single framework works in today’s marketing landscape.

AI bot traffic surged 300%, hitting publishers hardest: Report

8 April 2026 at 20:19

AI bot activity surged 300% in 2025, with media and publishing among the most targeted sectors, according to a new Akamai report.

Why we care. AI bots are reshaping how content is discovered and consumed, shifting users from search clicks to instant answers in chat interfaces. Publishers are seeing fewer visits from organic search and often don’t get attribution in AI-generated answers. It’s also eroding ad and subscription models.

The threat is real. Publishers face two kinds of AI bots:

  • Training bots that ingest content for models.
  • Fetcher bots that extract real-time content for immediate answers. These pose the bigger risk because they capture value as it’s created.

The impact. Pageviews are declining, costs are rising (because scraping bots increase infrastructure costs by consuming server and CDN resources without generating revenue), and brand visibility is weakening.

  • AI chatbot referrals drive ~96% less traffic than traditional search
  • Users click cited sources in AI answers only ~1% of the time

What publishers are doing. Publishers are adopting nuanced controls (rather than blanket blocking AI bots), such as:

  • Monitoring and classifying bot traffic.
  • Selectively blocking or slowing malicious scrapers (e.g., tarpitting).
  • Allowing approved bots tied to licensing or partnerships.
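For the first of those steps, monitoring and classification can start from nothing more than server logs. Below is a minimal sketch, assuming a standard combined-format access log and an illustrative (not exhaustive) list of AI crawler user agents; check each vendor’s documentation for the current strings.

```python
import re
from collections import Counter

# Illustrative, non-exhaustive user-agent substrings for AI crawlers and fetchers.
# Verify current strings against each vendor's documentation.
AI_BOT_SIGNATURES = {
    "GPTBot": "training",
    "ClaudeBot": "training",
    "CCBot": "training",
    "PerplexityBot": "fetcher",
    "ChatGPT-User": "fetcher",
}

# In common/combined log format, the user agent is the last quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def classify_line(line: str) -> str:
    """Return 'training', 'fetcher', or 'other' for one access-log line."""
    match = UA_PATTERN.search(line)
    if not match:
        return "other"
    user_agent = match.group(1).lower()
    for signature, category in AI_BOT_SIGNATURES.items():
        if signature.lower() in user_agent:
            return category
    return "other"

def summarize(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            counts[classify_line(line)] += 1
    return counts

if __name__ == "__main__":
    # "access.log" is a placeholder path for your own log export.
    print(summarize("access.log"))
```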

What they’re saying. According to Akamai’s report:

  • “These bots are not just a security nuisance, they represent a profound business challenge that threatens the sustainability of quality journalism in an age dominated by zero-click searches and AI-generated content.”
  • “The publishing industry today faces an existential crisis … Many readers and visitors still value trustworthy reporting and original content. Yet, instead of clicking through search results, users now turn to AI-driven platforms like ChatGPT and Gemini for instant answers and summaries.”

What’s next? A “pay-per-crawl” model is emerging. Tools like identity verification (Know Your Agent) and platforms like TollBit aim to authenticate bots and charge for access in real time.

  • The goal is to turn scraping into a measurable, monetizable transaction instead of uncontrolled extraction.
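Since neither Know Your Agent nor pay-per-crawl is a finished standard, implementation details are still speculative. The sketch below only illustrates the shape of the decision a publisher’s edge logic would make: verify the agent, then serve, bill, or return 402 Payment Required. The license table and the `verified` flag are hypothetical placeholders, not a real provider’s API.

```python
from dataclasses import dataclass

@dataclass
class CrawlDecision:
    status: int   # HTTP status to return
    reason: str

# Hypothetical license table; in practice this would come from a
# verification/monetization provider, not a hard-coded dict.
LICENSED_AGENTS = {"example-licensed-bot": 0.002}  # price per request, USD

def decide(user_agent: str, verified: bool) -> CrawlDecision:
    """Decide how to respond to a crawler request.

    `verified` stands in for a Know-Your-Agent identity check whose
    real mechanics are not standardized yet.
    """
    agent = user_agent.lower()
    if not verified:
        return CrawlDecision(403, "unverified automated client")
    if agent in LICENSED_AGENTS:
        # A real system would meter and bill; here we just note the price.
        return CrawlDecision(200, f"served, billed {LICENSED_AGENTS[agent]:.3f} USD")
    # Verified but unlicensed: ask for payment terms before serving content.
    return CrawlDecision(402, "payment required for automated access")

if __name__ == "__main__":
    print(decide("example-licensed-bot", verified=True))
    print(decide("unknown-fetcher", verified=True))
```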

About the data. The report analyzed Akamai bot management data from July to December 2025, covering application-layer traffic across websites, apps, and APIs.

The report. SOTI Security Insight Series: Navigating the AI Bot Era (registration required)

Google tests swipeable location carousel in search ads

8 April 2026 at 19:22

Google may be making local search ads more interactive, potentially changing how advertisers showcase multiple locations and capture nearby demand.

What’s happening. Google Ads appears to be testing a new format that displays multiple business locations in a swipeable carousel within search ads, allowing users to browse options directly in the ad unit.

How it works. Instead of listing locations separately, the new format groups them into a horizontal carousel with business details like ratings and proximity, enabling users to swipe through locations without leaving the search results page.

Zoom in. Early comparisons show a shift from static, stacked location assets to a more dynamic experience, where multiple listings are consolidated into a single, scrollable unit.

Why we care. Advertisers with multiple locations could gain more visibility within a single ad, while users get a quicker way to compare nearby options.

Between the lines. This format could increase engagement with location-based ads, but may also intensify competition within the carousel itself as businesses vie for attention.

What to watch. Whether the feature rolls out more broadly and how it impacts click-through rates and local ad performance.

First spotted. Adsquire founder Anthony Higman spotted this ad type and shared it on LinkedIn.

Google launches developer hub for ads and measurement tools

8 April 2026 at 19:02

Google is consolidating its advertising and measurement resources into a single destination, aiming to make it easier for developers and technical marketers to build, automate and scale campaigns.

What’s happening. Google has introduced a new Advertising and Measurement Developers Hub, a centralized site designed to help users access tools, documentation and support across its ad ecosystem.

The hub brings together resources for products like the Google Ads API, Google Analytics and publisher tools such as AdMob and Google Ad Manager, all organized into categories including advertising, tagging and measurement.

How it works. The site offers a streamlined homepage with quick access to documentation, blog updates and community channels, along with dedicated sections to explore products, connect with support and engage with Google’s developer relations team.

Why we care. Google is making it easier to access and implement advanced tools that power automation, tracking and campaign optimization. This can help teams work more efficiently, especially those relying on APIs, tagging and data integrations. As advertising becomes more technical and AI-driven, having a centralized hub lowers the barrier to building more sophisticated, scalable setups.

The big picture. As advertising becomes more automated and API-driven, Google is investing in infrastructure that supports developers and technical users who manage complex integrations across platforms.

Zoom in. New features include a “meet the team” section, a centralized support page linking to Discord and GitHub resources, and a media hub featuring content like Ads DevCast.

What to watch. Whether this hub becomes the primary entry point for developers working across Google’s ad products — and how it evolves with new AI and measurement tools.

Bottom line. Google is simplifying access to its ad tech ecosystem, betting that better developer support will drive more innovation and adoption.

Dig deeper. Introducing the Google Advertising and Measurement Developers Hub!

Audit your agency: 6 questions to find a true growth partner

8 April 2026 at 19:00

Most agencies present prospective clients with an account audit as part of their sales process. The purpose is twofold: 

  • To provide immediate value (usually without strings attached).
  • To demonstrate that they know their stuff.

But how often do brand marketers turn the tables and audit their agencies in their RFP?

I’m the head of performance marketing at a marketing agency, so I’m clearly writing from a biased perspective. However, over my decade-plus in the industry, I’ve seen too many brands settle for “good enough” because they didn’t know which questions would reveal the cracks in a potential partner’s strategy and approach.

If I were a brand looking for a true growth partner, here are the specific questions I’d ask to separate the top performers from the rest.

1. What are your key services, and what percentage of your clients utilize each?

A lot of agencies claim to be “full service,” but rarely are they “full excellence.” I’d be looking at where an agency truly spends its time versus where it’s just trying to upsell me.

It’s less about the channels in question (although if, say, LinkedIn is a key growth driver for your brand, they’d better demonstrate proficiency there), and more about how their strengths align with your needs.

If an agency claims expertise in SEO, creative strategy, and paid media, but 90% of its client base only uses it for paid search, that’s a red flag. You want a partner whose core competencies align with your primary needs.

If you need high-volume creative testing, you want an agency where 80%+ of clients use its creative production frameworks, not one that treats creative as an add-on service.

Dig deeper: Confessions of a PPC-only agency: Why we finally embraced SEO

2. How are you approaching AI-driven account optimization and platform automation?

I miss the days when knowledge of the manual controls at your disposal could set you apart as a high-performing marketer. But those days have been gone for a while.

In 2026, there’s a real danger of over-optimization with the controls we have left. This can reset algorithmic learnings and prevent them from fine-tuning in service of your goals. Agency teams that strike this balance most certainly have a healthier approach than those who either blindly trust algorithms or can’t help tinkering excessively.

One control you can and must be diligent about using is first-party data for enhanced conversions and offline conversion tracking. Part of the job of a great marketer is training the algorithms on which leads and which conversions to target, and first-party data is a huge lever to pull in that regard.
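To make that concrete, enhanced conversions and offline imports generally expect customer identifiers to be normalized and SHA-256 hashed before upload. Here is a minimal sketch of that preparation step; the column names and CSV layout are illustrative, not any platform’s exact import spec.

```python
import csv
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase, trim, then SHA-256 hash an email address."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def build_upload_rows(crm_rows):
    """crm_rows: iterable of dicts with 'email', 'click_id', 'deal_value'."""
    for row in crm_rows:
        yield {
            "hashed_email": normalize_and_hash(row["email"]),
            "click_id": row["click_id"],          # e.g. a stored ad-click ID
            "conversion_value": row["deal_value"],
        }

if __name__ == "__main__":
    sample = [{"email": " Jane.Doe@Example.com ", "click_id": "abc123", "deal_value": 4800}]
    with open("offline_conversions.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["hashed_email", "click_id", "conversion_value"])
        writer.writeheader()
        writer.writerows(build_upload_rows(sample))
```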

3. What is your reporting process and what KPIs do you focus on for the majority of your clients?

Don’t just ask for a sample report. Anyone can make a PDF look pretty. You need to understand their philosophy on data.

You’re looking for an agency that’s willing to move upstream. If the majority of their clients are measuring success on clicks, traffic, or even MQLs, run the other way.

A performance-driven agency should be obsessed with revenue, ROAS, and pipeline velocity. Ask them how they handle attribution. If they rely solely on in-platform metrics, which often over-claim credit, they aren’t looking at the full picture. 

Dig deeper: What successful brand-agency partnerships look like in 2026

4. What’s the average industry tenure of the team on my account?

This is actually a pretty common question and has been for years. Too many marketers know the pain of integrating rotating sets of agency teams because the agency can’t hold onto top employees, and you should be evaluating the answer from this perspective.

There’s another factor to consider. Generally speaking, the more experienced a marketing team is, the more effectively it uses AI tools.

While junior marketers may be more avid proponents of AI and quicker to adopt it, they’re also far more likely to use it for things like creative ideation and strategy, both areas where high-quality human thought is a true differentiator.

For this answer specifically, remember that you have some great research tools like Glassdoor that you can and should access. Employee tenure is one thing, but a Glassdoor profile with a bunch of red flags is an indicator that the agency might struggle to keep the talent it really wants to retain.

5. How is your team using AI on client accounts?

Again, you’re looking for a balance here. Agency teams that don’t use AI at all are almost certainly burning resources on manual tasks, but agency teams that overuse it to replace perspective, critical thinking, and creativity are commoditizing their own client service.

Two follow-up questions to ask:

  • What is your governance structure for AI use?
  • What’s your process for QAing AI output?

You’re looking for firm answers and redundant layers for each of these questions — at the very least, someone relatively senior should approve any output before it goes live.

Dig deeper: Why PPC teams are becoming data teams

6. When you take over an account, what are the first things you do to save budget without affecting growth?

This is the ultimate litmus test for technical proficiency. A great performance marketer knows where the ad platforms hide the waste buttons. If I were a brand marketer, I’d want to hear about:

  • Any harmful default settings that need to be turned off.
  • What inputs are driving wasted spend (audiences, networks, keywords, etc.).
  • A plan to prioritize budget around what’s driving business outcomes.

If an agency can’t rattle off these specific checks, they’re likely missing the “low-hanging fruit” of budget efficiency. Fixing some of these takes seconds, but missing them costs thousands.
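One example of such a check, sketched below, is scanning an exported search terms report for terms that spend without converting, which are obvious negative-keyword candidates. The column names are assumptions about a generic export rather than any platform’s exact schema.

```python
import csv

# Column names are assumptions about a generic search terms export.
SPEND_COL, CONV_COL, TERM_COL = "cost", "conversions", "search_term"

def wasted_spend_candidates(report_path: str, min_cost: float = 50.0):
    """Yield search terms that spent at least `min_cost` with zero conversions."""
    with open(report_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            cost = float(row.get(SPEND_COL, 0) or 0)
            conversions = float(row.get(CONV_COL, 0) or 0)
            if cost >= min_cost and conversions == 0:
                yield row[TERM_COL], cost

if __name__ == "__main__":
    # "search_terms.csv" is a placeholder for an exported report.
    for term, cost in sorted(wasted_spend_candidates("search_terms.csv"),
                             key=lambda pair: pair[1], reverse=True):
        print(f"{cost:>10.2f}  {term}")
```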

What separates a true growth partner from the rest

Remember: when you’re choosing an agency partner, it’s the job of each agency to sound as good as it possibly can, but what an agency considers to be a great answer might not be a great fit for your brand.

By focusing on utilization rates of services, strategic application of AI, and approaches to budget efficiency, you’ll find a partner capable of driving actual performance, not just spending your budget.

Dig deeper: How to find your next PPC agency: 12 top tips

Google rolls out onboarding guide for Universal Commerce Protocol

8 April 2026 at 18:44

Google is laying the groundwork for “agentic commerce,” where users can complete purchases directly inside AI-driven search experiences.

What’s happening. Google has published a new onboarding guide for its Universal Commerce Protocol (UCP) in Merchant Center, outlining how merchants can integrate with the system and enable checkout directly from product listings in AI Mode and Gemini.

The big picture. As AI search evolves from discovery to transaction, Google is pushing to keep users within its ecosystem by embedding shopping and checkout into conversational experiences.

How it works. Merchants must first complete a technical integration, then submit an interest form and wait for approval before gaining access to onboarding tools in Google Merchant Center, including a sandbox environment to test integration, identity linking and checkout APIs.

Why we care. Google is moving search closer to transaction, meaning users may complete purchases directly inside AI experiences instead of visiting your website. This shifts where conversions happen and could change how performance is measured, attributed and optimized. Early adopters of the Universal Commerce Protocol may gain a competitive advantage as shopping becomes more integrated into tools like Gemini.

Zoom in. The protocol acts as an open standard for connecting product data, user identity and payment flows, enabling seamless purchases without redirecting users to external sites.

What to watch: The rollout is gradual and currently limited to the U.S., with a dedicated UCP integration tab expected to appear in Merchant Center accounts over the coming months.

Bottom line. If widely adopted, the Universal Commerce Protocol could redefine how online shopping works — turning search into a full-funnel, AI-powered checkout experience.

Dig deeper. How to onboard to the Universal Commerce Protocol in Merchant Center

Meta simplifies Pixel setup with official Google Tag Manager template

8 April 2026 at 18:29

Meta Platforms is making it easier for advertisers to implement tracking, reducing technical friction for teams running campaigns across platforms.

What’s happening. Meta released an official Pixel template inside Google Tag Manager, replacing the need for third-party or community-built workarounds.

How it works. The new template allows advertisers to reuse their existing GA4 dataLayer, meaning events already configured for Google Analytics 4 can be leveraged without rebuilding tracking from scratch. It also automatically maps enhanced e-commerce events such as purchases, add-to-cart actions, content views and checkout initiations, eliminating the need for duplicate tagging.

Why we care. This reduces implementation time, lowers the risk of tracking errors and ensures consistency across platforms, especially for advertisers managing both Google and Meta campaigns.

What to watch. Whether this leads to broader adoption of Meta Pixel tracking among advertisers who previously avoided complex setups, and if similar cross-platform integrations follow.

Bottom line. Meta is removing one of the biggest headaches in ad tracking — making it faster and easier to get reliable data across platforms.

First seen. Paid media expert Thomas Eccel spotted the update and shared it on LinkedIn.

Why product feeds need an organic strategy for AI search

8 April 2026 at 18:00

Ask most ecommerce brands who owns their product feed, and the answer is almost always the same: the paid media team.

Maybe a feed management tool sits under PPC. Maybe the shopping team built the feed years ago, and nobody’s touched the titles since. Either way, SEO rarely has a seat at the table, and it’s often forgotten as part of the broader feed management strategy.

Whether you’re worried about AI search or traditional clicks, you’re missing out on opportunities by excluding SEO from your feed management strategy.

AI shopping results are grounded in Google Shopping data

Up to 83% of ChatGPT carousel products match Google Shopping’s organic results, according to a recent Peec AI study analyzing more than 43,000 listings. And 60% of those matches came from Shopping positions 1-10.

Data shows how ChatGPT’s product carousel matches Google Shopping’s organic results, with Google dominating over Bing.

On Google’s side, the Shopping Graph now contains more than 50 billion product listings and feeds directly into AI Overviews, AI Mode, and Gemini. AI Overviews appear in roughly 14% of shopping queries, up from about 2% in late 2024. Like many other things we’ve discovered about AI search, the generative results are informed by traditional SERPs.

SEO needs to be the strategic quarterback for brand authority. This is a highly valuable opportunity to work cross-channel toward a common goal of improving visibility across search surfaces. It really requires SEOs, commerce, and paid media teams to get in the same room.


The case for a dedicated organic feed

Typically, brands run a single product feed optimized for Google paid shopping campaigns. Titles are written for bid relevance, descriptions are built for Quality Score, and the feed exists to win auctions, with less consideration for user search behaviors.

As user behavior shifts, search surfaces favor stronger semantic alignment between queries and product data. A title stuffed with paid-friendly modifiers or branded terms isn’t the same as a title that mirrors how someone conversationally searches for a product.

We tested this with a large ecommerce brand. Our agency’s AI SEO team partnered with the commerce team to launch a dedicated product feed for free organic listings, with titles and descriptions optimized specifically for organic visibility, rather than replicating what was already running in the paid feed.

After the organic feed was pushed live:

  • Organic listing CTR increased 10% month over month, alongside a 4% lift in purchasing rate.
  • A product-level test saw a 92% increase in revenue for free listings, with visibility up 83%, and add-to-cart up 14%.
  • The organic optimization changes alone drove 35,000 impressions at a 1.4% CTR, 55% higher than the CTR seen in paid for the same time period.

Rather than replacing our paid feed strategy, we recognized that organic and paid shopping solve different problems and have different needs that require optimizing accordingly.

Organic feed titles should reflect how your customers actually search, not how your bidding strategy is structured.

Dig deeper: How AI-driven shopping discovery changes product page optimization

What to prioritize in an organic feed strategy

Not every feed attribute carries equal weight. If you’re building a dedicated organic feed or just auditing your existing feed for gaps, here’s where you could start.

Titles are the highest-impact lever

Google’s algorithm heavily favors feed titles when matching products to queries, and its own documentation emphasizes including important attributes to “better match search queries and drive performance lift.” Consider how a customer might describe what they’re looking for in a conversational way, and how that aligns with product attributes.

Google’s Merchant Center documentation reinforces the point that your feed strategy should map to how your customer actually shops to help improve their search journey.

Global Trade Item Numbers (GTINs) are non-negotiable

Google’s GTIN documentation makes clear that products with correct GTINs receive significantly more visibility. Industry data has consistently shown that properly matched products can drive up to 40% more clicks. They’re also the primary signal for aggregating product reviews across sources.
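Because malformed GTINs quietly cost visibility, validating them is worth automating. The sketch below checks the standard GS1 check digit for GTIN-8, -12, -13, and -14 values; it confirms the number is structurally valid, not that it is actually registered to your product.

```python
def is_valid_gtin(gtin: str) -> bool:
    """Validate the GS1 check digit for GTIN-8, -12, -13, or -14."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(ch) for ch in gtin]
    body, check_digit = digits[:-1], digits[-1]
    # Weights 3, 1, 3, 1, ... starting from the digit nearest the check digit.
    total = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check_digit

if __name__ == "__main__":
    print(is_valid_gtin("4006381333931"))  # True: a commonly cited valid EAN-13
    print(is_valid_gtin("4006381333932"))  # False: wrong check digit
```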

Don’t overlook images

They’re still the most common source of Merchant Center disapprovals. Products with both standard and lifestyle images typically see significantly higher engagement. 

If budget or bandwidth has kept better product images on the back burner, Google’s Product Studio can help handle some of the editing, so you can test and improve creative at scale without a full reshoot. It’s also a way for SEO and creative teams to collaborate on feed-specific assets and testing.

Optimize key product attributes: product_highlight and product_detail 

  • product_highlight lets you add scannable benefit statements that appear in expanded Shopping views. For instance, “water-resistant for light rain commutes” is doing more work than “high-quality material” for both the shopper and the AI. 
  • product_detail provides structured specifications that power Google’s faceted filters in organic product grids.

The same semantic work SEOs are doing to optimize product detail pages (PDPs) for conversational search — like defining ideal buyers, naming use cases, and articulating compatibility — should inform feed attributes. 

Product and content teams already understand what drives someone to buy. That context should be in the feed, not just on a brand’s PDPs.
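To make the distinction concrete, here is a rough sketch of how those two attributes could be populated for one item. The dictionary layout is illustrative only and should be mapped onto your real feed format (XML, TSV, or the Content API), not copied verbatim.

```python
# Illustrative structure only; map these onto your actual feed format.
feed_item = {
    "id": "JACKET-001",
    "title": "Men's waterproof cycling rain jacket with reflective trim",
    "product_highlight": [
        "Water-resistant for light rain commutes",
        "Reflective trim for low-light visibility",
        "Packs into its own chest pocket",
    ],
    "product_detail": [
        {"section_name": "Materials", "attribute_name": "Shell fabric", "attribute_value": "Recycled ripstop nylon"},
        {"section_name": "Fit", "attribute_name": "Cut", "attribute_value": "Relaxed, layers over a hoodie"},
    ],
}

if __name__ == "__main__":
    for highlight in feed_item["product_highlight"]:
        print("-", highlight)
```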

Dig deeper: How to make ecommerce product pages work in an AI-first world

Your feed is also your agentic commerce foundation

Here’s what makes this investment compound: the feed optimization work done today for organic shopping visibility will also help build brand readiness for agentic commerce standards and applications.

Google’s Universal Commerce Protocol, announced in January, is a framework that enables AI agents to discover products, build carts, and complete transactions directly inside AI Mode and Gemini. The shopper may never land on the brand website to make a purchase. UCP isn’t a replacement for Google Merchant Center, because it’s built directly on top of GMC data.

Feeds are how products enter the Shopping Graph. The Shopping Graph is the dataset AI agents query when processing a shopping request. The new native_commerce attribute added to feeds is what signals that a product is eligible for the UCP-powered “Buy” button in traditional and AI-driven Google services.

Google has also announced the eventual rollout of several new Merchant Center attributes designed specifically for conversational commerce: 

  • Product FAQs.
  • Use cases.
  • Compatible accessories.
  • Product substitutes. 

These are additions to an existing GMC feed that give AI agents the contextual understanding they need to match products to natural-language queries like “what’s a good waterproof jacket for bike commuting?” These new conversational attributes are rolling out to a small group of retailers first.

This is where feed data and on-page content need to stay tightly aligned. Search surfaces cross-reference a brand’s feed against:

  • Structured data. 
  • PDP content.
  • Other sources to validate findings. 

When those layers contradict each other, trust erodes at the domain level. 

Dig deeper: 7 organic content investments that drive ecommerce ROI

Building a cross-channel strategy for AI search

Product feed strategy and optimization is an opportunity for genuine cross-team collaboration to test, execute, and measure visibility. A holistic approach to managing product details across every surface will benefit brands in both traditional and AI-driven search.

  • SEOs bring the keyword intelligence, semantic understanding, and knowledge of how AI systems match queries to content. 
  • Commerce and marketplace teams own the product data, product information management, and relationships with retailers. 
  • Paid teams have the feed infrastructure, the tools, and years of experience managing feed health at scale.

These teams must work together to coordinate their insights and effectively establish an AI SEO operating system. The product feed sits at that intersection as it’s an owned asset managed by commerce infrastructure that directly feeds AI-powered visibility.

The first step is to pull a current feed and compare organic titles to paid titles. The second step is getting the right people in the room to build something better. SEO is most successful when more channels align toward the same goal: better brand visibility.
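For that first step, a simple diff of titles by product ID is often enough to show how far the two feeds have drifted. The sketch below assumes both feeds are exported as CSVs with `id` and `title` columns, which is an assumption about your export format rather than a requirement.

```python
import csv

def load_titles(path: str) -> dict:
    """Map product id -> title from a CSV export with 'id' and 'title' columns (assumed layout)."""
    with open(path, newline="", encoding="utf-8") as handle:
        return {row["id"]: row["title"] for row in csv.DictReader(handle)}

def compare_feeds(paid_path: str, organic_path: str):
    """Yield products whose paid and organic titles differ."""
    paid, organic = load_titles(paid_path), load_titles(organic_path)
    for product_id in sorted(paid.keys() & organic.keys()):
        if paid[product_id] != organic[product_id]:
            yield product_id, paid[product_id], organic[product_id]

if __name__ == "__main__":
    # File names are placeholders for your own exports.
    for product_id, paid_title, organic_title in compare_feeds("paid_feed.csv", "organic_feed.csv"):
        print(f"{product_id}\n  paid:    {paid_title}\n  organic: {organic_title}")
```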

Google March 2026 core update rollout is now complete

8 April 2026 at 17:35

The March 2026 core update finished rolling out today after 12 days and 4 hours, completing Google’s first broad ranking update of the year.

What happened. Google confirmed the rollout ended at 06:12 PDT, per its Search Status Dashboard. The update began March 27 and impacted search rankings globally.

  • Google previously said this was “a regular update designed to better surface relevant, satisfying content for searchers from all types of sites.”

The timeline. Google originally estimated the March 2026 core update would take up to two weeks to complete.

  • Started: March 27.
  • Completed: April 8.
  • Total rollout: 12 days, 4 hours

The context. This was the first core update of 2026. It followed the March 2026 spam update and the February 2026 Discover update.

  • Core updates introduce broad changes to ranking systems and typically drive noticeable volatility across search results.

What to do if you were impacted. Google didn’t issue any new guidance for the March 2026 core update. Its standing advice remains:

  • Ranking drops don’t necessarily mean something is wrong.
  • Recovery often comes with future updates, not immediate fixes.
  • Focus on helpful, reliable, people-first content.

Google continues to point site owners to its core update and helpful content guidance.

Why we care. Now that the rollout is complete, you can assess impact with more confidence. Analyze ranking and traffic changes, identify winners and losers, and adjust your content strategy based on what the update appears to reward.

Previous core updates. See our earlier coverage of recent core updates.

How AI search defines market relevance beyond hreflang

8 April 2026 at 17:00

Hreflang has long been a core mechanism in international SEO, directing users to the right regional version of a page. That approach worked when search engines primarily returned static results. 

AI-driven synthesis changes that. Instead of returning lists of links, AI systems construct answers. They neither need nor want your perfectly implemented hreflang tags. They aren’t looking for instructions on which page to serve. They’re trying to determine which answer is best supported across sources.

Your content has to hold up when the model compares it against everything it’s seen, regardless of language or origin. If it doesn’t, it won’t be used.

What hreflang does and doesn’t do

We need to address a fundamental misunderstanding of the hreflang attribute. Hreflang has always been a switcher, not a booster. 

If your brand lacked organic authority in Australia before implementing the tag, adding the en-au attribute wouldn’t magically improve your rankings in Sydney. Its only function was to ensure that if you did rank, the user saw the correct regional version.

In AI search, this “you vs. you” dynamic has become a liability. While traditional search still relies on these tags to organize traffic, AI models often bypass them during the synthesis phase. If a brand’s U.S.-based .com site possesses decades of authority, the AI’s internal logic may determine that the U.S. site is the true source of information. 

Consequently, even when a user in Berlin searches in German, the AI may synthesize an answer based on the U.S. data and simply translate it on the fly, effectively ghosting the brand’s localized German site despite perfectly implemented hreflang tags.

The double-blind: Query fan-out vs. entity compression

AI models don’t just answer the query you see. They expand it into dozens of hidden checks, comparing sources, validating claims, and pulling in information across languages to see what aligns.

ChatGPT often translates and evaluates queries in English even when the user searches in another language, research from Peec AI shows. This reinforces how query fan-out operates across markets. If your local entity doesn’t hold up in that broader comparison, it doesn’t get used.

A second issue happens before retrieval even begins. During training, LLMs compress what they see so it can be stored and reused at scale.

When multiple regional pages look too similar, they don’t stay separate. They’re folded into a single representation, also known as canonical tokenization.

Local details — phone numbers, office locations, and market-specific references — don’t always survive that process. They’re treated as minor variations rather than meaningful signals.

By the time the model is asked a question, your local site is often no longer competing. In many cases, it’s already been absorbed into the global one.

Dig deeper: What the ‘Global Spanish’ problem means for AI search visibility

7 ways to build AI-first relevancy

To compete globally, expand your strategy to include signals that resonate with AI’s data supply chain.

1. Build locally aligned infrastructure

Meta tags tell systems what you intend. Infrastructure often tells them what to believe. Datasets like Common Crawl use geographic heuristics, IP location, and domain structure to make sense of content at scale. That happens early in the process, before anything resembling ranking.

This means your content may already be placed in a market before the model ever evaluates it. If your regional domains aren’t supported by local infrastructure or delivery, you’re sending mixed signals. Those are hard to recover from later.

2. Break the compression threshold

To break the semantic gravity that leads to entity compression, you need what I would call a clear “knowledge delta.” Most global teams fail here because they think localization means translation. It doesn’t. 

There’s no universally accepted magic number for unique content. From a semantic vector perspective, I speculate that at least 20% of the content on a local page must be unique to keep the model from collapsing your local identity into your global one.

To address this, front-load market-specific data, such as regional shipping logistics, local tax identifiers, and native case studies, into the first 30% of your page. This lets you provide the mathematical proof the model needs to cite your local URL as a distinct authority.
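There is no standard way to measure that divergence, so treat any number as a rough proxy. As one assumption-laden starting point, the sketch below computes the share of a local page’s word tokens that do not appear on the global page, which at least lets you track whether localization is adding unique material over time.

```python
import re

def tokens(text: str) -> set:
    """Lowercased word tokens; a crude stand-in for a real semantic comparison."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def unique_share(local_text: str, global_text: str) -> float:
    """Fraction of the local page's tokens that don't appear on the global page."""
    local, global_ = tokens(local_text), tokens(global_text)
    if not local:
        return 0.0
    return len(local - global_) / len(local)

if __name__ == "__main__":
    global_page = "Fast shipping on all orders. Premium waterproof jackets for commuters."
    local_page = ("Fast shipping on all orders. Premium waterproof jackets for commuters. "
                  "Prices incl. GST, ABN 00 000 000 000, dispatched from our Sydney warehouse.")
    print(f"Unique local share: {unique_share(local_page, global_page):.0%}")
```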

3. Anchor your entity in semantic neighborhoods

AI models interpret market relevance by looking at the company you keep in the text. Incorporate geographic anchoring by referencing local neighborhoods, regional landmarks, or specific transit hubs (e.g., “located near the Alexanderplatz station” in Berlin). 

These co-occurrence signals pull your brand’s vector embedding toward the specific local coordinate in the model’s training data, creating a geographic fence that helps the AI disambiguate your local office from your global headquarters.

Dig deeper: How to craft an international SEO approach that balances tech, translation and trust

4. Prioritize local link sources

The origin of your links is a primary signal of market authority. During the fan-out phase, AI models look for regional consensus.

This is one of the areas where traditional link building logic starts to break. It’s not just about getting links. Consider where those links originate, along with their authority and contextual relevance.

If your Australian page has backlinks primarily from U.S.-based websites, the model has little evidence that you actually belong in or are relevant to the Australian market. Local sources, including high local trust and location-specific news outlets, change that. Without them, you’re often treated more like a visitor than a participant.

5. Incorporate linguistic and authoritative nuances

LLMs pick up on regional language nuances far more than most teams expect. This is where simple translation starts to break down. Unique market- or colloquial-specific terms, formatting, and even small legal references signal whether something actually belongs in a market.

Use the terms people in that market actually use — things like “incl. GST,” local identifiers like ABN, and even spelling differences. Without these signals, the page may be technically and linguistically correct, but it won’t register as truly local.

6. Capture the invisible long-tail

As mentioned, LLMs often generate multiple incremental queries during their research phase. These invisible queries may focus on local friction points, such as “How does this product comply with [name of local regulation]?” 

By incorporating local FAQ clusters that address these nuances, you ensure your local URL survives the fan-out check, making your global .com too generic to be cited in a localized answer.

Dig deeper: Why AI optimization is just long-tail SEO done right

7. Run AI citation audits

Expand your SEO reporting beyond traditional rank tracking. Incorporate AI citation audits by using a local VPN to query the most popular generative engines in your target markets. 

If the AI consistently pulls from your global .com domain for a local query, it’s a clear signal that your local domain lacks the necessary evidence chain. Identify where this market drift is occurring and reinforce those specific pages with more unique local data and infrastructure signals.
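Querying each engine is engine-specific, so the sketch below starts after collection: given audit records you have already gathered, it tallies per market whether the cited URLs include your local domain or only the global .com. The record structure and domain mapping are illustrative assumptions, not a fixed schema.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical audit records: one per (market, query) with the URLs the AI cited.
audit_results = [
    {"market": "de", "cited_urls": ["https://www.example.com/en/product", "https://example.de/produkt"]},
    {"market": "de", "cited_urls": ["https://www.example.com/en/product"]},
    {"market": "au", "cited_urls": ["https://example.com.au/product"]},
]

LOCAL_DOMAINS = {"de": "example.de", "au": "example.com.au"}  # illustrative mapping

def classify_citations(results):
    """Count, per market, whether any cited URL belongs to the local domain."""
    counts = Counter()
    for record in results:
        market = record["market"]
        hosts = {urlparse(url).hostname or "" for url in record["cited_urls"]}
        local = any(host.endswith(LOCAL_DOMAINS[market]) for host in hosts)
        counts[(market, "local" if local else "global_only")] += 1
    return counts

if __name__ == "__main__":
    for (market, bucket), count in sorted(classify_citations(audit_results).items()):
        print(market, bucket, count)
```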

The new international standard

Hreflang and traditional technical signals still shape how search engines organize and deliver content, but they don’t determine what AI systems use.

AI models evaluate which sources to use based on evidence of local relevance. Without a distinct presence in each market, they default to the version of your brand they trust most, which often isn’t the one you intended.

Translation alone doesn’t establish that presence. Your content needs to demonstrate that it belongs in the market it’s meant to serve.

Dig deeper: Multilingual and international SEO: 5 mistakes to watch out for

Why audience engineering is replacing manual targeting in paid media

8 April 2026 at 16:00

You’re facing a major shift as familiar manual targeting levers disappear in favor of AI-driven discovery. Platforms’ automated tools are collapsing campaign types, obscuring data, and replacing manual targeting with intent-based algorithms.

This is a shift from selection to prediction. You won’t adapt by holding onto old controls — you’ll adapt by learning to engineer the inputs that replace them. Here’s how to make sure you have the tools to stay on top.

The end of manual targeting as you knew it

You previously relied on granular keyword lists, demographic filters, and custom exclusions to target ideal customers. You told platforms exactly who to target and paid to access that inventory.

Now, platforms have eliminated those controls:

  • Google collapsed campaign types into Performance Max, removing keyword-level targeting in favor of “asset groups” and “audience signals” — suggestions, not directives.
  • Meta launched Advantage+, automating demographic and interest targeting so your role shifts from selector to signal provider.
  • Microsoft extended the same model to Bing, confirming this is an industry-wide shift, not a single-platform experiment.

Targeting didn’t disappear — it moved inside the platform’s black box. The algorithm now targets based on data within its own ecosystem.

Platforms are clear: manual segmentation is gone, and automation is here to stay.

The rise of audience engineering

If targeting is now internal to the algorithm, your role changes. It’s less about selecting your audience and more about engineering it.

From targeting to teaching

The distinction is critical. Traditional targeting focused on selecting audiences. Audience engineering focuses on instructing the algorithm through high-quality conversion signals, precise creative, and first-party data. It teaches AI systems who to find and what to optimize for.

Here’s how this changes your workflow:

In the past, to target CFOs, you might use job title filters and negative keyword lists. With audience engineering, you instead upload high-quality data (e.g., “deal closed” signals) to define a high-value prospect. You also tailor creative to CFO-specific pain points, teaching the AI to reach people who engage with that message.

The new competitive discipline

If you fight the algorithm and resist this shift, you’ll struggle. If you embrace it, you’ll succeed by optimizing conversion signals, refining creative, and strengthening your data infrastructure.

As manual levers disappear, the gap between strong and average performance comes down to signal quality. Audience engineering is what closes that gap.

The three levers that now drive targeting

You must optimize three critical inputs the AI uses to segment for you:

1. Conversion signal quality

Tell the algorithm what matters. If you optimize for cheap, top-of-funnel leads, it will get efficient at finding people who fill out forms but never buy — that’s not what you want.

Focus on meaningful business outcomes, not top-of-funnel metrics. Integrate Offline Conversion Imports (OCI) and Conversions API (CAPI) to feed data on final sales, not just initial clicks. With value-based bidding, you teach the algorithm to prioritize users who drive revenue — effectively targeting high-value customers without using demographic checkboxes.
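As a simple illustration of value-based signals, the sketch below maps CRM stages to conversion values so closed revenue, not form fills, is what the algorithm learns to chase. The stage names and values are placeholders; derive yours from real close rates and deal sizes.

```python
# Illustrative stage values; derive yours from actual close rates and deal sizes.
STAGE_VALUES = {
    "form_fill": 5,
    "qualified_lead": 50,
    "opportunity": 400,
    "deal_closed": 4000,
}

def conversion_value(stage: str, deal_amount: float | None = None) -> float:
    """Value to report for a conversion; use the real amount once a deal closes."""
    if stage == "deal_closed" and deal_amount is not None:
        return deal_amount
    return float(STAGE_VALUES.get(stage, 0))

if __name__ == "__main__":
    print(conversion_value("qualified_lead"))        # 50.0
    print(conversion_value("deal_closed", 12500.0))  # 12500.0
```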

2. Creative as a targeting mechanism

In a world without demographic filters, your creative becomes your primary targeting mechanism. The specificity of your message does the filtering.

If your creative speaks broadly, the AI shows it broadly. If it speaks to a niche pain point, the AI finds users who resonate with that pain point.

Build ad sets around motivations, not product categories.

3. First-party data as competitive moat

Your customer lists, CRM data, and engagement signals are the foundation the algorithm learns from. 

This data replaces third-party signals and becomes a critical competitive advantage. You’re giving the algorithm a cheat sheet to identify your best customers.

How this plays out in real campaigns

The shift to AI-driven targeting isn’t theoretical. As an agency managing over $215 million in annual paid media spend, we’ve tested this across platforms and validated it with performance data. Here’s what we’ve learned:

Advantage+ Audiences in practice

A long-time client had a well-established view of its target audience based on years of campaign performance and customer data. Campaigns used manual age caps and layered targeting to protect efficiency.

When we transitioned those campaigns to Advantage+ Audiences, manual exclusions were removed, allowing the algorithm to optimize based purely on conversion signals and creative performance.

During testing, Meta identified and scaled into an older demographic that had previously received minimal budget. This segment delivered a 37% higher CTR than the campaign average and drove stronger downstream conversion performance.

As spend shifted into this audience, conversions came at a lower cost per result while total revenue increased. Broader targeting improved return on ad spend (ROAS) compared to the prior manual strategy.

This reflects a broader trend with Advantage+ Audiences. Paired with strong conversion goals, accurate data signals, and high-quality creative, it consistently identifies high-value segments that manual targeting restricts or misses.

Microsoft PMax Placement Transparency and Advanced Audience Signal Targeting

For another client, we implemented a Microsoft PMax test, using advanced audience targeting and first-party data to reach high-intent prospects across Bing, Outlook, MSN, and the Microsoft Audience Network.

With in-platform placement insights, we monitored performance closely and reacted quickly early on. The campaign drove a 10% increase in conversion rate, a 14% decrease in cost per lead, and a 4x increase in form fills in the first month — followed by another 2x the next month.

This reinforced a key principle: automation performs best with strategic human oversight. While we fed strong audience signals and conversion data, performance drifted as the system expanded into less efficient placements. With Microsoft support and ongoing monitoring, we excluded underperforming placements and refined targeting without over-constraining the campaign.

By letting PMax handle scale and optimization — while maintaining disciplined oversight and guardrails — we preserved efficiency and improved overall performance.

The risks nobody is talking enough about 

Automated targeting is powerful, but not benevolent. It optimizes for the math you give it. Here are pitfalls to avoid.

Garbage in, garbage out

This is the most important risk. Poorly defined conversion events, incomplete data pipelines, or low-quality first-party data limit performance and train the algorithm on the wrong outcomes.

If you feed it noise, it will scale that noise — wasting budget on low-quality traffic.

If your goal is too broad or lacks strong quality signals, the algorithm will maximize volume, even when that volume doesn’t drive real business value.

The self-reinforcement trap

If your seed data is biased, the AI will keep optimizing toward that bias — potentially missing valuable adjacent audiences. This “sampling bias” in training data is a real, underappreciated risk in automated systems.

Automation without oversight

Platforms have a financial incentive to push broader automation. Without your oversight and willingness to intervene, campaigns can drift from your business goals. “Set it and forget it” fails. You need to monitor campaigns and nudge them back on track when they drift.

Creative complacency

As targeting automates, creative becomes your primary differentiator. Neglect it and you lose.

Build creative that directly answers your audience’s pain points. Stand out.

How to put audience engineering into practice

So how do you operationalize this? Here are three steps to start engineering your audiences today:

  • Audit conversion events. Review what you’re asking platforms to optimize for. Make sure your signals reflect real business outcomes like revenue.
  • Restructure creative around intent signals. Ask: what does someone need to believe to convert? Let that drive your messaging. Build asset groups around specific barriers or desires to push the AI to find people who hold those beliefs.
  • Set guardrails before you let the algorithm learn. Automation works best within clear boundaries. Define performance thresholds before launch. Monitor for audience drift and intervene when results diverge from your goals. AI is a tool, not a replacement for strategy.
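For the third step, guardrails only work if they are written down and checked. Here is a minimal sketch of that monitoring loop, with placeholder metrics and thresholds standing in for your own plan.

```python
from dataclasses import dataclass

@dataclass
class Guardrail:
    metric: str
    target: float
    tolerance: float  # allowed relative deviation, e.g. 0.2 = 20%

    def breached(self, observed: float) -> bool:
        return abs(observed - self.target) / self.target > self.tolerance

# Placeholder targets; set these from your own plan before launch.
GUARDRAILS = [
    Guardrail("cost_per_acquisition", target=120.0, tolerance=0.25),
    Guardrail("roas", target=4.0, tolerance=0.20),
]

def check_day(observed: dict) -> list:
    """Return the guardrails breached by one day's observed metrics."""
    return [g for g in GUARDRAILS if g.metric in observed and g.breached(observed[g.metric])]

if __name__ == "__main__":
    today = {"cost_per_acquisition": 165.0, "roas": 3.9}
    for breach in check_day(today):
        print(f"Drift on {breach.metric}: observed {today[breach.metric]}, target {breach.target}")
```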

The future belongs to audience engineers

The era of manual targeting is over, but precision matters more than ever. Audience engineering is your competitive advantage. By teaching algorithms who to target and what matters, you unlock AI’s full potential and win in this evolving landscape.

Google Ads adds “Results” tab to show impact of recommendations

8 April 2026 at 00:57

Google is giving advertisers new visibility into whether its automated recommendations actually drive performance — a long-standing blind spot in the platform.

What’s happening. A new “Results” tab within Recommendations shows the incremental impact of bidding and budget changes after they’ve been applied, allowing marketers to evaluate outcomes instead of relying on assumptions.

How it works. The feature attributes performance changes to specific recommendations, helping advertisers understand what effect adjustments like budget increases or bid strategy shifts had on results.

Why we care. Marketers can now validate whether recommendations improved performance, making it easier to decide which automated suggestions are worth adopting in the future.

Between the lines. Google has a vested interest in encouraging adoption of its recommendations, so providing performance data could build trust — but it also raises questions about how that impact is measured.

The catch. Advertisers may question whether the reported results are fully objective or skewed toward showing positive outcomes, given Google’s incentives.

What to watch. How detailed and transparent the reporting becomes — and whether advertisers see mixed or negative results alongside wins.

Bottom line. Google is moving from “trust us” to “here’s the proof,” but advertisers will be watching closely to see how impartial that proof really is.

First seen. Arpan Banerjee spotted the new tab and shared it on LinkedIn.

Google Ads lets marketers reuse AI text rules across campaigns

8 April 2026 at 00:10

Google is giving advertisers more control over how AI generates ad copy, making it easier to scale campaigns without losing brand consistency.

What’s happening. Google Ads is rolling out a beta feature that allows marketers to copy text guidelines from existing campaigns and apply them to new ones, eliminating the need to rewrite brand rules from scratch.

How it works. Advertisers can replicate approved tone, style and messaging rules across campaigns in one click, ensuring AI-generated ads stay aligned with brand standards while reducing setup time.

Why we care. The feature helps teams launch campaigns faster by reusing what already works, while maintaining consistency across large accounts where multiple campaigns run simultaneously.

Between the lines. This shift reflects a growing demand from marketers to “train” AI systems rather than rely on them blindly, effectively turning brand guidelines into reusable inputs for automation.

Bottom line. AI is speeding up ad creation, but control is becoming the real differentiator — and Google is starting to hand more of it back to advertisers.

First spotted. Paid media expert Arpan Banerjee spotted the alert and shared it on LinkedIn.

Google: AI ads driving up to 80% sales lift for some brands

7 April 2026 at 23:20

Google says its AI-powered advertising tools are starting to deliver meaningful results, including major revenue gains for some retailers, as it experiments with how ads work in AI-driven search.

The big picture. Fears that AI chatbots like ChatGPT would disrupt Google’s core search business haven’t materialized. Instead, the company’s ads business continues to grow, suggesting AI may be expanding how people search rather than replacing it.

By the numbers:

  • Alphabet Inc. surpassed $400 billion in revenue in 2025.
  • Q4 ad revenue: $82.28 billion (+13.5% YoY).
  • YouTube ads: $11.38 billion (+~9% YoY).

What’s happened. Google is embedding ads into its AI-powered search experiences, including AI Mode powered by Gemini. It is also introducing ad formats designed for conversational queries and tools that let brands shape how they appear in AI-generated answers, with a new “business agent” feature enabling companies like Poshmark and Reebok to control how their products are represented.

Driving the results. AI-driven campaigns like Performance Max and AI Max match ads to more detailed, conversational search intent. Google says queries in AI Mode are often two to three times longer than traditional searches, giving the system more context to connect users with relevant products. Aritzia, for example, reported an 80% increase in revenue after adopting AI Max.

How it works. The system scans a retailer’s website and creative assets, interprets user intent from conversational queries, and dynamically matches products and messaging in real time. This is increasingly important given that 15% of daily searches are entirely new (according to Google) and cannot be predicted through traditional keyword targeting.

Why we care. Google is shifting from keyword-based ads to intent-driven, AI-matched advertising, meaning campaigns can reach consumers with far more precision at the moment they’re ready to buy. As search becomes more conversational and unpredictable, advertisers who rely on traditional targeting risk falling behind those using AI-driven formats that automatically adapt to new user behavior.

Zoom in. Google is testing new formats such as “direct offers,” which deliver personalized promotions when users show purchase intent, using Gemini to analyze conversational context and behavior, with brands like E.l.f. Beauty, Chewy and L’Oréal participating in early trials.

Commerce push. Google is also advancing its commerce strategy through a Universal Commerce Protocol developed with Shopify, which allows purchases to happen directly within AI conversations.

Yes, but. Google is not alone in experimenting with ads in AI search, and early results across the industry have been mixed. Amazon has reportedly seen limited traction from ads in its AI shopping assistant, OpenAI continues to explore monetization models, and Perplexity AI has begun phasing out ads after underwhelming performance.

What they’re saying. Google positions itself as a “matchmaker” rather than a retailer, emphasizing that AI helps deliver more relevant and personalized ads while allowing brands to maintain control over their messaging and build user trust by showing the right product at the right moment.

What’s next. Google says it has no current plans to introduce ads directly into Gemini but will continue testing and expanding advertising within AI Mode, including more personalized offers and AI-driven shopping experiences.

Bottom line. AI is not replacing search but reshaping it, and for Google that shift is making advertising more conversational, more targeted and, in some cases, significantly more profitable.

Dig deeper. Google says its AI-powered ads help some brands lift online sales by 80%.

Sundar Pichai sees Google Search evolving into an ‘agent manager’

7 April 2026 at 23:05

Google Search is evolving beyond links and answers into a system that completes tasks, potentially fundamentally changing how users interact with the web. That’s according to Alphabet CEO Sundar Pichai, speaking on the Cheeky Pint podcast.

Why we care. Google is signaling a move from information retrieval to task execution.

Search becoming agentic. Traditional search behavior is already changing and will continue to, Pichai said.

  • “If I fast-forward, a lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.”

Pichai also described a future where Google Search acts less like a list of results and more like a system that coordinates actions:

  • “Search would be an agent manager in which you’re doing a lot of things. I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff. I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

AI Mode is already changing queries. Users are already adapting their behavior in Google’s AI-powered search experiences, Pichai said:

  • “But today in AI Mode in Search, people do deep research queries. That doesn’t quite fit the definition of what you’re saying. But people adapted to that. I think people will do long-running tasks.”

Search vs. Gemini overlap. Despite the rise of Gemini, Pichai said Google isn’t replacing Search with a chatbot. Instead, the two will coexist — and diverge (echoing what Liz Reid said last month):

  • “We are doing both Search and Gemini. They will overlap in certain ways. They will profoundly diverge in certain ways. I think it’s good to have both and embrace it.”

The interview. The history and future of AI at Google, with Sundar Pichai


Google AI Overviews: 90% accurate, yet millions of errors remain: Analysis

7 April 2026 at 22:35

Google’s AI Overviews answered a standard factual benchmark correctly 91% of the time in February, up from 85% in October, according to a New York Times analysis with AI startup Oumi.

However, Google handles more than 5 trillion searches per year, so that means tens of millions of answers every hour may be wrong.
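As a back-of-envelope check on that framing (and assuming, as an upper bound, that every search surfaced an AI Overview, which it does not), the arithmetic looks like this:

```python
searches_per_year = 5e12      # "more than 5 trillion searches per year"
error_rate = 1 - 0.91         # 91% accurate on the benchmark
hours_per_year = 365 * 24

# Upper bound only: assumes every search showed an AI Overview.
wrong_per_hour_upper_bound = searches_per_year * error_rate / hours_per_year
print(f"{wrong_per_hour_upper_bound:,.0f}")  # roughly 51 million per hour
```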

Why we care. We’ve watched Google shift from linking to sources to summarizing them for more than two years. This report suggests AI Overviews are improving, but still mix correct answers, weak sourcing, and clear errors in ways that can mislead searchers and reshape which publishers get visibility and clicks.

The details. Oumi tested 4,326 Google searches using SimpleQA, a widely used benchmark for measuring factual accuracy in AI systems, the Times reported. It found AI Overviews were accurate 85% of the time with Gemini 2 and 91% after an upgrade to Gemini 3.

  • The bigger problem may be sourcing. Oumi found that more than half of the correct February responses were “ungrounded,” meaning the linked sources didn’t fully support the answer.
  • That makes verification harder. The answer may be right, but the cited pages may not clearly show why.

What changed. Accuracy improved between October and February, but grounding worsened. In October, 37% of correct answers were ungrounded; in February, that rose to 56%.

Examples. The Times highlighted several misses:

  • For a query about when Bob Marley’s home became a museum, Google answered 1987; the correct year was 1986, according to the Times, and the cited sources didn’t support the claim or conflicted.
  • For a query about Yo-Yo Ma and the Classical Music Hall of Fame, Google linked to the organization’s site but still said there was no record of his induction.
  • In another case, Google gave the correct age at Dick Drago’s death but misstated his date of death.

Google’s response: Google disputed the Times analysis, saying the study used a flawed benchmark and didn’t reflect what people actually search. Google spokesperson Ned Adriance told the Times the study had “serious holes.”

  • Google also said AI Overviews use search ranking and safety systems to reduce spam and has long warned that AI responses can contain mistakes.

The report. How Accurate Are Google’s A.I. Overviews? (subscription required)

Before yesterday — Search Engine Land

Google starts showing sponsored ads in the Images tab on mobile search

7 April 2026 at 20:56

Google has begun placing sponsored ad units directly inside the Images tab of mobile search results — a new placement that eligible campaigns can access without any changes to existing keyword targeting.

What’s happening. When a user navigates to the Images tab within Google Search on mobile, they may now see sponsored units appearing within the image grid. Each unit shows a full image creative as the primary visual alongside text, and is clearly labeled “Sponsored” — consistent with how Google labels ads elsewhere in search results.

How it works. Eligible campaigns can serve into the Images tab without any changes to keyword targeting or campaign structure. The placement draws from existing image assets, meaning advertisers running Search or Performance Max campaigns with strong visual creative are best positioned to benefit. No separate image-only campaign setup is required.

Why we care. This is a meaningful expansion of Google’s paid search real estate. For product-led and catalog-heavy advertisers, the Images tab is where purchase-intent discovery often starts — and now ads can appear right in that moment. If your campaigns already use strong image assets, you may be picking up incremental impressions without lifting a finger.

The big picture. Early indications suggest this placement behaves more like a visual discovery surface than classic paid search. Expect high impression volume but lower click-through rates — more in line with display or Shopping than traditional text ads. That said, the assist value in multi-touch conversion paths could be significant, particularly for retail and direct-to-consumer brands. Treat it as upper-funnel reach, not a last-click channel.

What to watch. Google has not made a formal announcement, and there is no dedicated reporting breakdown for Images tab placements yet. Monitor your impression share and segment data closely to understand whether this placement is contributing — and whether it’s eating into organic image visibility for competitors.

First seen. The placement was spotted by Google Ads expert Matteo Braghetta, who shared the sighting on LinkedIn. No official documentation has been published by Google at the time of writing.

One in five ChatGPT clicks go to Google: Study

7 April 2026 at 20:51
Traffic funnel few winners

Over 30% of outbound clicks go to just 10 domains, with Google alone taking more than 20%, according to a new Semrush study published today.

ChatGPT also relies less on the live web, triggering search on 34.5% of queries, down from 46% in late 2024.

The big picture. ChatGPT’s growth has plateaued, and its role in how users navigate the web is evolving unevenly.

  • Referral traffic from ChatGPT grew 206% between January 2025 and January 2026.

The details. Most ChatGPT referral traffic still goes to a small set of sites, even as more sites receive some traffic.

  • Google accounts for 21.6% of all ChatGPT referral traffic.
  • The next nine domains bring the top 10 to just over 30% of referrals.
  • Most other sites get a long tail of minimal traffic.
  • The number of domains receiving referrals expanded, peaking at around 260,000 in 2025 before settling near 170,000.

Why we care. Visibility in ChatGPT doesn’t translate evenly into traffic, and you’ll likely see marginal referral impact. The decline in search-triggered queries also limits your chances to earn citations and traffic.

When ChatGPT searches. It defaults to pre-trained knowledge and uses web search in specific cases, including:

  • User requests for sources.
  • Questions about recent events.
  • Situations where the model lacks confidence.

Behavior shift. Most ChatGPT prompts still don’t resemble traditional search queries.

  • Between 65% and 85% of prompts don’t match standard keywords, reflecting more complex, conversational inputs.
  • Meanwhile, engagement is deepening. Queries per session jumped 50% in late 2025.

About the data. Semrush analyzed more than 1 billion lines of U.S. clickstream data from October 2024 to February 2026 across a 200 million-user panel, tracking prompts, referral destinations, and search usage.

The study. ChatGPT traffic analysis: Insights from 17 months of clickstream data

New Google Maps features: Local Guides redesign, AI captions, photo sharing

7 April 2026 at 19:30
Google Maps AI updates

Google is rolling out new Google Maps features that make it easier to contribute photos, reviews, and local insights, while adding Gemini-powered caption suggestions.

Local Guides redesign. Contributor profiles are getting more visibility. Total points now appear more prominently, Local Guide levels are easier to spot, and badge designs have been refreshed.

  • Top contributors will also stand out more in reviews with new gold profile indicators.

AI caption drafts. Google is also introducing AI-generated caption drafts. Gemini analyzes selected images and suggests text you can edit or discard.

  • Caption suggestions are available in English on iOS in the U.S., with Android and broader global expansion planned.

Media sharing. Google Maps now shows recent photos and videos directly in the Contribute tab, making uploads faster.

  • If you enable media access, Google Maps will suggest images from your camera roll that are ready to post with a tap.
  • This feature is now live globally on iOS and Android.

Why we care. Google is making it easier to create and scale fresh local content, which can directly affect rankings and visibility. At the same time, stronger contributor signals may influence which reviews users trust and which businesses win clicks.


New Maps tools surface recent media, suggest camera roll uploads, and flag top reviewers with gold profiles, as Google expands AI captions.

How AI decides what your content means and why it gets you wrong

7 April 2026 at 19:00
How AI decides what your content means and why it gets you wrong

Google once attributed two of Barry Schwartz’s Search Engine Land articles to me — a misclassification at the annotation layer that briefly rewrote authorship in Google’s systems.

For a few days, when you searched for certain Search Engine Land articles Schwartz had written, Google listed me as the author. The articles appeared in my entity’s publication list and were connected to my Knowledge Panel.

What happened illustrates something the SEO industry has almost entirely overlooked: that annotation — not the content itself — is the key to what users see and thus your success.

How Google annotated the page and got the author wrong

Googlebot crawled those pages, found my name prominently displayed below the article (my author bio appeared as the first recognized entity name beneath the content), and the algorithm at the annotation gate added the “Post-It” that classified me as the author with high confidence.

This is the most important point to bear in mind: the bot can misclassify and annotate, and that defines everything the algorithms do downstream (in recruitment, grounding, display, and won). In this case, the issue was authorship, which isn’t going to kill my business or Schwartz’s.

But if the misclassified information were a product, a price, an attribute, or anything else that matters to a query where your brand should be an obvious candidate, an inaccurate annotation means you’ve lost the “ranking game” before you even start competing.

Annotation is the single most important gate in taking your brand from discover to won, whatever query, intent, or engine you’re optimizing for.

What annotation is and why it isn’t indexing

Indexing (Gate 4) breaks your content into semantic chunks, converts it, and stores it in a proprietary format. Annotation (Gate 5) then labels those chunks with a confidence-driven “Post-It” classification system.

It’s a pragmatic labeler and attaches classifications to each chunk, describing:

  • What that chunk contains factually.
  • In what circumstances it might be useful.
  • The trustworthiness of the information.

Importantly, it’s mostly unopinionated when labeling facts, context, and trustworthiness. Microsoft’s Fabrice Canel confirmed the principle that the bot tags without judging, and that filtering happens at query time.

What does that mean? The bot annotates neutrally at crawl time, classifying your content without knowing what query will eventually trigger retrieval. 

Annotation carries no intent at all. It’s the insight that has completely changed my approach to “crawl and index.”

That clearly shows you that indexing isn’t the ultimate goal. Getting your page indexed is table stakes. Full, correct, and confident annotation is where the action happens: an indexed page that is poorly annotated is invisible to each of the algorithmic trinity.

The annotation system analyzes each chunk using one or more language models, cross-referenced against the web index, the knowledge graph, and the models’ own parametric knowledge. But it analyzes each chunk in the context of the page wrapper.

The page-level topic, entity associations, and intent provide the frame for classifying each chunk. If the page-level understanding is confused (unclear topic, ambiguous entity, mixed intent), every chunk annotation inherits that confusion. Even more importantly, it assigns confidence to every piece of information it adds to the “Post-Its.”

The choices happen downstream: each of the algorithmic trinity (LLMs, search engines, and knowledge graphs) uses the annotation to decide whether to absorb your content at recruitment (Gate 6). Each has different criteria, so you need to assess your own content for its “annotatability” in the context of all three.

And a small but telling detail: Back in 2020, Martin Splitt suggested that Google compares your meta description to its own LLM-generated summary of the page. When they match, the system’s confidence in its page-level understanding increases, and that confidence cascades into better annotation scores for every chunk — one of thousands of tiny signals that accumulate.

Annotation is the key midpoint of the 10-gate pipeline, where the scoreboard turns on. Everything before it is infrastructure: “Can the system access and store your content?” Everything after it is competition: “Will the system choose your content over everyone else’s?”

Annotation is where you simply cannot afford to fail

When you consider what happens at the annotation gate and its depth, links and keywords become the wrong lens entirely. They describe how you tried to influence a ranking system, whereas annotation is the mechanism behind how the algorithmic trinity chooses the content that builds its understanding of what you are.

The frame has to shift. You’re educating algorithms. They behave like children, learning from what you consistently, clearly, and coherently put in front of them. With consistent, corroborated information, they build an accurate understanding.

Given inconsistent or ambiguous signals, they learn incorrectly and then confidently repeat those errors over time. Building confidence in the machine’s understanding of you is the most important variable in this work, whether you call it SEO or AAO.

Confiance
“Confiance” (confidence) is the signal that drives how systems understand content. Slide from my SEOCamp Lyon 2017 presentation.

In 2026, every AI assistive engine and agent is that same child, operating at a greater scale and with higher stakes than Google ever had. Educating the algorithms isn’t a metaphor. It’s the operational model for everything that follows.

For a more academic perspective, see: “Annotation Cascading: Hierarchical Model Routing, Topical Authority, and Inter-Page Context Propagation in Large-Scale Web Content Classification.”

5 levels of annotation: 24+ dimensions classifying your content at Gate 5

When mapping the annotation dimensions, I identified 24, organized across five functional categories. After presenting this to Canel, his response was: “Oh, there is definitely more.”

Of course there are more. This taxonomy is built through observation first, then naming what consistently appears. The [know/guess] distinctions follow the same logic: test hypotheses, eliminate what doesn’t hold up, and keep what remains.

The five functional categories form the foundation of the model. They are simple by design — once you understand the categories, the dimensions follow naturally. There are likely additional dimensions beyond those mapped here.

What follows is the taxonomy: the categories are directionally sound (as confirmed by Canel), while the specific dimension assignments reflect observed behavior and remain incomplete.

Level 1: Gatekeepers (eliminate)

  • Temporal scope, geographic scope, language, and entity resolution. Binary: pass or fail. 
  • If your content fails a gatekeeper (wrong language, wrong geography, or ambiguous entity), it is eliminated from that query’s candidate pool instantly. The other dimensions don’t come into play.

Level 2: Core identity (define)

  • Entities, attributes, relationships, sentiment. 
  • This is where the system decides what your content means:
    • Who is being discussed.
    • What facts are stated.
    • How entities relate.
    • What the tone is. 
  • Without clear core identity annotations, a chunk carries no semantic weight in any downstream gate.

Level 3: Selection filters (route) 

  • Intent category, expertise level, claim structure, and actionability. 
  • These determine which competition pool your content enters.
    • Is this informational or transactional? 
    • Beginner or expert? 
  • Wrong pool placement means competing against content that is a better match for the query, and you’ve lost before recruitment or ranking begins.

Level 4: Confidence multipliers (rank)

  • Verifiability, provenance, corroboration count, specificity, evidence type, controversy level, and consensus alignment. These scale your ranking within the pool. 
  • This is where validated, corroborated, and specific content outranks accurate but unvalidated content. 
  • The multipliers explain why a well-sourced third-party article about you often outperforms your own claims: provenance and corroboration scores are higher.
  • Confidence has a multiplier effect on everything else and is the most powerful of all signals. Full stop.

Level 5: Extraction quality (deploy)

  • Sufficiency, dependency, standalone score, entity salience, and entity role. These determine how your content appears in the final output. 
  • Is this chunk a complete answer, or does it need context? Is your entity the subject, the authority cited, or a passing mention? 
  • Extraction quality determines whether AI quotes you, summarizes you, or ignores you.
Five levels of annotation

Across all five levels, a confidence score is attached to every individual annotation. Not just what the system thinks your content means, but how certain it is.

Clarity drives confidence. Ambiguity kills it.

Canel also confirmed additional dimensions I had not initially mapped: audience suitability, ingestion fidelity, and freshness delta. These sit across the existing categories rather than forming a sixth level.

In 2022, Splitt named three annotation behaviors in a Duda webinar that map directly onto the five-level model. The centerpiece annotation is Level 2 in direct operation: 

  • “We have a thing called the centerpiece annotation,” Splitt confirmed, a classification that identifies which content on the page is the primary subject and routes everything else — supplementary, peripheral, and boilerplate — relative to it. 
  • “There’s a few other annotations” of this type, he noted. 

Annotation runs before recruitment, which means a chunk classified as non-centerpiece carries that verdict into every gate that follows. Boilerplate detection is Level 3: content that appears consistently across pages — headers, footers, navigation, and repeated blocks — enters a different competition pool based on its structural role alone. 

  • “We figure out what looks like boilerplate and then that gets weighted differently,” Splitt said.

Off-topic routing closes the picture. A page classified around a primary topic annotates every chunk relative to that centerpiece, and content peripheral to the primary topic starts its own competition pool at a disadvantage before Recruitment begins. 

Splitt’s example: a page with 10,000 words on dog food and a thousand on bikes is “probably not good content for bikes.” The system isn’t ignoring the bike content. It’s annotating it as peripheral, and that annotation is the routing decision.


The multiplicative destruction effect: When one near-zero kills everything

In Sydney in 2019, I was at a conference with Gary Illyes and Brent Payne. Illyes explained that Google’s quality assessment across annotation dimensions was multiplicative, not additive. 

Illyes asked us not to film, so I grabbed a beer mat and noted a simple calculation: if you score 0.9 across each of 10 dimensions, 0.9 to the power of 10 is 0.35. You survive at 35% of your original signal. If you score 0.8 across 10 dimensions, you survive at 11%. If one dimension scores close to zero, the multiplication produces a result close to zero, regardless of how well you score on every other dimension.

Payne’s phrasing of the practical implication was better than mine: “Better to be a straight C student than three As and an F.”

The beer mat went into my bag. The principle became central to everything I’ve built since.
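
To make the beer-mat math concrete, here is a tiny Python version. The scores are illustrative, and the multiplicative model is the principle Illyes described, not Google’s actual formula:

```python
import math

def multiplicative_score(scores):
    """Multiply per-dimension scores, as in the beer-mat calculation."""
    return math.prod(scores)

straight_c = [0.9] * 10                # consistently adequate across 10 dimensions
three_as_one_f = [0.95] * 9 + [0.05]   # brilliant on nine dimensions, near-zero on one

print(round(multiplicative_score(straight_c), 2))      # 0.35 -- survives at ~35% of signal
print(round(multiplicative_score(three_as_one_f), 2))  # 0.03 -- the near-zero cascades
```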

The multiplicative destruction effect

The multiplicative destruction effect has a direct consequence for annotation strategy: the C-student principle is your guide. 

  • A brand with consistently adequate signals across all 24+ dimensions outperforms a brand with brilliant signals on most dimensions and a near-zero on one. The near-zero cascades. 
  • A gatekeeper failure (Level 1) eliminates the content entirely. 
  • A core identity failure (Level 2) misclassifies it so badly that high confidence multipliers at Level 4 are applied to the wrong entity. 
  • An extraction quality failure (Level 5) produces a chunk that the system can retrieve but can’t deploy usefully. The failure doesn’t have to be dramatic to be fatal.

At the annotation stage, misclassification, low confidence, or near-zero on one dimension will kill your content and take it out of the race.

Nathan Chalmers, who works at Bing on quality, told me something that puts this in a different light entirely. Bing’s internal quality algorithm, the one making these multiplicative assessments across annotation dimensions, is literally called Darwin.

Natural selection is the explicit model: content with near-zero on any fitness dimension is selected against. The annotations are the fitness test. The multiplicative destruction effect is the selection mechanism.

How annotation routes content to specialist language models

The system doesn’t use one giant language model to classify all content. It routes content to specialized small language models (SLMs): domain-specific models that are cheaper, faster, and paradoxically more accurate than general LLMs for niche content. 

A medical SLM classifies medical content better than GPT-4 would, because it has been trained specifically on medical literature and knows the entities, the relationships, the standard claims, and the red flags in that domain.

What follows is my model of how the routing works, reconstructed from observable behavior and confirmed principles. The existence of specialist models is confirmed. The specific cascade mechanism is my reconstruction.

The routing follows what I call the annotation cascade. The choice of SLM cascades like this:

  • Site level (What kind of site is this?)
  • Refined by category level (What section?)
  • Refined by page level (What specific topic?)
  • Applied at chunk level (What does this paragraph claim?)

Each level narrows the SLM selection, and each level either confirms or overrides the routing from above. This maps directly to the wrapper hierarchy from the fourth piece: the site wrapper, category wrapper, and page wrapper each provide context that influences which specialist model the system selects.
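
Here is a minimal sketch of how such a cascade could narrow model selection. The levels come from the model above; the domain list and routing function are hypothetical illustrations, not a documented system:

```python
# Hypothetical illustration of the annotation cascade: each level confirms or
# overrides the specialist-model choice inherited from the level above it.
SPECIALIST_SLMS = {"medical", "finance", "marketing", "travel"}  # illustrative domains

def route_slm(site_topic, category_topic, page_topic, chunk_topic):
    """Walk site -> category -> page -> chunk; each level can confirm or override."""
    choice = "generalist"
    for topic in (site_topic, category_topic, page_topic, chunk_topic):
        if topic is None:
            continue  # no clear signal at this level: keep the current choice
        choice = topic if topic in SPECIALIST_SLMS else "generalist"
    return choice

print(route_slm("marketing", "marketing", "marketing", "marketing"))  # -> "marketing"
print(route_slm("marketing", None, "bikes", None))                    # -> "generalist"
```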

How annotation routes content to SLMs

The system deploys three types of SLM simultaneously for each topic. This is my model, derived from the behavior I have observed: annotation errors cluster into patterns that suggest three distinct classification axes. 

  • The subject SLM classifies by subject matter — what is this about? — routing content into the right topical domain. 
  • The entity SLM resolves entities and assesses centrality and authority: who are the key players, is this entity the subject, an authority cited, or a passing mention? 
  • The concept SLM maps claims to established concepts and evaluates novelty, checking whether what the content asserts aligns with consensus or contradicts it.

When all three return high confidence on the same entity for the same content, annotation cost is minimal, and the confidence score is very high. When they disagree (i.e., the subject SLM says “marketing,” but the entity SLM can’t resolve the entity, and the concept SLM flags the claims as novel), confidence drops, and the system falls back to a more general, less accurate model.
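
Here is a small sketch of the triad idea, again my reconstruction rather than a confirmed mechanism. Agreement and high confidence across all three axes keeps the specialist routing; any weak axis forces the generalist fallback:

```python
# Illustrative only: combine three axis-specific confidences into one annotation decision.
def annotate(subject_conf, entity_conf, concept_conf, threshold=0.7):
    """Each argument is the confidence a hypothetical specialist model reports for
    its own axis (topic routing, entity resolution, claim/consensus mapping)."""
    lowest = min(subject_conf, entity_conf, concept_conf)
    if lowest >= threshold:
        # All three axes are confident: cheap, confident specialist annotation.
        return {"route": "specialist", "confidence": lowest}
    # Any weak axis drags the whole thing down: generalist fallback, lower confidence.
    return {"route": "generalist", "confidence": lowest * 0.5}

print(annotate(0.92, 0.88, 0.90))  # -> specialist, confidence 0.88
print(annotate(0.92, 0.30, 0.40))  # -> generalist, confidence 0.15
```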

The key insight? Generalist LLM annotation is the failure mode. The system wants to use a specialist. It defaults to a generalist only when it can’t route to a specialist. Generalist annotation produces lower confidence across all dimensions.

The practical implication 

Content that’s category-clear within its first 100 words, uses standard industry terminology, follows structural conventions for its content type, and references well-known entities in its domain triggers SLM routing. 

Content that’s topically ambiguous or terminologically creative gets the generalist. Lower confidence propagates through every downstream gate.

Now, this may not be the exact way the SLMs are applied as a triad (and it might not even be a trio). However, two things strike me:

  • Observed outputs act that way.
  • If it doesn’t function exactly this way, it would make sense for it to.

First-impression persistence: Why the initial annotation is the hardest to correct

Here is something I’ve observed over years of tracking annotation behavior. It aligns with a principle Canel confirmed explicitly for URL status changes (404s and 301 redirects): the system’s initial classification tends to stick.

When the bot first crawls a page, it selects an SLM, runs the annotation, assigns confidence scores, and saves the classification. The next time it crawls the same page, it logically starts with the previously assigned model and annotations. I call this first-impression persistence. 

The initial annotation is the baseline against which all subsequent signals are measured. The system doesn’t re-evaluate from scratch. It checks whether the new crawl is consistent with the existing classification, and if it is, the classification is reinforced.

Canel confirmed a related mechanism: when a URL returns a 404 or is redirected with a 301, the system allows a grace period (very roughly a week for a page, and between one and three months for content, in my observation) during which it assumes the change might revert. After the grace period, the new state becomes persistent. I believe the same principle applies to content classification: a window of fluidity after first publication, then crystallization.

I have direct evidence for the correction side from the evolution of my own terminologies. When I first described the algorithmic trinity, I used the phrase “knowledge graphs, large language models, and web index.” Google, ChatGPT, and Perplexity all picked up on the new term and defined it correctly.

A month later, I changed the last one to “search engine” because it occurred to me that the web index is what all three systems feed off, not just the search system itself. At the point of correction, I had published roughly 10 articles using the original terminology. 

I went back and invested the time to change every single one, updating every reference, leaving zero traces. A month later, AI assistive engines were consistently using “search engine” in place of “web index.”

The lesson is that change is possible, but you need to be thorough: any residual contradictory signal (an old article, an unchanged social post, a cached version) maintains inertia in proportion to how much remains. Thoroughness, rather than time, is the unlock.

First-impression persistence

A rebrand, career pivot, or repositioning is the practical example. You can change the AI model’s understanding and representation of your corporate or personal brand, but it requires thoroughly and consistently pivoting your digital footprint to the new reality.

In my experience, a thorough pivot can turn the systems “on a sixpence” within a week. I’ve done this with my podcast several times. Facebook achieved the ultimate rebrand from an algorithmic perspective when it changed its name to Meta.

The practical implication

Get your annotation right before you publish. The first crawl sets the baseline. A page published prematurely (with an unclear topic or ambiguous entity signals) crystallizes into a low-confidence annotation, and changing it later requires significantly more effort than getting it right the first time.

Annotation-time grounding: The bot cross-references three sources while classifying your content

The system doesn’t annotate in a vacuum. When the bot classifies your content at Gate 5, it cross-references against at least three sources simultaneously. This is my model of the mechanism. The observable effect — that annotation confidence correlates with entity presence across multiple systems — is confirmed from our tracking data.

The bot has prioritized access to the web index during crawling, checking your content against what it already knows: 

  • Who links to you.
  • What context those links provide.
  • How your claims relate to claims on other pages. 

Against the knowledge graph, it checks annotated entities during classification — an entity already in the graph with high confidence means annotation inherits that confidence, while absence starts from a much lower baseline. 

The SLM’s own parametric knowledge provides the third cross-reference: each SLM compares encountered claims against its training data, granting higher confidence to claims that align, flagging contradictions, and giving lower confidence to novel claims until corroboration accumulates.

This means annotation quality isn’t just about how well your content is written. It’s about how well your entity is already represented across all three of the algorithmic trinity. An entity with strong knowledge graph presence, authoritative web index links, and consistent SLM-domain representation gets higher annotation confidence on new content automatically. 

The flywheel: better presence leads to better annotation, which leads to better recruitment, which strengthens presence, and which improves future annotation.

Once again, better to have an average presence in all three than to have a dominant presence in two and no presence in one.

The annotation flywheel

And this is why knowledge graph optimization (what I’ve been advocating for over a decade) isn’t separate from content optimization. They are the same pipeline. Your knowledge graph presence directly improves how accurately, verbosely, and confidently the system annotates every new piece of content you publish.

If you’re thinking “Knowledge graph? That’s just Google,” think again.

In November 2025, Andrea Volpini intercepted ChatGPT’s internal data streams and found an operational entity layer running beneath every conversation: structured entity resolution connected to what amounts to a product graph mirroring Google Shopping feeds. 

OpenAI is building its own knowledge graph inside the LLM. My bet is that they will externalize it, for several reasons: a knowledge graph inside an LLM doesn’t scale; an LLM will self-confirm, so the value is limited; and a standalone knowledge graph can be updated in real time without retraining the model, which is what keeps it useful at scale.

The algorithmic trinity isn’t a Google phenomenon. It’s the architectural pattern every AI assistive engine and agent converges on, because you can’t generate reliable recommendations without a concept graph, structured entity data, and up-to-date search results to ground them.

Why Google and Bing annotate differently from engines that rent their index

Google and Bing own their crawling infrastructure, indexes, and knowledge graphs. They can afford grace periods, schedule rechecks, and maintain temporal state for URLs and entities over months.

OpenAI, Perplexity, and every engine that rents index access from Google or Bing operate on a fundamentally different model. They have two speeds: 

  • A slow Boolean gate (Does this content exist in the index I have access to?)
  • A fast display layer (What does the content say right now when I fetch it for grounding?)

The Boolean gate inherits Google’s and Bing’s annotations. Whether your content appears at all depends on whether it was recruited from the index those engines draw from, and that recruitment depends on annotation and selection decisions made by the algorithmic trinity. But what these engines show when they cite you is fetched in real time.
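
A minimal sketch of that two-speed model, with hypothetical names standing in for whatever these engines actually run internally:

```python
# Hypothetical sketch: a slow Boolean presence gate inherited from a rented index,
# plus a fast display layer fetched at answer time.
def answer_with_citation(query, rented_index, fetch_live_page):
    candidates = rented_index.get(query, [])  # slow gate: inherited annotation and recruitment decisions
    if not candidates:
        return None  # never recruited into the rented index -> never cited

    url = candidates[0]
    live_text = fetch_live_page(url)  # fast layer: whatever the page says right now
    return {"cited_url": url, "grounding_text": live_text}

# Presence is stale (the index); the quoted text is fresh (the live fetch).
index = {"best crm for smb": ["https://example.com/crm-guide"]}
print(answer_with_citation("best crm for smb", index, lambda url: "Current page copy..."))
```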

The practical implication

For Google and Bing, you’re optimizing for annotation quality with the benefit of grace periods and gradual reclassification. For engines that don’t own their index, the Boolean presence is inherited from the rented index and is slow to change, but the surface-level display changes every time they re-fetch.

That means what you are seeing in the results is not a direct measure of your annotation quality. It’s a snapshot of your page at the moment of fetch, and those two things may have nothing to do with each other.

How to optimize for annotation quality: The six practical principles

The SEO industry has spent two decades optimizing for search and assistive results — what happens after the system has already decided what your content means. We should be optimizing for annotation. 

If the annotation is wrong, everything downstream suffers. When the annotation is accurate, verbose, and confident, your content has a significant advantage in recruitment, grounding, display, and, ultimately, won.

1. Trigger SLM routing

Make your topic category obvious within the first 100 words. Use standard industry terminology. Follow structural conventions. Reference well-known entities. The goal: specialist model, not generalist.

2. Write for all three SLMs

Clear signals for subject (what is this about?), entity (who is the authority?), and concept (what established ideas does this connect to?). Ambiguity on any axis reduces confidence.

3. Get it right before publishing

First-impression persistence means the initial annotation is the hardest to change. Publish only when topic, entity signals, and claims are unambiguous.

4. Build the flywheel

Knowledge graph presence, web index centrality, LLM parameter strengthening, and correct SLM-domain representation all feed annotation confidence for new content. Invest in entity foundation, and every future piece benefits from inherited credibility.

5. Eliminate noise when correcting

Change every reference. Leave zero contradictory signals. Noise maintains inertia proportionally.

6. Audit for annotation, not just indexing

A page can be indexed and still misannotated. If the AI response is wrong about you, the problem is almost certainly at Gate 5, not Gate 8.

How to optimize for annotation quality

Annotation is the gate where most brands silently lose. The SEO industry doesn’t yet have a vocabulary for it. That needs to change, because the gap between brands that get annotation right and brands that don’t is the gap between consistent AI visibility and permanent algorithmic obscurity.

Why annotation matters so much and why it should be your main focus

You’ve done everything within your power to create the best possible content that maps to the intent of your ideal customer profile. You have methodically optimized your digital footprint, and your data feeds every entry mode simultaneously (pull, push discovery, push data, MCP, and ambient), so they are all drawing from the same clean, consistent source.

So, content about your brand has passed through the DSCRI infrastructure phase, survived the rendering and conversion fidelity boundaries, and arrived in the index (Gate 4) intact. Phew!

Now it gets classified. Annotation is the last moment in the pipeline where you have the field to yourself. Every decision in DSCRI was absolute: you vs. the machine, with no competitor in the frame. 

Annotation is still absolute. The system classifies your content based on your signals alone, independently of what any competitor has done. Nobody else’s data changes how your entity is annotated.

But this is the last time you aren’t competing. From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in the competitive race you have to win.

That means: 

  • Get annotation right, and you start ahead, with confidence that compounds through every downstream gate in RGDW. 
  • Get it wrong, and the multiplicative destruction effect does its work — a near-zero on one annotation dimension cascades through recruitment, grounding, display, and won. No amount of excellent content, structural signals, or entry-mode advantage recovers it.

Warning: First-impression persistence (remember, the first time you are annotated is the baseline) means you don’t get a clean retry. Changing the baseline requires thoroughness, time, and more effort than getting it right on the first crawl.

Annotation isn’t the gate that most brands focus on. It’s the gate where most brands silently lose.

This is the eighth piece in my AI authority series. 

5 priorities for lead gen in AI-driven advertising

7 April 2026 at 18:00
5 priorities for lead gen in AI-driven advertising

Many of today’s PPC tools were designed with ecommerce advertisers in mind. That doesn’t mean lead gen can’t take advantage of them, but it does mean more intentional application is required.

Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply — but not always in the same way.

Here are the priorities that matter most for succeeding with lead gen using AI.

Disclosure: I’m a Microsoft employee. While this guidance is platform-agnostic, I’ll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.

1. Fix your conversion data first

This is the single most important thing you can do as AI becomes more embedded in media buying.

Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it’s reasonable to ask whether your data is still telling an accurate story.

Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.

In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:

  • Confirm conversions are firing consistently.
  • Regularly review conversion goal diagnostics.
  • Validate that lead status updates and downstream signals are actually flowing back.

If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.

Dig deeper: How to make automation work for lead gen PPC

2. Make landing pages easy to ingest and easy to understand

Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.

Your landing pages should make it clear:

  • What action you want the user to take.
  • What happens after action is taken.
  • Which conversions matter most.

Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that anticipated outcomes are inconsistent, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.

Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.

A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.

You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you’re in a good place. If it doesn’t, that’s a signal to refine your content.
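
If you would rather script that check than paste by hand, a minimal sketch using the OpenAI Python SDK could look like the following. The model name, prompt wording, and file name are illustrative, and any assistant with an API would work just as well:

```python
# Minimal sketch: ask an LLM to describe your business from your own page copy.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

page_text = open("landing_page_copy.txt").read()  # illustrative file of exported landing page copy

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "Based only on the text below, describe what this business does, "
            "who it serves, and the main action it wants visitors to take.\n\n" + page_text
        ),
    }],
)

print(response.choices[0].message.content)  # compare this with how you describe yourself
```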

Behavioral analytics tools, like Clarity, can help you understand exactly how humans are engaging with your site and how often AI tools are crawling your site.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

3. Budget across the entire funnel

Lead gen has always struggled with long conversion cycles. That challenge doesn’t go away, and in some ways, it becomes more pronounced.

AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.

That means:

  • Budgeting intentionally across awareness, consideration, and conversion.
  • Applying the right metrics at each stage.
  • Looking beyond traffic as the primary success indicator.

In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.

Dig deeper: Lead gen PPC: How to optimize for conversions and drive results


4. Clean up your feeds and map data

You may not think you have a “feed” in your lead gen setup, but that absence can put you at a disadvantage.

Feeds help AI systems understand your business structure, services, and site architecture. Even if you don’t have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.

Clean up your feeds and map data
Example of a feed for lead gen

Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.
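
For illustration, a minimal services feed could look like the sketch below. The column names are hypothetical and follow the spirit of platform feed specs; check your platform’s documentation for the exact required fields:

```python
# Sketch of a simple lead gen services feed; column names and rows are illustrative.
import csv

rows = [
    {"id": "svc-001", "title": "Residential HVAC repair",
     "description": "Same-day furnace and AC repair",
     "category": "Home services > HVAC",
     "url": "https://example.com/hvac-repair",
     "image_url": "https://example.com/img/hvac.jpg"},
    {"id": "svc-002", "title": "Annual maintenance plan",
     "description": "Twice-yearly tune-ups and priority scheduling",
     "category": "Home services > HVAC",
     "url": "https://example.com/maintenance-plan",
     "image_url": "https://example.com/img/plan.jpg"},
]

with open("services_feed.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```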

On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.

Account for potential AI-driven inflation in reporting, whether you’re looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.

5. Pressure-test your creative for clarity

Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.

If your value proposition requires three headlines, or a headline plus a description, to make sense, that’s a risk.

Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:

  • What you do
  • Who you help
  • Why it matters

If that clarity isn’t there, AI-driven placements can quickly become confusing.

Dig deeper: Why creative, not bidding, is limiting PPC performance

The fundamentals that still move the needle

Lead gen today doesn’t need to be complicated.

Most of the actions that matter today are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution may shift, and how much weight systems place on different signals.

The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.

If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business — and that’s where sustainable performance comes from.
