
Why vibe coding is becoming an SEO advantage

SEO used to be constrained by one thing more than anything else: dependency.

Dependency on developers, roadmaps, and “maybe next quarter.”

If you wanted a new page template, a calculator, a comparison widget, or even a simple interactive component, you had to ask, wait, and compromise. That’s changing fast.

If you’re in SEO or GEO today and you’re not learning how to vibe code, you’re limiting your impact.

Vibe coding changed the power dynamics in SEO

A few years ago, building tools like calculators or interactive widgets meant tickets, specs, and dev cycles.

Today, with AI, I’ve personally built dozens of mini apps, tools, and UI components without involving a single developer.

Some of those tools are small. Some are relatively ugly but effective. Some now bring in thousands of organic sessions per month.

Entire pages built around a vibe-coded tool are now outperforming traditional text-heavy competitors.

Parents Hub “Back To School Countdown” Vibe-Coded Tool

Even more importantly, I’ve introduced this mindset to my SEO team, and they’re now building tools on their own to achieve our search goals. That alone changes everything.

SEO teams can now move faster, test ideas immediately, and reserve developers for actual engineering work, including new templates, infrastructure, and scaling.

And yes, there’s something genuinely satisfying about building a tool yourself, publishing it, and watching it attract traffic month after month.

You don’t need to build fancy things. Just things that get the job done.

Dig deeper: Inspiring examples of responsible and realistic vibe coding for SEO

Stop talking about user personas. Start talking to them.

Everyone agrees on the user persona theory:

  • Identify user personas.
  • Understand their pain points.
  • Create content that addresses them.

What almost no one explains is how to actually present that information.

Historically, SEO handled personas with text:

  • “If you’re a parent…” 
  • “For families…” 
  • “Business travelers should consider…”

That approach is already outdated. Today, we can let users self-identify and surface only the information that matters to them.

One example from a brand I manage:

  • A vibe-coded tabbed component.
  • Each tab represents a different user persona.
  • Clicking a tab reveals persona-specific content.

For airport transfers in Majorca, a “family” persona doesn’t care about the same things as a solo traveler.

Example case of the “User Persona” component

They care about vehicle safety, child seats, family-friendly routes, vehicle size, and indicative pricing. That content appears only when the Family tab is selected.

From an SEO and GEO standpoint, persona pain points were sourced directly from Google Search Console and query fan-out analysis.

The component was then vibe-coded and placed where intent needed to be satisfied immediately.

This aligns with how AI platforms already structure answers: segmented, persona-aware, and intent-first.

Entire traffic categories can be built on tools alone

On one personal project, we launched a brand-new Tools category — ten pages with simple tools, such as:

  • Calculators.
  • Checklists.
  • Calendars.
  • Countdown timers.
  • AI generators.

Each page leads with the tool and uses supporting components to answer sub-intents.

The result? More than 5,000 incremental clicks in two months. Most of those pages were also out of season.

Dig deeper: How to vibe-code an SEO tool without losing control of your LLM

UI is now a ranking lever

SEOs have never been more capable. The only real limitation left is creativity.

One of the most underrated SEO advantages today is how information is visually presented.

Text is cheap. Everyone can produce it. UI that answers intent instantly isn’t.

I’ve seen:

  • Two calculator pages add 10,000 monthly organic sessions.
  • One tool page rank in the top three within days for a high-volume government query.
  • Multiple seasonal pages rank off-season purely because the UI was better.

When competitors list information, we let users interact with it.

  • Eligibility calculators. 
  • Countdown timers. 
  • Dynamic tables. 
  • Visual comparisons.

These pages still include text. But the text supports the tool, not the other way around.

‘SEO takes time’ — except when it doesn’t

One page we published targeted a Greek government school financial support program with a high-volume head term, dozens of long-tail queries, and extremely text-heavy competition.

We built:

  • A financial support eligibility tool.
  • A transparent explanation of the algorithm logic behind the tool for E-E-A-T.
  • Common rejection mistakes parents made when applying for support.
  • Historical program changes.
  • A step-by-step application flow.

Parents Hub Kindergarten Financial Support Eligibility Calculator

We tagged the tool as a WebApplication, implemented HowTo schema for the process, and properly marked up the FAQs.
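The markup described above can be sketched as JSON-LD. This is a hypothetical reconstruction, not the live page's markup: the tool name, question text, and field values are placeholders.

```python
import json

# Hypothetical JSON-LD for a tool page: a WebApplication object plus an
# FAQPage object, each emitted in its own <script> block. All names and
# values here are placeholders, not the published page's actual markup.
web_app = {
    "@context": "https://schema.org",
    "@type": "WebApplication",
    "name": "Kindergarten Financial Support Eligibility Calculator",
    "applicationCategory": "UtilityApplication",
    "operatingSystem": "Web",
    "offers": {"@type": "Offer", "price": "0", "priceCurrency": "EUR"},
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Who is eligible for the support program?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Eligibility depends on household income and family size.",
            },
        }
    ],
}

# Each object becomes its own <script type="application/ld+json"> block.
snippets = [
    f'<script type="application/ld+json">{json.dumps(obj)}</script>'
    for obj in (web_app, faq)
]
print(snippets[0][:60])
```

In practice you would paste the generated blocks into the page head and validate them with a structured data testing tool before publishing.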

Three days after publishing, the page was already ranking on the first page for the main term and generating about 100 clicks.

Sometimes SEO really doesn’t take that long if you solve the problem better than anyone else.

Tools are the ultimate SEO and PR assets

Some tools are built purely for traffic. Others are designed to become linkable digital assets.

A pregnancy due date calculator, a baby name generator, or a comparison table based on TripAdvisor data isn’t just a page. It’s a potential PR campaign.

When a digital asset solves a real pain point, looks modern, answers intent better than SERP features, and has clear PR angles, that’s where SEO, PR, and branding start to collide. That’s when things get really interesting.

Dig deeper: How vibe coding is changing search marketing workflows

Finding tool-page opportunities is easier than ever

With MCP servers from SEO tools, you can now surface tool ideas directly from search demand without leaving the chat, assess difficulty instantly, and launch faster than ever.

I’ve built and launched multiple tool pages this way, and the speed difference compared with traditional workflows is massive.

We’re entering a period where ideation, validation, and execution can all happen in days, not months.

The big shift

SEO is no longer about who can write the longest article, rephrase the same information better, or game templates. It’s about who answers intent fastest, removes friction, and builds search experiences instead of documents.

Vibe coding changed who gets to build. And right now, the people embracing it are pulling away fast. If you want to win in modern SEO and GEO, build tools, build components, and build search experiences. Text alone isn’t enough anymore. And honestly, that’s a very good thing.

Dig deeper: Build your own AI search visibility tracker for under $100/month

Google Ads to auto-link YouTube channels starting June 10

Google is set to automatically link Google Ads accounts with associated YouTube channels — according to communications sent to multiple advertisers — tightening the connection between video engagement and ad performance.

What’s happening. Advertisers have received notices that, from June 10, 2026, Google Ads accounts that aren’t already linked to a YouTube channel will be automatically connected.

The update removes the need for manual linking and ensures advertisers can access video engagement data and targeting features by default.

Why we care. Linking a YouTube channel unlocks deeper insights and more advanced targeting options — something many advertisers either overlook or delay setting up.

By automating the process, Google is effectively making video data a standard part of campaign optimisation.

Zoom in. Once linked, advertisers can access organic video metrics, including view counts, directly within Google Ads.

They can also build audience segments based on how users interact with their YouTube content — from video views to channel engagement.

What else. The integration allows advertisers to track “earned actions,” such as subscriptions or additional views driven by ads, and use those engagements as conversion signals.

That creates a clearer picture of how video campaigns influence user behaviour beyond just clicks.

What to watch. How advertisers adapt their measurement strategies once organic and paid video data are combined, and whether this leads to broader use of engagement-based conversion tracking in campaigns.

Bottom line. Google is making YouTube data harder to ignore — turning automatic linking into a default step for better targeting, measurement and performance.

First spotted. Several advertisers reported receiving the communication from Google, including JXT Group founder Menachem Ani, PPC News Feed founder Hana Kobzová, and PPC specialist Arpan Banerjee.

Adthena launches ChatGPT ads intelligence platform


Adthena is bringing competitive visibility to ChatGPT ads — launching a new platform designed to track how brands show up across prompts, placements and competitors.

What’s happening. Adthena has unveiled its ChatGPT Intelligence Platform, positioning it as the first tool to offer whole-market visibility into ChatGPT Ads — similar to what it already provides for Google Ads.

The platform monitors more than 300,000 daily prompts, tracking which brands are advertising, where ads appear, and what messaging they use.

Why we care. ChatGPT’s native ads tools currently show advertisers a limited, self-focused view of performance.

Adthena is stepping in to fill that gap — giving advertisers insight into competitors, share of voice and prompt-level activity in a channel that’s still largely opaque.

Zoom in. The platform offers a full view of how ads appear across ChatGPT conversations, alongside competitive intelligence on who is bidding, where and with what creative.

It also includes real-time recommendations to optimise campaigns, helping advertisers act on insights rather than just observe them.

What else. Advertisers can analyse ad copy performance, monitor brand presence and track share of voice — all within a single dashboard that combines ChatGPT and Google Ads data.

That cross-channel view is designed to help teams make smarter budget decisions as search behaviour shifts.

Context. The launch follows Adthena’s earlier AdBridge tool, which helps advertisers migrate Google Ads campaigns into ChatGPT’s Ads Manager.

Together, the tools signal a growing ecosystem forming around AI-driven search advertising.

What they’re saying. CMO Ashley Fletcher said early adopters will shape the competitive landscape — and that the new platform “tells you exactly what to do about it.”

What to watch. Expect to see more third-party tools emerge as advertisers demand better visibility into AI-driven ad environments. Adoption will likely depend on how quickly brands start treating ChatGPT Ads as a core performance channel, while pressure may build on platforms like ChatGPT to improve their own native reporting capabilities.

Bottom line. Adthena is positioning itself as the visibility layer for ChatGPT Ads — giving advertisers a clearer view of a fast-growing but still opaque channel.

Google rolls out Merchant Center for Agencies globally


Google is expanding Merchant Center for Agencies worldwide, giving agencies a centralized way to manage product data, diagnose issues and spot growth opportunities across multiple clients.

What’s happening. After launching in the U.S. and Canada, Merchant Center for Agencies is now rolling out globally to all agency users.

The tool is designed to help agencies manage merchant accounts at scale as product data becomes more critical to performance across shopping and discovery experiences.

Why we care. Managing product feeds across multiple clients has long been fragmented and time-consuming.

This update brings those workflows into one place — helping agencies monitor account health, fix issues faster and optimize product data more efficiently.

Zoom in. The platform introduces a unified dashboard that gives agencies a bird’s-eye view of all client accounts, including onboarding status and critical alerts.

Portfolio-wide diagnostics allow teams to quickly identify issues across accounts, filter by market or campaign type, and prioritise fixes based on potential impact.

What else. Agencies can also monitor store quality metrics and inventory health, including out-of-stock products, while managing promotions directly within the platform.

On the performance side, new insights help identify high-potential products with low visibility — which can then be tagged and prioritised in ad campaigns.

What to watch:

  • How agencies integrate this into existing workflows
  • Whether this reduces reliance on third-party feed management tools
  • If more advanced optimisation features follow

Bottom line. Google is giving agencies a more scalable way to manage product data — turning Merchant Center into a more strategic performance tool, not just a feed repository.

Google may be about to widen the SEO playing field

SEO has always been a fight for the first page of Google. Every toolchain, audit, and content brief assumes that Google’s ranking systems evaluate a relatively fixed set of roughly 20 to 30 candidate pages before final rankings are determined.

Google has kept that set small because evaluating more pages is computationally expensive.

Google’s VP of Search acknowledged the constraint in federal court. The company’s CEO later confirmed the hardware bottleneck behind it. Google’s research division has now published a technique designed to reduce those costs.

If the candidate set widens, the rules of the last decade stop working.

Why the ranking window is 20 to 30 results wide

Here’s the exchange that matters from Day 24 of United States v. Google in October 2023. DOJ counsel Kenneth Dintzer cross-examining Pandu Nayak, Google vice president of Search, from transcript page 6431:

Q: RankBrain looks at the top 20 or 30 documents and may adjust their initial score. Is that right?
A: That is correct.

Q: And RankBrain is an expensive process to run?
A: It’s certainly more expensive than some of our other ranking components.

Q: So that’s, in part, one of the reasons why you just wait until you’re down to the final 20 or 30 before you run RankBrain?
A: That is correct.

Q: RankBrain is too expensive to run on hundreds or thousands of results?
A: That is correct.

Four consecutive confirmations. The deep-learning component of Google ranking that SEOs have built a decade of theory around is deliberately withheld from the bulk of the index because Google can’t afford to apply it more broadly.

The architecture feeding that reranking window is equally revealing. Earlier in the same testimony, at transcript page 6406, Nayak described classical postings-list retrieval to Judge Mehta: 

  • “[T]he core of the retrieval mechanism is looking at the words in the query, walking down the list, it’s called the postings list… [Y]ou can’t walk the lists all the way to the end because it will be too long.” 

The corpus gets culled to “tens of thousands” of pages before ranking begins, and from that pool only the top 20 to 30 results reach the deep-learning layer.

That runs against how most SEO commentary describes Google. The industry treats RankBrain, BERT, and other deep learning components as the definition of how Google ranks. Under oath, Nayak described them as expensive optional layers applied to a narrow window that classical retrieval has already culled.

Every practice in this industry that treats the top 20 to 30 as the competitive surface assumes it’ll stay that size. The testimony makes clear that the assumption is contingent, not foundational. The number could have been 50 or 500. It landed at 20 to 30 because that’s what Google’s hardware budget would support, and the constraint has held.

The constraint that held the number there is now in public view, and Google has published what comes next.

The wall and the algorithm that climbs it

On April 7, Sundar Pichai sat down with John Collison and Elad Gil on the Cheeky Pint podcast and described a set of hard supply constraints that no amount of CapEx will solve in the short term. The operative line: 

  • “To be very clear, we are supply-constrained. We are seeing the demand across all the surface areas.”

Pichai named five specific bottlenecks: wafer starts at the foundries, memory, power and energy, permitting for data centers, and skilled labor. Of the five, he pressed hardest on memory: 

  • “There is no way that the leading memory companies are going to dramatically improve their capacity.” 

For the 2026 to 2027 horizon, Google can’t buy its way past the memory bottleneck. Higher prices won’t create more capacity.

That matters because nearest-neighbor vector search, the mechanism behind modern semantic retrieval, is memory-bound. The wider the set of candidate pages a system can consider, the more memory it needs. The tight coupling between memory supply and retrieval breadth is what sets the cost boundary Nayak testified about.

On March 24, two weeks before the Cheeky Pint episode, Google Research published a blog post describing a technique called TurboQuant. The corresponding arXiv paper, “TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate,” was authored by researchers at Google Research, Google DeepMind, and NYU.

The headline claims:

  • 4x to 4.5x compression of vector representations with performance “comparable to unquantized models” on the LongBench benchmark.
  • Nearest-neighbor search indexing time reduced to “virtually zero.”
  • Outperforms existing product quantization techniques on recall.

The paper covers two applications: KV-cache compression inside Gemini, and nearest-neighbor search in vector databases. Most coverage has focused on the Gemini application, but the nearest-neighbor-search half is the one that bears on the cost boundary Nayak described.

If indexing is virtually free and memory per vector drops by 4x, the economics that held RankBrain at 20 to 30 candidates no longer apply. A system running on the same hardware could plausibly evaluate a candidate set several times larger.
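To make the memory arithmetic concrete, here is a toy sketch of scalar quantization — not TurboQuant itself, which uses a more sophisticated scheme — showing why storing each float32 coordinate as one uint8 yields a 4x memory reduction while a query's nearest neighbor can survive the compression. All vectors are invented.

```python
import struct

def quantize(vec, lo=-1.0, hi=1.0):
    # Map each coordinate in [lo, hi] onto the 256 uint8 levels.
    return [round((x - lo) / (hi - lo) * 255) for x in vec]

def dequantize(q, lo=-1.0, hi=1.0):
    return [lo + (v / 255) * (hi - lo) for v in q]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# A tiny made-up corpus of document embeddings and one query vector.
corpus = [[0.9, 0.1], [-0.5, 0.7], [0.88, 0.12]]
query = [0.87, 0.15]

# Exact nearest neighbor vs. nearest neighbor over quantized vectors.
exact = min(range(len(corpus)), key=lambda i: sqdist(corpus[i], query))
approx = min(
    range(len(corpus)),
    key=lambda i: sqdist(dequantize(quantize(corpus[i])), query),
)

# float32 = 4 bytes per coordinate, uint8 = 1 byte per coordinate.
compression = struct.calcsize("f") / struct.calcsize("B")
print(exact, approx, compression)
```

The same hardware budget that holds N candidates at float32 precision holds roughly 4N at uint8 precision — which is the economic shift the article describes.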

TurboQuant hasn’t been confirmed as deployed in Google Search. TechCrunch reported at the time of announcement that it remained a lab breakthrough, and the March 2026 core update carried no public commentary from Google linking it to retrieval efficiency or vector quantization. Google has published the algorithm but hasn’t yet deployed it.

Google has been running quantized vector search in production for years through ScaNN. TurboQuant extends that approach rather than introducing it.

The question has shifted from whether the cost boundary can be moved to what SEOs do before it moves.

What to do before the boundary moves

Waiting for SERPs to confirm that retrieval has widened before adjusting is the losing strategy. The competitive surface is shifting. By the time it’s visible in rank-tracking tools, the positioning work of the next cycle is already done.

Three practical shifts are worth making now.

1. Measure whether your pages enter candidate sets

Rank tracking tools measure position within the set. They say nothing about whether a page was eligible for the set in the first place. In classical Search the distinction matters less because the set is narrow. In AI-mediated retrieval, and in a wider RankBrain-style window once it arrives, the distinction is the entire game.

The fastest check is server log analysis. Two classes of retrieval user agents matter. 

  • Search index crawlers build the corpus AI systems pull from. Some examples include:
    • OAI-SearchBot (ChatGPT search).
    • Claude-SearchBot (Claude search).
    • PerplexityBot.
    • Applebot (which also feeds Apple Intelligence). 
  • User-driven agents fetch pages on demand when someone asks an AI model about a topic your page covers: ChatGPT-User, Claude-User, and Perplexity-User.
    • These don’t execute JavaScript, so they’re invisible to GA4 and any analytics tool that depends on client-side tags. If the pages you care about aren’t appearing against either list, they aren’t in the candidate sets those systems construct, and ranking work can’t put them there.
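The log check above can be scripted in a few lines. This is a minimal sketch: the combined-log format, file location, and regex are assumptions you would adapt to your own server.

```python
import re
from collections import Counter

# User agents named in the article: search index crawlers plus
# user-driven agents that fetch pages on demand.
AI_AGENTS = [
    "OAI-SearchBot", "Claude-SearchBot", "PerplexityBot", "Applebot",
    "ChatGPT-User", "Claude-User", "Perplexity-User",
]

# Stand-in for 30 days of access-log lines; in practice, read your log file.
sample_log = """\
1.2.3.4 - - [01/May/2026] "GET /tools/calculator HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; OAI-SearchBot/1.0)"
5.6.7.8 - - [01/May/2026] "GET /blog/post HTTP/1.1" 200 "-" "Mozilla/5.0 Chrome/120"
9.9.9.9 - - [02/May/2026] "GET /tools/calculator HTTP/1.1" 200 "-" "Mozilla/5.0 (compatible; PerplexityBot/1.0)"
"""

hits = Counter()
for line in sample_log.splitlines():
    for agent in AI_AGENTS:
        if agent in line:
            # Capture the requested path for per-page reporting.
            path = re.search(r'"GET (\S+)', line).group(1)
            hits[(agent, path)] += 1

print(dict(hits))
```

If the pages you care about never appear in this tally, they are absent from those systems' candidate sets regardless of how they rank.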

2. Audit pages for retrieval-friendliness separately from ranking-friendliness

Ranking and retrieval reward different properties. The ranking signals you already know include topical authority, link equity, and query-intent match. Retrieval systems look for something else: a clear, self-contained, citable claim that can be extracted and evaluated without reading the whole document. 

A page written for ranking often buries its main claim under context-setting, caveats, and SEO-driven preamble. In a retrieval-ready page, the claim sits in the first 100 words, attached to an entity or statistic a retrieval system can verify, and surrounded by evidence worth citing. Most sites we audit fail this test even when they rank well.

3. Stop treating the top 20 to 30 pages as a fixed target

The window is a hardware constraint that has held for years because no one at Google could afford to widen it. Briefing content against “what ranks in positions 1 to 10 for this query” is briefing against a snapshot of a window that’s narrower than it needs to be because of hardware economics. 

When the economics change, the window will widen. Content built to compete inside a narrow set will face broader competition once it expands. The margin goes to content that was strong enough to enter a wider candidate set from the start.

None of the three requires predicting when TurboQuant or its descendants ship to production. They require acknowledging that retrieval economics is moving and positioning for what lies on the other side of the move, rather than for the current snapshot.

2026 is a year of change for SEO

The test is simple. Pull your server logs for the last 30 days. Count the retrieval user agents that have hit the pages you care about. If the answer is zero, or close to it, no amount of ranking work will move that number.

The competitive surface is shifting under you. The rest follows.

Why Google Ads, GA4 and CRM numbers never match

Are you planning your PPC channel budgets by comparing Google Ads, Meta Ads, GA4, and your CRM/CMS data? Since that data doesn’t align, what do you report on? And how do you make sure you’re optimizing for real impact?

If you think you need better tracking, cleaner UTMs, and maybe a more sophisticated analytics setup, you’re not alone. But more often than not, the issue is something else entirely. Let’s call it the attribution trap.

The main problem is that an entire generation of marketers has been taught to be data-driven. If configured correctly, analytics tools are supposed to tell you what’s working. Just follow the data.

But attribution can quickly become misleading. Without the right framework, marketers end up allocating budgets based on incomplete insights, often with damaging business consequences.

Let’s step back for a moment: Attribution allocates conversion credit to channels. That’s useful. However, attribution can’t tell you which of those conversions your channels actually caused.

Does this sound overly academic? It isn’t. Understanding this distinction is key to fixing the measurement problem. So let’s look at why attribution fails, how to triangulate your existing data, and whether incrementality testing is the right next step for your client.

Why ads, analytics, and CRM numbers never match

Before fixing anything, you need to understand that aligning ad networks, GA4, and your CRM simply isn’t possible. These systems were built for different purposes, use different methodologies, and measure different moments in the customer journey.

Your customer journey as a framework

Say someone clicked a Meta Ads ad, got retargeted on YouTube, then searched for your client’s brand on Google before converting — all within seven days.

Using the default attribution windows, Meta Ads and Google Ads will each report one conversion — two in total. GA4 and your CRM will show only one, most likely crediting Google Ads paid search.

Did Meta Ads invent that “duplicate” conversion? No. Meta Ads has no visibility into Google Ads interactions. How could it know the conversion was supposedly a duplicate?

Conversely, GA4 and your CRM will almost certainly ignore Meta Ads. Should you follow those “insights” and reallocate Meta Ads budget to Google Ads branded search? Probably not.

Structural differences as diagnosis enhancers

Unfortunately, it doesn’t stop there:

  • Attribution date: Ad platforms attribute conversions to the day the click occurred, while GA4 and CRMs typically report on the day the conversion happened. If your customer journey is long, that creates additional discrepancies.
  • Cross-device behavior: A user who clicks a Google Ads ad on mobile, returns on desktop through SEO, and converts will generate a conversion across ad, analytics, and CRM tools. So far, so good. But Google Ads and your CRM will disagree on the source because your CRM won’t have “merged” the mobile and desktop visitors into one user.
  • Privacy restrictions: Ad blockers, browser-level tracking prevention, and cookie consent banners often mean a large share of conversions isn’t measured. Sometimes ad networks fill that gap with modeled conversions, but your CRM still won’t see the actual source.

The latter two issues are fixable through better configuration, especially server-side tagging, offline conversion imports, and consistent UTMs. But the structural divergence remains, so you can’t expect 100% correlation between those tools.
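The attribution-date bullet above is easy to see in miniature. In this invented example, the same three conversions produce different daily reports depending on whether they are grouped by click date (ad platform convention) or conversion date (GA4/CRM convention).

```python
from collections import Counter

# Three conversions with made-up dates. Two of the journeys are long,
# so the click and the conversion land on different days.
conversions = [
    {"click_date": "2026-05-01", "conversion_date": "2026-05-01"},
    {"click_date": "2026-05-01", "conversion_date": "2026-05-09"},
    {"click_date": "2026-05-02", "conversion_date": "2026-05-15"},
]

by_click = Counter(c["click_date"] for c in conversions)        # ad platform view
by_conversion = Counter(c["conversion_date"] for c in conversions)  # GA4/CRM view

# Totals match across the full period, but any daily or weekly report
# disagrees whenever the journey spans the reporting boundary.
print(by_click["2026-05-01"], by_conversion["2026-05-01"])
```

Both systems are "right" — they simply time-stamp the same event differently, which is why daily dashboards drift apart on long customer journeys.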

Your single source of truth: The attribution trap

Once teams accept that the numbers differ, the next move is often choosing a single source of truth — oftentimes GA4 or the CRM — and sticking with it. That’s where the attribution trap closes.

Every tool follows an attribution model. And whatever the model — first-click, last-click, linear, time decay, or data-driven — it’s fundamentally limited.

Every attribution model has blind spots

  • Last-click. The easiest model to understand. Also the easiest to game. It rewards the final touchpoint, typically branded search, and systematically undervalues demand generation.
  • First-click. The opposite. It rewards discovery and ignores the touchpoints that moved someone from interested to converted.
  • Linear and time-decay. They feel more balanced, right? True. But they’re also largely arbitrary. Why should equal credit go to every touchpoint? Why should recency determine value? Customer journeys don’t follow strict rules.
  • Data-driven. This model is often presented as the most sophisticated option. Trust the ad network or analytics platform to identify the attribution model that best reflects reality. In practice, it’s still a black box. If it were truly that reliable, platforms would provide more visibility into how it works.
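The blind spots above can be made concrete by running one made-up journey through the four rule-based models. The touchpoint names and the time-decay half-life are illustrative choices, not platform defaults.

```python
# One hypothetical journey, ordered from first touch to last touch.
touchpoints = ["meta_ads", "youtube", "google_brand_search"]

def last_click(tps):
    return {tp: (1.0 if i == len(tps) - 1 else 0.0) for i, tp in enumerate(tps)}

def first_click(tps):
    return {tp: (1.0 if i == 0 else 0.0) for i, tp in enumerate(tps)}

def linear(tps):
    return {tp: 1.0 / len(tps) for tp in tps}

def time_decay(tps, half_life=2):
    # More recent touchpoints get exponentially more credit.
    weights = [2 ** (i / half_life) for i in range(len(tps))]
    total = sum(weights)
    return {tp: w / total for tp, w in zip(tps, weights)}

for model in (last_click, first_click, linear, time_decay):
    print(model.__name__, model(touchpoints))
```

Same journey, four different budget signals: last-click hands everything to branded search, first-click hands everything to Meta, and the "balanced" models just redistribute the credit by an arbitrary rule.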

What happens depending on your source of truth

Hopefully, you now have a better grasp of the deeper issue. Attribution answers this question: Given that a conversion happened, which touchpoints should get credit? By narrowing your decision-making process to a single tool, you can’t escape the blind spots of whichever attribution model it follows.

If you rely solely on your CRM, you’ll be driven by last-click attribution, meaning you’ll mostly focus on branded search. A few years later, you may realize demand has dried up despite strong results according to your single source of truth.

On the opposite end of the spectrum, relying only on ad platform data means reporting inflated results. Think 2x, 3x, or even 4x more revenue than what the finance team actually reports. You end up increasing marketing budgets while finance tells you to stop — rightfully so.

Again, GA4 sounds like the grown-up in the room. Not quite. That’s because it only measures the on-site portion of the customer journey. What about awareness campaigns designed to generate views or ad recall? They don’t necessarily generate website visits.

Once you realize all these tools have fundamental flaws and blind spots, someone will inevitably suggest incrementality. In other words: Did this campaign cause conversions that otherwise wouldn’t have happened? Let’s look at that for a moment.

Incrementality tests: The perfect solution?

Incrementality measures the results generated because of your campaign — conversions that wouldn’t have existed without the ad. 

Think of two parallel universes: the gap between the world where the ad ran and the world where it didn’t is your incremental impact. Everything else is activity you would’ve captured anyway.

Attribution vs. incrementality

This matters more than it might seem. A significant share of reported campaign conversions — especially in retargeting and branded search — comes from people who would’ve converted regardless. They were already in-market, already familiar with your brand, and already close to a decision.

Showing them an ad and then claiming credit for the conversion is what attribution does. Incrementality testing measures how much of that credit is real.

For budget decisions, that distinction is everything.

A retargeting campaign reporting strong ROAS through attribution might deliver almost no incremental value. Cut it, and conversions barely move. Keep it, and you’re paying for the illusion of performance in that “single source of truth.”

How to test for incrementality

Incrementality testing requires experiments with two groups: one that sees the ad and one that doesn’t. Then you measure the difference in outcomes. Here are the most common approaches:

  • Geo holdout. Divide your market into comparable geographic regions, run campaigns in some while going dark in others, and measure the difference in conversions. It’s practical, reliable, and relatively easy to set up.
  • Audience holdout. Platforms like Google and Meta let you create a holdout group — a percentage of your target audience intentionally excluded from seeing ads. From there, the process mirrors geo holdout testing. One major caveat: It relies on ad platform data. That means you should only compare incrementality across campaigns within the same ad network. Otherwise, it’s pointless.
  • Time-based testing. Pause a campaign for a defined period and measure what happens to overall conversion volume. If performance holds, the campaign likely wasn’t incremental. This approach is high-risk: seasonality, competitors, and external events can blur the results. And if the campaign was incremental, you’ve just hurt performance during the test period.
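The geo holdout arithmetic from the first bullet is straightforward. A minimal sketch with invented regional conversion counts:

```python
# Geo holdout sketch: comparable regions, campaign live in two and dark
# in two. All conversion counts are illustrative, not real data.
test_regions = {"north": 540, "east": 610}     # campaign live
control_regions = {"south": 500, "west": 520}  # campaign dark

test_avg = sum(test_regions.values()) / len(test_regions)
control_avg = sum(control_regions.values()) / len(control_regions)

# Incremental impact = what happened with ads minus what would have
# happened anyway (proxied by the dark regions).
incremental_per_region = test_avg - control_avg
lift = incremental_per_region / control_avg

print(f"incremental conversions per region: {incremental_per_region:.0f}")
print(f"lift: {lift:.1%}")
```

Real tests need regions matched on baseline volume and seasonality, and enough conversions for the difference to clear statistical noise.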

Get the newsletter search marketers rely on.


Is incrementality right for you?

If you’re running larger budgets — think roughly €1 million per month or more — you’re probably already familiar with these concepts. So let’s assume you’re operating at a smaller scale.

In that case, incrementality often isn’t actionable. Reliable tests require meaningful differences between test and control groups, which means large amounts of data. And generating that data requires significant spend.

That said, you can still use shortcuts for likely problem areas, especially branded search. Check the auction insights report to see whether competitors are heavily bidding on your brand. If they are, you probably need branded search campaigns to capture the demand you created. If they aren’t, you can likely pause those campaigns, let SEO capture the demand, and save some ad spend.

Triangulation: The actionable decision-making process

So if attribution is fundamentally flawed and incrementality is mostly reserved for top-tier advertisers, what’s left? Triangulation.

Use the tools you already have while staying aware of their inherent flaws. And educate clients or leadership teams so they don’t blindly follow a “single source of truth.” Here’s what it looks like in practice.

Start with your CRM/CMS

Those systems record actual deals and revenue. Treat every other number as an attempt to explain them.

When Google Ads and Meta Ads report a combined $50K in revenue, while Shopify shows “only” $35K, Shopify reflects reality.

Better yet, it’s the only system that can reliably tell you whether a conversion came from a new or existing customer. Ad platforms don’t make that distinction reliably. That lets you measure nCAC (new customer acquisition cost), anchoring budget decisions around customers who otherwise wouldn’t have found you.

Then superimpose your customer journey onto ad platform results. That $15K gap represents the ad platforms’ interpretation of their contribution. Your job is to understand each campaign in the context of the customer journey and identify where deduplication is needed.
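To make the triangulation arithmetic concrete, here is a minimal sketch using the hypothetical figures above ($50K platform-reported vs. $35K in Shopify). The nCAC inputs are also invented for illustration.

```python
# Sketch of two triangulation metrics: the platform-vs-CRM gap and nCAC.
# All figures are the hypothetical example numbers, not benchmarks.

def dedup_ratio(platform_revenue, crm_revenue):
    """How much ad-platform-reported revenue exceeds CRM reality."""
    return platform_revenue / crm_revenue

def ncac(ad_spend, new_customers):
    """New customer acquisition cost: spend anchored to genuinely new buyers."""
    return ad_spend / new_customers

print(round(dedup_ratio(50_000, 35_000), 2))  # 1.43: platforms over-claim by ~43%
print(ncac(10_000, 80))                        # 125.0: $125 per new customer
```

The point is not the numbers themselves but tracking them over time: a stable over-claim ratio means your measurement framework is behaving consistently.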

For example, if you run both Demand Gen and Meta retargeting campaigns, there’s almost certainly audience overlap, and the reported results overlap with it. That’s when time-based incrementality tests, if available, can help determine which channel performs better.

Improve on triangulation

Attribution windows: Long customer journeys make performance harder to interpret. Try segmenting campaigns around specific stages of the customer journey and adjust attribution windows and micro-conversions accordingly. Smaller attribution windows are often better at driving the right outcomes when configured properly.

Track ratios: The gaps between ad platform conversions and CRM/CMS data should remain relatively stable. Build a simple report that tracks those relationships over time. If the ratios hold, your measurement framework is stable. If they break, investigate — there may be an incrementality insight hiding there.
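A ratio report like this can be a few lines of Python. The monthly figures and the 15% tolerance band below are assumptions, not benchmarks.

```python
# Sketch of a stability check for platform-vs-CRM conversion ratios.
# Flags months where the ratio drifts more than `tolerance` from baseline.

def ratio_alerts(monthly, tolerance=0.15):
    """Return (month, ratio) pairs where platform/CRM drifts from baseline."""
    baseline = monthly[0]["platform"] / monthly[0]["crm"]
    alerts = []
    for row in monthly:
        ratio = row["platform"] / row["crm"]
        if abs(ratio - baseline) / baseline > tolerance:
            alerts.append((row["month"], round(ratio, 2)))
    return alerts

history = [
    {"month": "Jan", "platform": 500, "crm": 350},
    {"month": "Feb", "platform": 520, "crm": 360},
    {"month": "Mar", "platform": 700, "crm": 350},  # ratio breaks: investigate
]
print(ratio_alerts(history))  # [('Mar', 2.0)]
```

A broken ratio is exactly the kind of signal worth investigating; there may be an incrementality insight hiding behind it.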

Triangulation won’t give you a single clean number. But it will give you a defensible, consistent framework for making decisions. That’s far more valuable than false precision.

Welcome to the real world

The teams that waste the most time on measurement are the ones trying to force three systems to produce the same number, or searching for the attribution model that finally feels fair.

The teams that make the best decisions accept that reality is more complex than a single source of truth and build the data skills needed to reflect that complexity.

So make sure your decision-making process is as close to reality as possible — and embrace the question marks.

Why PPC AI agents fail without business data

Every few weeks, someone publishes a piece about AI agents taking over Google Ads, SEO, or social media. Inevitably, the agents look impressive — in theory, at least.

But then you dig deeper to determine what data the agent is working with. Almost always, the answer is the same. These agents typically work with data that’s native to the platform. For Google Ads, that means impressions, clicks, conversions, and return on ad spend (ROAS).

This oversimplified approach is the reason AI agents in PPC often fail at the input layer, before they’ve made a single decision. An agent with access only to platform-native data can’t truly manage your marketing.

Why many PPC agents are just AI assistants

Many tools positioned as PPC agents are simply AI assistants that write ad copy. They handle tasks like:

  • Generating 10 headline variants.
  • Describing a product image for a Responsive Search Ad (RSA).
  • Drafting call to action (CTA) options for a Performance Max (PMax) asset group.

These are genuinely useful tasks that save time. But they aren’t agentic PPC. Instead, they’re generative AI tools with a Google Ads wrapper.

A true PPC agent acts on the ad account. It analyzes performance data to make informed decisions. Then it applies the analysis to implement changes such as budget shifts, bid adjustments, negative keyword additions, campaign structure modifications, and feed-level optimizations. 

How AI agents for PPC inadvertently create a closed loop

Google Ads has limited insight into your business data. So, when you build an AI agent that factors in only Google Ads signals, you end up optimizing a closed loop.

This causes your agent to focus on hitting targets that often have nothing to do with business performance. In some cases, the agent may negatively impact the business while improving its own reported metrics.

For example, Google Ads doesn’t know your average deal size, sales cycle length, or cash position this month.

The ad platform lacks data on which product lines currently have margin worth defending. And it doesn’t know that a campaign generating 40 leads per week is producing zero qualified opportunities or that a campaign with a mediocre ROAS is your most profitable acquisition channel once you factor in customer lifetime value.

Performance Max established a dangerous precedent

This isn’t a new problem. PPC managers have been navigating the tradeoff between ROAS and profit for years. PMax surfaced this problem long before AI agents entered the conversation.

PMax campaigns operate as a black box. You provide Google with your budget, assets, and conversion goal. Then, you let the algorithm decide where to spend.

Advertisers quickly discovered that without margin data, customer relationship management (CRM) signals, or conversion insights, PMax would enthusiastically optimize toward the wrong outcome.

It would chase cheap conversions that probably would have converted anyway, deprioritize high-margin products in favor of high-volume ones, and hit the ROAS target while missing the profit goal.

PPC agents risk misalignment in the absence of business data

AI agents for PPC amplify the speed and scale at which a misaligned optimization loop can do damage.

Before you invest in an AI agent, consider that PMax, built by the largest digital advertising company in the world and trained on more data than any independent agent ever will have, still can’t make good decisions without backend business data.

Your agent is no different. Incorporating a large language model (LLM) doesn’t fix the underlying architecture problem. To optimize PPC campaigns toward business goals, your agent needs relevant business data.

Dig deeper: Agentic PPC: What performance marketing could look like in 2030

3 types of business data for high-performing PPC AI agents

These three types of business data — CRM, product, and operational — are key to improving PPC agent performance.

1. CRM data

The most critical missing layer for lead generation accounts is CRM data. Without it, an agent that targets conversions bids on form fills without any idea what those outcomes are worth.

There are two practical ways to close this gap and connect CRM data.

Offline conversion tracking

Offline conversion tracking (OCT) involves exporting qualified leads or closed deals from your CRM and pushing them back into Google Ads as offline conversion events, ideally with assigned values. 

This gives Smart Bidding a useful signal to work with. With OCT, an AI agent that analyzes conversion data from within Google Ads gets something that reflects business reality rather than just form volume.

OCT is a lighter-touch option that offers a realistic starting point, particularly for agencies managing multiple accounts. It doesn’t require direct CRM integration with the agent. The data flows into Google Ads on a delay (typically 24 to 72 hours), feeding revenue-weighted signals into the system the agent already reads.
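A sketch of the export step might look like the following. The column names loosely follow Google Ads’ click-conversion import template, but treat them as assumptions and verify against the current template before uploading; the deal records are invented.

```python
# Sketch: turn closed/qualified CRM deals into offline-conversion upload rows.
# Deals that never captured a GCLID are skipped, since they can't be
# matched back to the originating ad click.

def deals_to_oct_rows(deals, conversion_name="qualified_lead"):
    rows = []
    for deal in deals:
        if not deal.get("gclid"):
            continue
        rows.append({
            "Google Click ID": deal["gclid"],
            "Conversion Name": conversion_name,
            "Conversion Time": deal["closed_at"],
            "Conversion Value": deal["value"],
            "Conversion Currency": "USD",
        })
    return rows

deals = [
    {"gclid": "Cj0KCQ_example", "closed_at": "2024-05-01 10:00:00", "value": 4200},
    {"gclid": None, "closed_at": "2024-05-02 11:30:00", "value": 900},  # no click ID
]
rows = deals_to_oct_rows(deals)
print(len(rows))  # 1: only the deal with a GCLID is uploadable
```

The GCLID match rate itself is worth monitoring: a low rate means much of your pipeline is invisible to Smart Bidding no matter how clean the export is.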

Direct CRM access

The second path involves giving the agent direct CRM access. This way, it can query deal stages, average contract values by campaign source, win rates by lead type, and time to close by channel.

Direct CRM access unlocks a more intelligent decision layer.

No longer dependent on conversion data imports, the agent can assess pipeline health in real time. For instance, it might detect that a campaign is generating volume but the leads are stalling at proposal stage — and then flag that for human review or adjust targets accordingly.

Compared to OCT, direct CRM access is harder to build and maintain. But it allows an agent to make business-aware decisions rather than using platform data alone.

2. Product margin data

Ecommerce accounts running Shopping or PMax campaigns with a product feed need access to product margin data. Yet these insights almost never exist natively inside Google Ads.

Google Ads knows the ad cost, conversion rate, and reported revenue for everything in the product feed.

But it doesn’t know that product A has a 55% gross margin while product B has a 12% margin after factoring in fulfillment and returns — despite having a higher ROAS. An agent optimizing for ROAS in this environment will naturally bid for product B conversions while starving product A.

That’s why a properly connected Shopping agent should have margin data at the product or category level, fed directly via a supplementary feed or accessible via a backend data connection.

With product margin data, the agent can set differentiated target ROAS values by margin tier, suppress spend on structurally unprofitable SKUs, and prioritize budget toward the lines the business wants to grow.

An agent that can read inventory levels and margin data can also dynamically adjust custom labels, pull products from active campaigns when stock is critically low, and reprioritize when a high-margin product returns to supply.
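Here is a hedged sketch of the margin-tier rules such an agent might apply. The margin thresholds, target ROAS values, and stock cutoff are illustrative assumptions only, not recommendations.

```python
# Sketch: derive per-SKU serve/suppress decisions and target ROAS from
# margin and inventory data supplied alongside the product feed.

def margin_rules(products):
    """High-margin SKUs get a lower tROAS bar; thin-margin SKUs must clear
    a higher one; unprofitable or nearly out-of-stock SKUs are suppressed."""
    plan = {}
    for p in products:
        if p["stock"] < 5 or p["margin"] < 0.10:
            plan[p["sku"]] = {"action": "suppress"}
        elif p["margin"] >= 0.40:
            plan[p["sku"]] = {"action": "serve", "target_roas": 3.0}
        else:
            plan[p["sku"]] = {"action": "serve", "target_roas": 6.0}
    return plan

catalog = [
    {"sku": "A", "margin": 0.55, "stock": 120},  # high margin: lower tROAS bar
    {"sku": "B", "margin": 0.12, "stock": 80},   # thin margin: demand more ROAS
    {"sku": "C", "margin": 0.30, "stock": 2},    # nearly out of stock: suppress
]
print(margin_rules(catalog))
```

In practice the resulting plan would be written back as custom labels in a supplementary feed, which campaign structure and bidding rules can then key off.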

3. Operational data

Operational signals (e.g., fulfillment capacity, seasonal staffing constraints, promotional windows) also affect whether an agent’s decisions hold up in practice. When you aggressively bid into a product line you can’t fulfill, you quickly burn budget and decrease customer satisfaction.

For instance, say your agent scales campaign spend because performance looks strong. But the warehouse team is already at capacity and can’t fulfill the orders in a timely manner. This decision might seem optimal in theory, but in practice, it lacks context.

Operational signals rarely come from a clean API. Instead, they’re stored in enterprise resource planning (ERP) systems, manual exports, and internal dashboards with no standard integrations.

This data can be challenging to extract. And getting the upstream coordination right can prove even more challenging.

After all, an agent is only as organized as the humans that provide the context.

Marketing teams often struggle to coordinate promotions, sales pushes, and seasonal campaigns with other departments, agencies, and external partners. These initiatives happen constantly, with details communicated via email threads, Slack messages, and spreadsheets that no agent will ever see.

Adding an autonomous system to this setup just accelerates the confusion. That’s why for many organizations, the first step is simplifying operational data.

Why PPC agent implementations often skip business data connections

Backend data connections tend to be time-consuming to build and expensive to maintain. They often require syncing with a range of ecommerce, bookkeeping, inventory management, CRM, and ERP platforms.

Plus, every implementation is a custom job that often requires API connections or a data warehouse layer. It also requires buy-in from finance, operations, and sales teams that have their own systems, formats, and priorities.

As a result, agencies and in-house teams that build AI agents for PPC often take the path of least resistance. They connect to the API, pull the standard metrics, and build the automation without providing additional context.

This approach is faster to ship and easier to demonstrate. It also avoids the internal politics of touching finance data.

The result is a layer of automation that looks impressive but provides an incomplete picture of business reality, leading to performance that drifts in the wrong direction.

The current AI agent ecosystem doesn’t reward anyone for solving this problem.

  • Agencies are paid to manage ad accounts, not to build data pipelines into client ERP systems.
  • Tool vendors want you dependent on their connector layer, not on custom integrations you own.
  • In-house teams rarely have the political capital to touch finance or operations systems. And even when they do, the procurement cycle alone can outlast the enthusiasm for the project.

The incentive structure points everyone toward quickly shipping something that looks like an AI agent, rather than building something that works in real business conditions.

What to ask before you build an AI agent for PPC

Before investing time or budget in developing an AI agent for Google Ads, clarify what business data the agent needs to optimize performance.

For lead generation accounts, the answer starts with OCT as a minimum viable data bridge, with direct CRM integration as the ideal architecture worth building toward. For Shopping and ecommerce, it starts with margin data at the SKU or category level and extends to inventory and fulfillment signals. And for all campaign types, operational data is critical.

Creating a functional PPC agent is the easy part. Connecting it to reality is where you have to put in the work and where you extract genuine value.

Dig deeper: Agentic AI and vibe coding: The next evolution of PPC management

The legal consequences of using AI — and the safest way to do it

AI regulations are still in their infancy. Europe has taken the lead with the EU Artificial Intelligence Act. In the United States, nearly 20 states have enacted AI legislation. At the same time, federal policymakers have signaled interest in limiting state-level regulation to keep the overall regulatory environment relatively light, as shown by the recent AI policy wishlist published by the White House.

Regardless of how quickly new regulations emerge, one thing is clear: AI isn’t reinventing the legal landscape; it’s accelerating it. Most AI risks trace back to familiar areas like intellectual property, privacy, contracts, consumer protection, discrimination, and liability when things go wrong.

So instead of thinking of “AI law” as something entirely new, it’s more helpful to look at the core business areas where these familiar risks tend to arise.

The 9 areas where AI risk lives in an organization

The following nine areas are where most AI risk shows up inside a business. You don’t have to be a legal expert to manage these risks; you just have to ask the right question in each area to get to the heart of the matter and address it well.

1. Intellectual property

The one question: Who owns the work, and are we accidentally using someone else’s intellectual property without realizing it?

Ownership is still evolving in the AI context, but we do have some early guidance. The U.S. Copyright Office (USCO) stepped in early, stating that works created purely by AI are not protected. Meaningful human authorship is required. If a human plays a substantial creative role in shaping an AI tool’s output, protection may still be possible. Such determinations are made on a case-by-case basis.

On the patent side, the U.S. Patent and Trademark Office’s (USPTO’s) revised guidelines show a slightly more flexible position, stating that patentability is still possible if a human conceived the idea but used AI to make the idea come to life. That said, these guidelines haven’t been tested in court, so it’s unclear how they will stand up against real-world applications.

At the same time, concerns about infringement continue to grow. Many generative AI tools were trained, at least to some extent, on protected materials, and we’re watching this tension play out in real time. We’ve seen case filing after case filing, including The New York Times lawsuit against OpenAI and Microsoft, which alleges that the AI tools reproduced substantial portions of copyrighted content without permission.

This creates two practical risks:

  • Using AI outputs that unintentionally incorporate protected material.
  • Struggling to prove ownership over work that lacks sufficient human input.

If you’re creating content you want to own, protect, or commercialize, keeping a human meaningfully involved isn’t optional — it’s essential.

2. Advertising and misinformation

The one question: What are we saying, and is it accurate?

AI tools make it dramatically easier to create content at scale, which is a clear upside. The tradeoff, however, is that these tools also make it easier to publish something that’s misleading or incorrect.

We saw in real time how costly such errors can be. During Google Bard’s product demonstration, the tool incorrectly stated that the James Webb Space Telescope had taken the first images of an exoplanet. This one error cost Google $100 billion in market value because it raised serious questions about the credibility of its tool.

AI hallucinations can show up in subtle ways, including incorrect data, fabricated citations, false logic, exaggerated claims, and confident but flawed reasoning. When such content is published under your brand, it becomes your responsibility. And while your company may not have as much at stake financially as Google does, reputationally, one mistake can absolutely cost you.

3. Privacy and personal data

The one question: Are we using people’s personal information in ways that are transparent, lawful, and respectful?

Consumer expectations around data privacy have shifted dramatically — and the law is catching up. Frameworks like the EU’s GDPR, Canada’s PIPEDA, and California’s CCPA have established new standards around how personal data is collected, used, and disclosed.

While marketers have adapted (begrudgingly, to a degree), personal data remains at the core of many campaigns. That data includes cookies, pixels, contact and behavioral data, purchase and payment information, and more. And the risks don’t just arise in collecting the data; they also arise in failing to clearly communicate what you’re doing with it.

Regulators have already shown us how seriously they take these matters. In ChatGPT’s early days, Italy’s data protection authority blocked the app countrywide over concerns about how personal data was being collected and processed under GDPR. The ban was only lifted after OpenAI added more privacy safeguards.

At a practical level, your company needs a clear policy on the collection and handling of private consumer data. You need to know what data you’re collecting, where that data is going, and who is handling it. Your team needs to know which privacy laws apply to your company and its customers, and how to respond if a customer makes a request under those laws. If you can’t quickly and clearly communicate that your company knows all this, now’s the time to start taking action so you limit your exposure.

4. Data protection and trade secrets

The one question: Are we keeping sensitive data, internal knowledge, and company secrets out of places they shouldn’t go?

When we talk about data protection, the focus often stays on customer data. Just as important, however, is company data, especially trade secrets and proprietary information.

AI tools introduce a new layer of risk here, particularly when employees use unapproved tools or free versions that lack privacy and security guardrails. Samsung learned this lesson the hard way. A couple of engineers pasted proprietary source code into ChatGPT while troubleshooting issues. That data was then transmitted to an external system, which would use the data to train its models and potentially deliver replicated source code in future outputs.

This isn’t a case of bad actors; it’s a case of bad workflows and SOPs. If your team is using AI tools without clear guardrails, you risk any team member unintentionally disclosing confidential business information, client data, or proprietary processes or code. And once that information goes out, it’s incredibly difficult to get it back.

5. Employment and workplace fairness

The one question: Could AI be influencing hiring, promotion, or evaluation decisions in ways that create bias or discrimination?

For years, companies have been relying on AI in hiring and HR processes, primarily to improve efficiency. But such efficiency doesn’t guarantee fairness.

Research and real-world examples have proven time and again that these tools bake in the prejudices and biases of their training data. One well-known example comes from Amazon, which scrapped its AI hiring tool in 2018 after finding that it downranked resumes containing indicators that applicants were women. In another case, iTutorGroup was held liable for damages after its AI-powered job-application software exhibited bias against older candidates.

It’s not that using AI in these instances is unacceptable. It’s just that companies using AI should not do so blindly. When it comes to having AI tools partake in decisions about people, your company needs to regularly audit the tools for bias, understand how the tool’s decisions are being made, and always keep a human in the loop.

6. Contracts and customer expectations

The one question: Are our customer-facing agreements clear about how AI is used — and who’s responsible if something goes wrong?

AI-generated content isn’t just “content.” In many cases, it’s part of your customer experience, which carries great weight.

The Air Canada chatbot story offers a good example. A customer relied on information provided by an AI chatbot on the Air Canada website. The chatbot described a bereavement fare policy that didn’t actually exist. Air Canada refused to honor the policy; the customer sued. A Canadian tribunal ruled that the airline was responsible for the chatbot’s statements.

Your website, chatbots, automated content, AI-generated social media content, and so on can all be considered company-created and company-approved content. And if we follow the Canadian tribunal’s logic, if the content lives on your platform, it’s your responsibility.

If customers rely on the content you provide to make decisions, you need to ensure that the content is accurate. You should also take care to clearly address how AI is used on your platform and where responsibility for it sits.

7. Vendor and AI tool risk

The one question: Do we really understand the risks of the AI tools we’re bringing into the business?

Every AI tool you use comes with its own ecosystem: third-party integrations, underlying libraries, and data flows that aren’t always visible on the surface. If you don’t understand that ecosystem, you’re taking on risk. And no company, small or large, is immune.

In 2023, a ChatGPT bug briefly allowed some users to see titles of other users’ chat histories and certain subscription payment details. The issue was traced to a bug in an open-source library used by OpenAI, highlighting how risk can live deep within a tool’s infrastructure.

This risk extends beyond the tools you choose to the vendors you work with.

  • Which tools do your vendors use?
  • How well do they understand the privacy and data protection policies that are in place?
  • Do their practices align with yours?
  • And if a vendor’s AI use leads to a problem, are you liable, or is the vendor liable?

Companies cannot blindly enter new vendor relationships or AI tool subscriptions. Initial assessments are necessary, as are ongoing reviews and, if necessary, corrective actions to remain compliant and limit risk.

8. Product liability and AI decision risk

The one question: If an AI system makes a mistake that affects customers or users, who is responsible?

AI systems redistribute risk in ways we can’t always predict. Zillow Offers is a strong example. The company used automated algorithms to estimate home values and guide purchasing decisions. When those models misjudged market conditions, Zillow purchased homes at inflated prices, ultimately losing hundreds of millions of dollars.

Zillow’s algorithms affected external parties by inflating home prices, but the internal fallout was even harsher. The failure raised hard accountability questions: Who is at fault? And what consequences will the responsible parties face, if any?

These aren’t theoretical questions; they’re governance questions. And organizations that address them upfront find it much easier to respond should a system make a mistake in the future.

9. Regulatory compliance and governance

The one question: Are we keeping up with evolving rules, and can we demonstrate we’re using AI responsibly?

Regulators aren’t waiting for a comprehensive AI law to emerge. Instead, they’re applying existing frameworks where they can, and they’re already taking action.

The U.S. Securities and Exchange Commission (SEC) and Federal Trade Commission (FTC) have brought enforcement actions against companies for failing to bake in proper guardrails around their use of AI. The SEC has charged numerous firms with making misleading statements about their use of AI or falsely advertising their AI capabilities (“AI washing”). The FTC has also issued numerous warnings to companies about overstating or misrepresenting their AI capabilities, as AI claims must be substantiated like any other marketing or advertising claims.

Enforcement is also expanding beyond messaging. The FTC took action against Rite Aid over its facial recognition technology, which produced thousands of false positive alerts and disproportionately impacted people of color.

This action, while important for consideration of disparate harm, signaled a shift in what regulators are looking for. It’s not just about what your AI systems do; it’s about how your organization governs data, vendors, and risk.

When regulators come calling, they won’t just ask what happened. They’ll ask how you govern it. And they’ll want the receipts.

What this likely means for the future

No one can tell you how any of this is actually going to play out. That said, where things stand does help shed light on how the legal landscape will impact your day-to-day business operations in the near future.

More lawsuits, across more industries

Expect litigation to increase as AI use expands. Courts will play a central role in clarifying how existing laws apply to new AI‑driven scenarios, especially where regulations are vague or silent. These cases will help define boundaries, but they will also introduce cost, delay, and uncertainty for businesses caught in the middle.

More formal requirements and internal guardrails

Marketing organizations should plan for growing expectations around disclosures, documentation, and process. This includes clearer customer‑facing policies, internal SOPs governing AI use, bias audits, risk assessments, and incident response plans. In practice, responsible AI use will increasingly look like a compliance discipline, not an ad‑hoc experiment.

A growing need for privacy and data protection expertise

AI tools are evolving quickly, and they also make malicious activity easier and more scalable. That combination raises the stakes. Companies will need dedicated teams or well-defined ownership to monitor developments, maintain policies, and respond to incidents as they arise. Privacy and data protection will be core operational functions, not side considerations.

Ongoing uncertainty, by default

There is no final version of AI regulation on the horizon. Rules will continue to change, sometimes unevenly and unpredictably. The most resilient organizations will be those that plan for what they can, learn from early missteps, and remain flexible enough to adapt as expectations shift.

Introducing the ‘safest legal way to use AI’ playbook

Listen, we know what you’re thinking: boring. Legal guardrails, policies, and governance are not shiny or sexy. Experimentation is. Speed is. Seeing what these tools can do is genuinely exciting. But we care more about you and your company coming out ahead than chasing short‑term wins that create long‑term problems.

This playbook isn’t about slowing innovation. It’s about protecting your team, your work, and your organization so you can use AI confidently, responsibly, and without unnecessary risk getting in the way. With that, let’s dive in.

1. Start with a clear AI use policy

Every organization should have a short, plain-language policy that explains how AI tools can and cannot be used. The policy need not be overly complex, but it should be clear enough that any team member can read it and follow it as intended.

A strong policy usually includes:

  • Which tools are approved for use (and which have been rejected and why).
  • What types of data can be entered into AI systems.
  • When human review is required before publishing AI-generated content.
  • Situations where AI use should be avoided entirely.
  • A prompt library, along with prohibited prompts.

As you build your policy, remember to include an approved tools list, a list of prohibited tools, an acknowledgment form for employees to sign, and disclosure guidance for when AI-generated content is used.

These are the pieces that put policy into action.

2. Separate AI workflows by risk level

Not every AI use case carries the same level of risk, so treating everything the same either slows your team down or leaves your company exposed. A simple way to manage this is to think in terms of a three-lane highway:

  • Green lane: Brainstorming, outlines, tone variations (no sensitive data).
  • Yellow lane: Internal drafts + summaries (allowed data only, reviewed).
  • Red lane: Hiring decisions, regulated info, public claims, legal advice, medical claims (requires legal/privacy review + logging).

This approach allows your team to move more fluidly, slowing down only where necessary based on defined goals. The key term here is “defined.”

You’ll need to clearly define which activities fall under each lane, and what level of review or approval is required before anything moves forward.
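One way to make those definitions operational is a small routing table. The task categories, lane assignments, and review requirements below are placeholder assumptions; each organization must define its own mapping.

```python
# Sketch of the three-lane risk routing described above.
# Unknown tasks deliberately default to the red lane, so anything
# undefined gets the strictest treatment until someone classifies it.

LANES = {
    "green":  {"review": "none", "tasks": {"brainstorm", "outline", "tone_variant"}},
    "yellow": {"review": "peer", "tasks": {"internal_draft", "summary"}},
    "red":    {"review": "legal_privacy + logging",
               "tasks": {"hiring", "public_claim", "legal_advice", "medical_claim"}},
}

def route_task(task):
    """Return (lane, required review) for a task, defaulting to red."""
    for lane, spec in LANES.items():
        if task in spec["tasks"]:
            return lane, spec["review"]
    return "red", LANES["red"]["review"]

print(route_task("summary"))        # ('yellow', 'peer')
print(route_task("unknown_task"))   # defaults to the red lane
```

Defaulting unknown work to red is the safety-critical design choice here: speed is earned by classifying tasks, not assumed.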

3. Use ‘clean inputs’ and ‘clean outputs’

Most AI risk actually starts at the input stage. If sensitive, protected, or proprietary data goes in, you lose control over where it may appear later. That’s why it’s critical to set guardrails in place around both what goes in and what comes out.

Example guardrails include:

  • Avoid pasting proprietary documents into consumer AI tools.
  • Use trusted internal knowledge sources where possible.
  • Require citations or sources for factual AI-generated content.

Clean inputs reduce risk. Clean outputs protect your brand.

4. Review AI vendors and tools carefully

It’s easy to get caught up in the excitement of new AI tools. But the desire to join in often leads organizations to adopt tools before proper evaluation. This is where risk starts to creep in.

Every external tool or vendor you bring into your company also brings its data practices, dependencies, and potential exposures. Make it a policy to ask questions that identify risk before adopting a new tool or hiring a new vendor.

Ask and then document the answers (ideally in your vendor contracts) to questions such as:

  • Does the vendor train their models on customer data?
  • How long is data retained?
  • What security standards are in place (SOC 2, ISO 27001)?
  • What happens if an IP or data breach issue arises?

Remember, risk doesn’t happen in a vacuum or at any single point in time. Review tools and vendors regularly.

5. Bake in human oversight and review

AI is great for accelerating work, but it doesn’t grant a free pass from accountability. At key points in your workflows, there should be clear expectations around when a human needs to step in, review, and take responsibility for the outcome.

This is especially important for:

  • Public-facing content.
  • Customer communications.
  • Regulated or high-stakes decisions.

Keeping a human in the loop isn’t about slowing things down. It’s about ensuring that speed doesn’t come at the cost of accuracy, fairness, or trust.

6. Document your governance

“Radical transparency” is the phrase of the day in many AI, data protection, and privacy conversations. What that really boils down to is simply being able to show your work. 

Because when something goes wrong, or when a regulator comes knocking, you’ll need to be able to clearly show how your organization responsibly uses AI.

To that end, we recommend every organization:

  • Maintain an AI tool inventory.
  • Document risk assessments for higher-risk use cases.
  • Record review steps for public-facing AI outputs.
  • Create an incident response plan for AI-generated errors.

This documentation protects your business. But perhaps more importantly, it provides your team with the clarity and consistency it needs to perform well.

7. Train your team

Once you have the documentation in place, you have to take the next step to ensure your team understands how to apply your policies and procedures. Training should equip your team to identify risks, respond to threats, and otherwise use AI tools in line with your expectations.

At a minimum, your training should ensure your team knows how to:

  • Use approved AI tools effectively.
  • Recognize phishing attempts, deepfakes, and other AI-driven threats.
  • Protect work computers against AI-driven information disclosure attacks.
  • Build AI tools like chatbots to protect against prompt injections.

By bolstering your team’s AI proficiency, you’re setting your company apart from the competition and mitigating significant risk along the way.

This post first appeared on the author’s website and is republished here with permission.

Winning the next era of local visibility: How AI is changing local search by SOCi

AI-powered experiences like Google AI Overviews, Gemini, and Ask Maps are changing how customers discover local businesses. People are asking more detailed, conversational questions, and AI-powered systems can now influence which businesses get surfaced.

Traditional rankings are only part of the visibility equation. Complete, accurate business information — including your Google Business Profile, reviews, photos, and local content — can help customers and AI systems better understand your brand.

Join SOCi and Google for an exclusive webinar, Winning the Next Era of Local Visibility, on June 3.

You will learn:

  • How AI is transforming local search.
  • Which signals may influence AI recommendations.
  • How to improve visibility across Search, Maps, and Gemini.
  • What Ask Maps means for your brand.

AI is already shaping how customers find businesses. Make sure yours is among them.

Register now

Veronika Höller talks about a perfectly set-up but poorly performing campaign

In this episode of PPC Live The Podcast, I sit with Veronika Höller to unpack a real-world PPC mistake — from campaigns that looked perfect on the surface to the deeper issues that were quietly killing performance.

From “perfect” campaigns to zero revenue

Veronika Höller didn’t walk into a broken account. Quite the opposite. Everything looked right — clean structure, strong creatives, solid budgets, conversions coming in. On paper, it was a high-performing PPC setup.

But there was one problem: it wasn’t driving revenue.

That disconnect forced a deeper look beyond surface-level metrics. Because while impressions, clicks and conversions were ticking up, the campaigns weren’t actually delivering business impact — and that’s where things started to unravel.

The real issue: nothing stood out

The turning point didn’t come from inside the account. It came from looking outside it.

During competitor research, Veronika realised the brand sounded just like everyone else. The messaging blended into the market. There was no clear reason for a user to choose them over competitors.

From a user perspective, the ads weren’t wrong — they were just forgettable. And in a crowded category, “good” isn’t enough.

That insight reframed the entire problem: it wasn’t a performance issue. It was a positioning issue.

Starting again — from scratch

Instead of tweaking the existing campaigns, Veronika made a bold call: rebuild everything.

That meant new messaging, new creatives, and a new strategic foundation. One key shift was defining not just ideal customers, but also who they didn’t want to target — using anti-ICPs to sharpen the messaging.

They also introduced stronger localisation, tailored landing pages by market, and platform-specific strategies instead of copying campaigns across channels.

It wasn’t optimisation. It was a reset. And it worked.

The mistake that nearly broke everything

But earlier in her career, Veronika made a far more painful mistake — one that many PPC marketers will recognise.

She applied a recommended target CPA… without increasing the budget.

The result? Campaigns stopped delivering. Performance tanked. And worst of all, it went unnoticed over a weekend.

By Monday, the damage was clear — and the client was not happy.

Owning the mistake — and fixing it fast

There was no hiding from it.

Veronika immediately admitted the mistake, explained what happened, and took responsibility. That honesty changed the outcome. While the client was initially frustrated, the situation de-escalated quickly because there was no deflection — just a clear plan to fix it.

The lesson stuck: don’t blindly apply recommendations, and always understand the full context before making changes.

Why failure is part of getting good

For Veronika, mistakes aren’t something to avoid — they’re essential.

“You can only be good if you fail,” she said.

That mindset now shapes how she works and how she mentors others. Mistakes aren’t a sign of incompetence — they’re a sign that work is being done, tested, and improved.

And more importantly, sharing those mistakes helps others avoid repeating them.

The biggest issue she still sees today

Despite all the changes in PPC, one problem keeps showing up: tracking.

Broken implementations, over-reliance on micro conversions, and poor setup in tools like Google Tag Manager are still common.

In a world of smart bidding and automation, bad data doesn’t just limit performance — it actively misleads it.

Without clean tracking, even the best campaigns will fail.

AI won’t fix average marketing

Veronika is clear on one thing: AI is not a shortcut to better performance.

If you feed it average data, you’ll get average results.

Too many marketers rely on AI tools to analyse accounts without first understanding what needs to be improved. But AI can’t create differentiation — it can only optimise what’s already there.

Standing out still requires human thinking, strategy, and creativity.

The mindset that matters now

The biggest takeaway isn’t tactical — it’s mental.

Don’t aim for perfection. Don’t blindly follow recommendations. And don’t assume tools will do the thinking for you.

Instead, trust your instincts, test your ideas, and accept that mistakes are part of the process.

Because in performance marketing, the real risk isn’t failing — it’s playing it safe and blending in.


Google Ads surfaces Tag Manager controls inside its interface

Google appears to be pulling parts of the Google Tag Manager interface directly into Google Ads — a move that could simplify how advertisers manage tracking and tags.

What’s happening. Advertisers are spotting a new “Manage” option inside the Data Manager section of Google Ads that opens Tag Manager controls without leaving the platform.

The update was first shared by Marthijn Hoiting and Adriaan Dekker, who posted screenshots showing Tag Manager elements embedded within the Google Ads environment.

Why we care. Tag setup and troubleshooting have long been a friction point for advertisers, often requiring multiple tools and technical handoffs.

Bringing Tag Manager functionality into Google Ads could reduce that complexity — especially for smaller teams or advertisers without dedicated dev support.

Zoom in. Inside the Data Manager interface, users can see connected data sources (including Tag Manager) and trigger management actions directly from within Google Ads.

That suggests Google is moving toward a more unified measurement workflow, where tagging, data connections and campaign setup live closer together.

Between the lines. This aligns with Google’s broader push to simplify measurement and improve data accuracy — particularly as privacy changes and signal loss make clean tracking more critical.

It also mirrors recent efforts to make tagging more accessible without heavy technical setup.

What to watch:

  • Whether full Tag Manager functionality gets embedded or remains partial
  • How this impacts workflows between marketers and developers
  • If this becomes the default way to manage tags for advertisers

Bottom line. Google is quietly reducing the gap between campaign setup and measurement — bringing tagging closer to where ads are actually managed.

First seen. This update was shared by Adriaan Dekker on LinkedIn, who credited Data and Analytics specialist Marthijn Hoiting for spotting it.

Google to no longer support FAQ rich results

Google will no longer support FAQ rich results as of May 7, 2026. This means you will no longer see FAQ rich results in Google Search going forward.

Plus, Google Search Console will stop reporting on FAQ structured data.

What Google said. Google posted a note at the top of the FAQ structured data developer documentation saying:

FAQ rich results are no longer appearing in Google Search. We will be dropping the FAQ search appearance, rich result report, and support in the Rich results test in June 2026. To allow time for adjusting your API calls, support for the FAQ rich result in the Search Console API will be removed in August 2026.

Remove code. You can remove the FAQ structured data from your code if you want, but you can also leave it in place. Other search engines may continue to process it and use it for their own purposes.

Why we care. Rich results have helped web pages improve click-through rates and earn more traffic. FAQ rich results may have helped as well. But that support is now ending.

Keep an eye on your pages with FAQ structured data to see if your traffic from Google is impacted or not.

How to run prompt-level SEO experiments for AI search


As LLMs continue to grow, optimizing brand visibility in AI-generated responses is becoming increasingly important. Consumers are turning to these models for answers, recommendations, recipes, vacations, and nearly everything else imaginable.

But what happens if your brand isn’t included in those responses? Can you influence the outcome? And what are some proven ways to improve your brand’s inclusion and visibility?

That’s where structured experimentation comes in. Prompt-level SEO requires more than assumptions or one-off wins. It requires repeatable testing frameworks that help isolate what actually influences LLM responses.

Build prompt-level SEO tests with a hypothesis framework

There are countless recommendations on how to improve your LLM presence. Experimentation is key to discovering what works for your industry and brand.

Hypothesis-driven testing is the way we structure these tests for our brands. It breaks things down in a structured way that can be replicated across tests and situations.

This framework creates a common approach to testing and helps you quickly understand the test and its outputs. The structure consists of three main pieces: if, then, because.

  • If: This part provides the hypothesis: what is the test action?
    • “If we include more detailed product specifications in our content.”
  • Then: What will happen once the “if” section is completed? The outcome.
    • “Then we’ll see our brand get included in more product-specific prompts.”
  • Because: This is why you believe this will occur. What is the theory behind this test?
    • “Because LLMs value detailed and specific information in their prompt responses.”

This framework requires some basic fundamentals that ensure you’re thinking through the test. It also allows you to go back later and validate whether you have tested these specific elements in the past and what the premises, theories, and outcomes were. 

This helps because, as things change, the test elements may still be valid simply because the world shifts — changing the “because” section.

Key considerations before running prompt-level SEO tests

Before we get to the recommendations for testing best practices, here are some considerations when running these tests:

  • Model updates: These models are updated constantly. As some models move from 4.1 to 4.2, it’s time to revisit those results. How did the model change the inputs and outputs?
  • Prompt drift: Have you ever run the exact same prompt twice in a day, or on consecutive days? Often, the results change. It’s therefore important to run each prompt more than once, and on consecutive days, to establish a true baseline. This is no different from personalized search results: brands get comfortable with the variance, and the averages that surface become the benchmark. Prompt testing works much the same way.

Now that you have the framework of the test, let’s think about the core elements of tests that can be used in prompt-specific testing.

How to isolate variables: A methodological approach

Designing a reliable prompt-level SEO experiment requires isolating a single causal variable. This is crucial for confidently attributing changes in LLM response inclusion or position to a specific action.

1. Content changes

When testing content modifications, the variable must be surgical. A common pitfall is changing too much at once (e.g., updating a product description and the page’s schema).

  • Best practice — The single-paragraph swap: Focus on modifying a single, targeted piece of text on the page, such as a product description, FAQ answer, or a specific feature bullet point.
  • Methodology: For true isolation, implement A/B testing with a control page containing the original content and a test page containing the modified content. The prompt should be designed to target the specific information you changed. Measure the brand’s inclusion rate and position-in-response over a defined period (e.g., seven days). Keep in mind that these models move at a variety of speeds; this work, much like SEO, is less microwave and more oven.

2. Structured data

Structured data (schema) provides explicit signals to both search engines and LLM ingestion layers. Testing this requires treating the schema update as the only change to the page.

  • Variable isolation: Test adding new properties (e.g., brand, model, and offer details) without altering the visible HTML text. This isolates the impact of the machine-readable layer.
  • Specific experiment — FAQ schema: A highly effective experiment is adding FAQ schema to pages that already have Q&A sections in their HTML, isolating the effect of the explicit schema markup on LLM ingestion. Our work with brands has demonstrated that adding FAQ schema to pages with Q&A sections makes those sections easier for LLMs to ingest.
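
FAQ schema itself follows schema.org’s standard FAQPage vocabulary. The sketch below generates that markup from Q&A pairs already visible in the page’s HTML; the example question and answer are hypothetical.

```python
import json

def faq_jsonld(qa_pairs):
    """Build schema.org FAQPage JSON-LD from on-page Q&A pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }, indent=2)

# Hypothetical Q&A pair mirroring content already in the visible HTML:
markup = faq_jsonld([
    ("Does the product ship internationally?",
     "Yes, to most countries within 5-7 business days."),
])
```

Because the visible text stays untouched, any change in LLM inclusion can be attributed to the machine-readable layer alone.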

3. Before-and-after prompt testing

This process involves establishing a stringent baseline, making the change, and then repeating the prompt query. This is an essential control method in lieu of true A/B testing on the LLM itself.

Protocol

  • Phase 1 (baseline): Execute a set of 5-10 target prompts daily for seven consecutive days to establish a true average of inclusion and position-in-response, accounting for prompt drift.
    • Action: Deploy the isolated change (e.g., content or schema update).
  • Phase 2 (measurement): Re-run the exact same set of prompts daily for the next seven days.
    • Analysis: Compare the average inclusion rate and position of Phase 1 versus Phase 2. This method is central to initial presence score analyses, such as using three buckets of 25 keywords and prompts for a total of 75 queries.
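
A minimal sketch of the Phase 1 vs. Phase 2 comparison might look like the following. The run data here is invented for illustration; in practice each entry would come from one daily execution of a target prompt.

```python
from statistics import mean

def phase_metrics(runs):
    """Summarize one phase of prompt runs.

    runs: list of dicts like {"included": bool, "position": int or None},
    one per daily execution of a target prompt.
    Returns (inclusion_rate, avg_position_when_included).
    """
    inclusion_rate = mean(1 if r["included"] else 0 for r in runs)
    positions = [r["position"] for r in runs if r["included"]]
    avg_position = mean(positions) if positions else None
    return inclusion_rate, avg_position

# Hypothetical results: baseline runs vs. post-change runs.
baseline = [{"included": True, "position": 3},
            {"included": False, "position": None},
            {"included": True, "position": 4}]
after = [{"included": True, "position": 2},
         {"included": True, "position": 2},
         {"included": True, "position": 3}]
```

Comparing the two averages, rather than any single run, is what absorbs the prompt drift discussed earlier.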


Encouraging reproducible experiments

With the speed of model evolution and the lack of detailed model insights, it’s difficult to ensure reproducibility of results. However, the goal is to move beyond simple “it worked once” findings to build a durable methodology.

Mandatory frameworks

Ensure every test is documented using the “if, then, because” hypothesis structure. This archives the premise, action, and expected outcome, allowing future teams to quickly validate whether a test remains relevant as LLMs evolve.

Technical integrity

  • Version control: Document the specific model and version used for testing (e.g., “Gemini 4.1.2”). This allows for easy comparison when a model update occurs.
  • Prompt libraries: Maintain an organized, time-stamped repository of the exact prompt queries used for baseline and measurement phases. This repository should track inclusion rate, position-in-response, and sentiment/framing for each query.
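
One lightweight way to maintain such a repository is an append-only CSV with a fixed set of columns. This is a sketch under assumed field names (the model label and prompt are hypothetical), not a prescribed format.

```python
import csv
import io
from datetime import datetime, timezone

FIELDS = ["timestamp", "model", "prompt", "included",
          "position_in_response", "sentiment"]

def log_prompt_run(writer, model, prompt, included, position, sentiment):
    """Append one time-stamped run to the shared prompt repository."""
    writer.writerow({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "included": included,
        "position_in_response": position,
        "sentiment": sentiment,
    })

buffer = io.StringIO()  # stands in for the repository file on disk
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
log_prompt_run(writer, "example-model-4.1",  # hypothetical model label
               "best CRM for a small B2B team", True, 2, "positive")
```

Keeping the model label in every row is what makes before/after comparisons possible when a model version changes mid-test.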

Infrastructure consistency

Define the testing environment (e.g., clear browser cache, no login state) and, where possible, use APIs or synthetic testing platforms to remove the impact of personalization and location bias, which is analogous to controlling for personalized search results in traditional SEO.

Moving beyond one-off wins in AI search

The key to prompt-level SEO is rigorous methodology. By adopting a hypothesis-driven approach, surgically isolating variables (content, entities, schema), and establishing strict before-and-after testing protocols, you can confidently move past speculation. 

The path to influencing LLM responses is paved with controlled, documented, and reproducible experiments.

SEO’s new goal in 2026: Recognition, not rankings


For the best part of two decades, we had a clear and accepted mandate: Get your brand to the top of the search results page. The problem was understood, the success metrics were agreed upon, and a supporting ensemble of tools, talent, and tactics was built around solving it.

Rankings were the scoreboard. Position 1 meant visibility. Traffic followed, and a brand’s value seemed to follow it.

It’s this core premise that is now under serious renegotiation, with the search landscape changing more in the past 18 months than in the previous 10 years combined:

  • AI Overviews are absorbing queries that previously generated clicks. 
  • AI/LLM platforms are becoming the first stop for research and decision-making. 
  • Zero-click is no longer a niche concern. It’s increasingly becoming the default.

What’s required now isn’t a new set of tactics. It’s a fundamental change in mindset. This is the SEO problem of 2026. Let me show you why recognition is your new goal and how to earn it.

The world changed faster than we did

SEO has always been a discipline that chases the algorithm.

We reverse-engineered signals, built strategies around them, and then scrambled to adapt when they shifted. Yes, there has always been the argument that if you create content for humans, you typically perform well.

That said, there have been obvious shifts in the types of content that resonate with the algorithm and those that don’t, dictated by changes to the Google algorithm at specific times.  

It was never a perfect or complete system; anyone who worked through (or has since learned about) the Panda and Penguin years will tell you the algorithm was always a shifting target. But the fundamentals remained stable. Aim to rank well, get found, win.

The shift we’re living through now isn’t a Google core update. Instead, we’re experiencing a structural change in how information is surfaced, interacted with, and ultimately trusted. 

AI has fundamentally transformed what searchers see

There’s a mental model baked into traditional SEO: If you’re at the top of the SERP, you’re visible. That model was accurate for a long time. But it isn’t now.

AI and LLM platforms — whether Google’s own generative features or external tools like ChatGPT, Perplexity, or Claude — don’t crawl the SERP and pick from the top results. They build understanding from training data, citation patterns, entity relationships in knowledge graphs, and signals about who is genuinely considered authoritative on a given topic. 

A high-ranking page can be largely invisible to these systems if the brand behind it hasn’t established recognition and preference (a.k.a., the quality of being known, cited, and trusted beyond its own domain).

Dig deeper: Entity-first SEO: How to align content with Google’s Knowledge Graph

Ranking no longer equals visibility

If your instinct is to treat it like another algorithm update, to find the new signals, maybe even game the new system, you are missing how dramatically the search landscape has shifted.

Think about it this way:

  • A brand can rank No. 1 for vital trophy keywords.
  • Their domain authority is strong.
  • Their technical SEO is clean, meeting best practices.
  • Their content team publishes weekly.
  • Their link profile is healthy.

By every traditional metric, this brand would be seen as winning. And yet, when their potential customers ask an AI or LLM platform which brand solutions to consider in their category, this brand doesn’t come up.

When Google’s AI Overview summarizes the landscape, it cites three competitors. When a journalist writes a roundup and asks an LLM to help research it, this brand is invisible.

They rank. Yet it’s as if they don’t exist — because ranking well doesn’t solve for recognition.

Even if the dashboards still report rankings and the tools still track positions one through ten, optimizing for a metric that’s losing its meaning is no longer a viable strategy.

User behavior is also changing

A growing share of search journeys now end before a user ever clicks a result, because they get the information they need without having to click through.

AI Overviews take most of the headlines for this, but there has also been a huge shift in the SERP toward featured snippet expansions. This is further amplified by the adoption of LLM-powered assistants that surface direct answers outside the traditional search environment.

Meanwhile, queries are increasingly conversational, with more and more users asking AI tools questions the way they’d ask a knowledgeable colleague or trusted friend, and they’re expecting thorough, contextualized, and personalized answers rather than a list of blue links.

In this world, the question your SEO strategy needs to answer is no longer “How do I rank?” It’s “Is my brand the preferred option in the conversation?”

And these are absolutely different questions that require different answers.

How AI ‘chooses’ brands to recognize

Think about how an AI model decides what to say when someone asks, “What’s the best CRM for a small B2B team?” It doesn’t run a Google search and summarize the top result. It draws on patterns it sees throughout the knowledge at its disposal:

  • Training data.
  • Industry publications.
  • Reviews.
  • Expert commentary.
  • Forum discussions.
  • Solution comparisons.

The brands that appear in that answer are the ones that have accumulated recognition across the broader landscape, not just the one that ranks.

This is becoming an invisible tax on brands that have focused exclusively on rankings. They may dominate the SERP today. But in the AI-mediated version of that same query, they’re absent.

“Recognition” doesn’t have to be a vague brand concept. It has specific, measurable components. Let’s break them down.

Brand awareness across the search universe

This is the most basic layer. Does your brand name appear, in context, across the search universe?

Not just on your own domain, but in industry publications, analyst reports, user reviews, forum discussions, podcast transcripts, and news coverage. You must also consider where audiences are spending time, because they are developing brand awareness on social-search destinations, too.

AI and LLM platforms are increasingly trained on and drawing from the wider internet when answering questions. Certain domains are massively outperforming others in terms of citations from these platforms, Semrush found. 

If your brand is only present on your own website, you’re harder to find and aren’t in the platforms’ go-to sources.

Topical authority 

This goes beyond keyword rankings. Topical authority means that when a given subject area comes up, your brand is consistently associated with it — not just by Google’s algorithms, but by writers, analysts, content creators, and communities. 

It’s the difference between a site that covers a topic and a brand that owns the conversation in the minds of the people who discuss it.

The signal here isn’t domain authority. It’s authority, trust, and relevance (a.k.a., preference). You are asking, “Does our brand appear alongside the recognized leaders in our space?” and “When people discuss an essential topic, are we in the conversation?”

Dig deeper: Why topical authority isn’t enough for AI search 

Entity clarity

This is the most technical layer and the one most often overlooked. An “entity” in SEO terms is a clearly defined, consistently described “thing.” This could be: 

  • Your company.
  • Your product.
  • A key voice or person.
  • A key topic or conversation.

Put simply, it’s something that knowledge systems can reliably identify and categorize.

If your brand’s description varies across your site, your Wikipedia page (if you have one), your Google Business Profile, your Crunchbase entry, and your LinkedIn page, you create ambiguity for every system.

This is as confusing for your human audience as it is for the AI/LLM layer trying to understand who you are and what you do.

Entity clarity means having a canonical, consistent answer to the questions:

  • What is this company?
  • What does it do?
  • Who does it serve?
  • How is it different?

Brands with strong entity clarity get pulled into knowledge graphs. They get cited. They get recognized.
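
One concrete way to reinforce entity clarity is schema.org Organization markup that carries the canonical description and, via sameAs, points knowledge systems at your other profiles so they can be reconciled into a single record. Everything in this sketch (company name, description, URLs) is a hypothetical placeholder.

```python
import json

# Hypothetical canonical entity record for an imagined "Example Co".
canonical_entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "description": "An AI-powered B2B sales platform for startup teams.",
    "url": "https://www.example.com",
    # sameAs ties this entity to its other profiles so knowledge
    # systems can treat them as one "thing".
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://www.crunchbase.com/organization/example-co",
    ],
}
markup = json.dumps(canonical_entity, indent=2)
```

The same one-paragraph description used here should match the “About” page, business profiles, and social bios, so every system encounters one consistent answer.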

Dig deeper: From links to brand signals: The new SEO authority model


6 things to get you started on the path to recognition

True recognition cannot be built overnight. Instead, your focus is on engineering discovery that develops recognition over time. With that in mind, here are six ways to begin the process:

1. Audit your entity presence

Go and look at how your brand is described in the places that matter: 

  • Google’s Knowledge Panel. 
  • Wikipedia (if applicable). 
  • Wikidata.
  • Social media conversations.
  • Key person/business LinkedIn profiles.
  • Your own “About” page. 

You should be asking whether the messaging is consistent across all of them. If your homepage describes you as “an AI-powered B2B sales platform” while your YouTube content says “CRM software for startups,” you have an entity problem.

2. Fix the inconsistencies

Write a canonical description of your company — one clear, accurate, jargon-free paragraph — and work to get it reflected everywhere. Then mold the content format to the needs of the various platforms you want to show up on.

Alongside this, decide which conversations are most important to your brand and consistently look to own these topics. This is part engineering discovery, but it’s also developing your entity and the topics that contribute to that.

Dig deeper: Why entity authority is the foundation of AI search visibility

3. Create citable assets

There’s a difference between content that ranks on a SERP and content that gets cited.

Ranking content is optimized around keywords, and too often, content has become homogenized in trying to meet the expectations of an algorithm so that you can rank.

Citable content, on the other hand, is original, specific, and useful enough that other people (and AI/LLM platforms) want to reference it. It’s strong enough that your audience feels they would be missing an integral part of the conversation by not featuring or citing it.

Think original research and surveys, clear and ownable frameworks or methodologies, definitions that don’t yet exist clearly in your space, and data that journalists, analysts, creators, and bloggers actually want to quote or build upon.

If the only content on your site is search-optimized blog posts, ask yourself:

  • Is there anything here that a writer at a key niche publication or a researcher at a relevant public body would want to cite? 
  • Is there anything that a content creator would want to build upon or explore further? 

If the answer is no, that’s the gap to close.

4. Build off-site recognition deliberately

This isn’t about traditional link building. It’s about building presence in the right conversations, be that industry publications, podcasts, analyst briefings, conference talks, social content, or community forums.

Every time your brand name appears in a meaningful context outside your own domain, you’re building the recognition signal that AI and LLMs draw on and that resonates with humans in the journey.

Prioritize quality of context over volume. A single, substantive mention in a respected publication is worth more than fifty low-quality directory listings.

5. Optimize for clarity and intent

A keyword is a moment. Intent is a journey. Traditional SEO has trained us to think in snapshots: a user types a query, we rank for it, we win.

But a real buying journey in 2026 looks nothing like that. It might start with a conversational AI query, move through a Reddit thread, surface a YouTube comparison, hit a review platform, and only then arrive at a branded search. The keyword at any single point is almost beside the point.

What matters is whether your brand shows up meaningfully across the full arc of that journey — not just at the moment someone is ready to convert.

Start by mapping intent honestly. 

  • What is someone actually trying to understand when they enter your space? 
  • What does the journey from problem-aware to solution-decided look like for your customer? 

Then audit where your brand is present, absent, or ambiguous across it.

The second part is clarity. As search becomes more conversational and AI-mediated, the brands that get surfaced are those that clearly communicate what they do, who they serve, and why they’re the right choice — consistently across every touchpoint. 

Vague positioning might survive a keyword-match algorithm. It won’t survive a language model deciding whether your brand is the right answer to a specific human question.

Be specific and consistent. Make sure your description holds up whether someone finds you on your own site, in a third-party review, or in an AI-generated summary.

Dig deeper: If you can’t say what problem your brand solves, AI won’t either

6. Start measuring recognition

Your current reporting probably tracks keyword rankings, organic traffic, and backlinks. That should continue, but the weight you give those metrics should shift in favor of the following signals:

  • [Brand] search volume: Are more people searching directly for you?
  • [Brand] + [Intent or Keyword]: Are more people associating you with specific topics?
  • Unlinked mentions: Is your brand name appearing in content that doesn’t link to you?

You can then use the following alongside those signals to better gauge whether your brand is being recognized:

  • Increase in referral traffic.
  • Increase in direct traffic.
  • Increase in quality of traffic (longer sessions, more pages viewed per user, purchases earlier in the journey).

This then lets you look toward the most important SEO metric there is: revenue. That’s especially true if you can assess and report on the development of average order value (AOV) and lifetime value (LTV), or on the specific value of the pages that have seen higher traffic because of an increase in unlinked mentions and/or brand searches.

When you begin to think about these considerations, the most important shift isn’t adding new metrics to your dashboard. It’s changing what you treat as the primary signal.

Branded search volume, specifically branded search paired with intent, is one of the clearest indicators of genuine preference, both in the user journey and in the competitive landscape.

Someone searching for you by name, combined with a buying signal, isn’t discovering you. They’ve already decided you’re worth considering. That’s recognition doing its job.

The goal is to grow that signal deliberately, and then make sure that when someone arrives with that intent, you meet it head on.

A branded intent search that lands on a generic homepage is a wasted moment. These users are telling you exactly what they need. Your job as an SEO in 2026 is to have already built the page, the answer, the experience that closes the gap.

The supporting metrics — unlinked mentions, referral traffic, direct traffic, AOV, LTV — all tell you whether recognition is compounding into something commercially meaningful. 

And that’s ultimately the conversation that needs to happen in every boardroom and strategy session: Recognition isn’t a brand vanity play, it’s a revenue strategy.

Rankings as the primary focus have gotten us this far. Recognition, monitored through the signals identified here, is what takes us further than ever before, and it elevates the SEO’s role and importance to brands.

Get ready for a longer game with a bigger potential to win

Here’s the uncomfortable truth about the recognition-first approach: It’s slower.

You can’t optimize your way to being well-known the same way you can optimize your way to a ranking — and I think that’s what SEOs find most intimidating.

Recognition compounds over time, developed through consistent presence, genuine authoritativeness, relevance, and the slow accumulation of trustworthiness. But that’s also what makes it durable. 

Rankings fluctuate with every algorithm update, and the value of a No. 1 ranking shrinks with each one, as more SERP features and AI/LLM integrations crowd the results page.

Recognition, though, once established, is much harder to displace. To own AI-mediated search in the coming years, spend this period building something that AI systems — and the increasing number of humans utilizing them — genuinely recognize as authoritative.

The No. 1 ranking is a vanity metric if it ends up below the fold, buried under AI/LLM integrations and SERP features — ultimately ensuring nobody knows who you are.

Start building recognition. Your appearance in those top-of-page SERP features and AI/LLM integrations will follow.

Why intent alignment matters more than perfect technical SEO

Improving technical SEO on your site may not be enough to move the needle these days. 

Once a site reaches technical parity with its competitors — the point at which a proper infrastructure no longer gives you an advantage — Google shifts its ranking criteria toward relevance. And relevance is determined by aligning with search intent. 

Let’s talk about how to make your site more relevant.

Why an intent mismatch may be suppressing your site’s performance

An intent mismatch occurs when the copy on a page doesn’t match what the user is expecting to find on it. This happens when pages aren’t relevant to a topic or have mismatched signals.

This generates poor behavior signals — users click through from a SERP, see that the page doesn’t answer their need, and leave. Google interprets these signals as evidence that the page doesn’t satisfy the query. 

This can lead to a decline in rankings, which means fewer users see the page, which means the behavioral signals worsen. It’s a feedback loop that technical SEO alone can’t resolve.


Technical SEO improvements may no longer make a difference

In the early stages of implementing an SEO strategy, the needle can move quickly. If a site is operating below the technical baseline needed for Google to properly evaluate it, simple fixes — fixing crawl errors, resolving duplicate content issues, improving page speed, and adding schema — can produce big gains.

However, after these changes, your site’s technical foundations are comparable to those of your main competitors — you hit a ceiling. Now, Google isn’t ranking pages based on which ones it can access most easily, but on which ones best satisfy the user’s query. 

Your technical infrastructure, or lack thereof, no longer disadvantages you, but now the rules of the ranking game have changed.

This is where intent alignment becomes the primary lever for improvement. 

Signals that reinforce search intent

Elements that have an impact on a page’s intent, and how Google decides whether the intent matches the page, include: 

  • Click-through rate.
  • Engagement signals.
  • Core Web Vitals.
  • Schema type.
  • Internal linking anchor texts.
  • URL structure.

Click-through rate (CTR)

Click-through rate is shaped by your title tag, meta description, URL structure, and schema. It is also a function of how well those elements match the searcher’s intent. 

For example, if your title tag is optimized for a keyword but doesn’t match the user’s query, your CTR will drop. Google treats a low CTR as a relevance signal and adjusts rankings accordingly.

Engagement rate

Time-on-page, scroll depth, and interaction rates can suffer when intent doesn’t align with a page. 

If a user is searching to purchase something but lands on a how-to guide, they may exit that page within seconds. The same can be said of a user looking for an emergency plumber who lands on a page without a phone number. 

Engagement signals feed directly into how Google evaluates a page’s usefulness for a given query.

Core Web Vitals (CWV)

The three Core Web Vitals — Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) — measure loading performance, responsiveness to interaction, and visual stability, respectively.

A transactional page that loads slowly suffers more than a slow-loading informational article. With the transactional page, the user is ready to buy and their patience is minimal, whereas a reader in research mode can tolerate a longer wait. 

CWV thresholds matter everywhere, but their impact on conversion and bounce behavior is greater on high-intent pages. 

Schema type

Schema markup tells Google explicitly what type of content is on a page. Generally:

  • Article/HowTo is informational.
  • Product is transactional.
  • FAQ is informational and commercial.
  • Local business/event is navigational.

When schema type contradicts the content on a page, Google gets a conflicting signal, which can result in a traffic drop.

Internal linking anchor texts

The anchor text of internal links tells Google about the page that’s being linked to, including its intent. 

If a transactional landing page receives internal links with informational anchor text — “learn more about X,” rather than “get a quote for X” or “buy X” —  the intent signal Google receives about that page’s purpose gets diluted.
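As a sketch, with hypothetical URLs, the contrast looks like this:

```html
<!-- Informational anchor: tells Google the target is a learning resource -->
<a href="/guides/boiler-repair/">Learn more about boiler repair</a>

<!-- Transactional anchors: reinforce the landing page's commercial intent -->
<a href="/services/boiler-repair/">Get a quote for boiler repair</a>
<a href="/services/boiler-repair/">Book boiler repair today</a>
```

If a transactional landing page mostly receives the first kind of link, its intent signal leans informational regardless of what’s on the page itself.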

URL structure

Google uses URL patterns to infer page type. 

For example, URLs sitting under /blog/ are treated with informational bias. A product or service page buried under a blog path fights against that structural expectation, regardless of its content, and it may not rank well. 

Cannibalization and canonicalization

If your site has multiple pages targeting the same keyword but with different intents, neither is likely to rank well. They compete against each other and dilute the signal Google receives. 

To fix, use canonical tags to clearly signal which page is the preferred one for a given keyword, consolidate or redirect competing pages where appropriate, and ensure your internal linking reinforces the canonical choice.
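As a minimal sketch, assuming /services/boiler-repair/ is the preferred page for the keyword (both URLs are hypothetical):

```html
<!-- In the <head> of the competing page, point the canonical at the preferred URL -->
<link rel="canonical" href="https://www.example.com/services/boiler-repair/" />
```

Pair this with internal links that point at the preferred page, so the canonical signal and the link graph agree.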


How to fix intent misalignment

Here’s an example of a common intent mismatch and some steps to audit your content and fix it. 

What an intent mismatch looks like

For example, if a user searches for “financial analysis software,” they’re looking to buy software. The keyword phrase is highly transactional. 

But if your site targets this keyword phrase for an informational blog post that explains how a person can complete a financial analysis report themselves, this creates a mismatch.

The user is looking for a product that does the analysis for them, which means they want to compare features, understand pricing, see integrations, or book a demo. 

The keyword phrase should be applied to a dedicated product or landing page that clearly outlines functionality, benefits, use cases, and pricing. This would align more with the user’s needs, resulting in more inquiries, leads, and conversions.

Identify the intent of your pages

To start fixing intent mismatches, compile a list of the top-performing keywords that best describe your business and manually check the Google rankings for each.

This initial research will tell you exactly what type of page and copy you should have for these keywords. For example:

  • Knowledge panels, AI Overviews, and People Also Ask boxes usually appear for informational searches.
  • Paid results usually suggest commercial intent.
  • Shopping feeds suggest a transactional keyword.

Next, add the keywords to a spreadsheet and add a column for intent. Work down the list, noting whether each keyword is informational, commercial, transactional, or navigational. 

You can then create another column that states the type of page that will rank well: 

  • Informational: Blog or resource content.
  • Commercial: Service or landing pages.
  • Transactional: Collection, category, or product pages.
  • Navigational: Brand, specific service, or specific location pages.

See what your competitors are doing

Research your competitors’ pages for the keywords you’re targeting. Analyze and note what they have that your pages don’t have.

They may have:

  • Tables.
  • Comparisons.
  • Calculators.
  • Tools.
  • FAQs.
  • Reviews.
  • Step-by-steps.
  • Images.
  • Videos.
  • And more. 

Consider how to improve your own pages to match theirs. 

Measure your page’s performance based on intent metrics

Once you’ve made changes to your pages, track their performance to see whether they helped. Look at:

  • Clicks and impressions for intent-aligned keywords.
  • Rankings for core target queries.
  • Time on page.
  • Conversion rates, particularly those of previously underperforming pages.

Technical SEO still plays a decisive role

Technical SEO is still important, especially for complex, enterprise-scale sites. Here are some ways that technical SEO work can still move the needle significantly, in ways that content optimization alone can’t.

Crawl budget management

An ecommerce site with thousands of URLs can have its crawl budget consumed by low-value pages before Googlebot ever reaches the high-intent category and product pages you want to rank. 

Cleaning up low-value pages is purely technical work and will ensure your crawl budget goes toward pages that count. 

International site architecture

Technical SEO is crucial when handling international sites that contain pages in multiple languages. 

A keyword that’s purely informational in one market may be transactional in another, reflecting different buyer behaviors and levels of market maturity. Hreflang implementation, regional subdomain or subdirectory structures, and URL strategies all affect whether the right page, with the right intent, reaches the right audience.

Log file analysis

A log file analysis will reveal which pages Google is successfully crawling and how frequently it crawls them. For sites with intent alignment problems, Google often spends a disproportionate amount of attention crawling low-value or misaligned pages, while high-intent pages are visited infrequently. 

For small sites with a clean structure and limited number of URLs, technical SEO can reach parity quickly, so the need to shift to intent alignment happens sooner. For large, complex sites, technical and intent work often need to happen in parallel.


Technical SEO and intent need to work together

Technical SEO is still important today — think of it as a foundation that the rest of the site sits on. Pages that can’t be crawled, indexed, or rendered correctly will be unable to rank, regardless of how well their content matches user intent.

Think of intent alignment as the ceiling — it’s what determines how high a technically sound page can rank, and whether it converts the traffic it earns. 

Every page on a site should have a clearly defined intent, expressed in the right format, with the right content type. Each should also be supported by technical signals, be it schema, URL structure, or relevant anchor text, so that the page’s intent is constantly reinforced. 

Microsoft Ads expands custom columns to include all conversion metrics

Microsoft Advertising is giving advertisers more flexibility in reporting, with custom columns now supporting all conversion metrics — a move aimed at deeper, more tailored campaign analysis.

What’s happening. According to Microsoft’s product liaison Navah Hopkins, advertisers can now build custom metrics using the full range of conversion data available in the platform.

This includes both all conversions and primary conversions, allowing marketers to align reporting more closely with their specific goals.

Why we care. Standard reporting often doesn’t reflect how businesses actually measure success. By expanding custom columns, Microsoft is enabling advertisers to create metrics that better reflect their own performance definitions — whether that’s based on lead quality, revenue or blended conversion actions.

This is especially useful for advertisers managing multiple conversion types or complex funnels.

More control over performance metrics. Advertisers can now create custom columns using ratios and combinations of metrics that matter most to them — such as cost per qualified lead, blended CPA or conversion rate based on primary goals.

Revenue and ROAS calculations will also reflect the values set at the conversion goal level, giving more accurate insights tied to business outcomes.

Between the lines. This update signals a shift toward more flexible, advertiser-defined measurement — rather than relying solely on platform-standard metrics.

It also reflects ongoing demand for better reporting customisation as campaigns become more automated and complex.

What to watch:

  • How advertisers use custom metrics to guide optimisation decisions
  • Whether this leads to more consistent reporting across teams and stakeholders
  • If similar flexibility expands across other areas of the platform

Bottom line. Microsoft is giving advertisers more control over how they measure success — turning custom columns into a more powerful tool for campaign analysis.

AI Max vs DSA: Advertisers question control as Google responds

Advertisers are starting to push back on gaps in AI Max capabilities — particularly around landing page control — as Google continues its shift away from legacy Dynamic Search Ads (DSA).

What’s happening. In a LinkedIn exchange, digital marketing expert Gabriele Benedetti raised concerns about AI Max lacking the same level of URL-based targeting controls that DSA campaigns offered.

His point: DSA allowed advertisers to structure campaigns around website architecture — using categories, URL paths and page rules to guide where traffic lands. That level of control, he argued, is not yet fully replicated in AI Max.

Why we care. For many advertisers — especially those managing large or structured websites — aligning campaign structure with site architecture is key to performance. Losing granular control over landing destinations could impact relevance, user experience and ultimately conversion rates.

This highlights a broader tension in Google Ads today: automation vs control.

Google responds. Google Ads Liaison Ginny Marvin responded, clarifying that AI Max does support several URL-based controls, including:

  • URL rules and combinations
  • Page feeds with custom labels
  • URL inclusions at ad group level and exclusions at campaign level

However, she acknowledged that not all DSA targeting rules are currently supported — such as “page contains” conditions.

Between the lines. Google is not removing control entirely — but it is reshaping how that control works. Instead of granular rule-building, advertisers are being pushed toward structured inputs like page feeds and labels that AI can interpret.

Migration reality check. For advertisers moving from DSA to AI Max, existing URL rules will carry over — but with limitations. Unsupported rules will remain active as read-only, meaning they’ll continue to function but cannot be edited.

That’s a temporary bridge, not a long-term solution.

What’s next. Google says it plans to expand controls further, including bringing content and title-based exclusions to the account level later this year.

This would complement AI Max’s existing “inventory-aware” features, which already exclude out-of-stock items automatically.

Bottom line. AI Max is evolving, but it’s not yet a full replacement for DSA when it comes to granular control — and advertisers are making that clear.

Dig deeper. Full discussion on LinkedIn.

Google AdSense removes browser back button trigger for vignette ads

Google is dropping the back button trigger for AdSense vignette ads on June 15, 2026 due to the new Google search penalty for back button hijacking. Google wrote, “Starting June 15, 2026, the browser back button will no longer trigger a vignette ad.”

What is changing. Google explained that the back button trigger will no longer work after June 15. The “change will apply automatically for all publishers who have opted in to ‘Allow additional triggers for vignette ads’ and will take effect across all supported browsers (including Chrome, Edge, and Opera),” Google added.

A Google spokesperson told me these same updates will apply to Ad Manager as well.

Why the change. Google explained that the Google Search team “recently introduced a new policy against ‘back button hijacking’ — a practice where websites or scripts interfere with a user’s ability to navigate back to their previous page. To ensure our publishers remain compliant with these latest user experience and search quality guidelines, we are removing the trigger that shows a vignette ad when the user navigates backward from the suite of vignette ad triggers.”

This comes after the search community flagged the behavior to Google, and Google is making the right change here. Of course, some publishers won’t be happy, because that trigger may have earned them a lot of money.

Why we care. If you currently have the “Allow additional triggers for vignette ads” setting enabled in AdSense, keep in mind that one of the triggers, the back button trigger, will be disabled on June 15. It may impact your earnings, but it will ensure your site doesn’t get hit by the back button hijacking penalty.

Google adds AI-powered bidding and demand-led budgeting to Search and Shopping

Google is rolling out new AI-driven bidding and budgeting features across Search, Shopping and Performance Max — aimed at helping advertisers capture more demand without increasing manual effort.

What’s happening. Google is expanding its automation stack with updates like Journey-aware Bidding, Smart Bidding Exploration and demand-led budget pacing. Together, these changes are designed to help campaigns respond more dynamically to shifting consumer behaviour.

The focus: letting AI identify and act on opportunities advertisers may not see themselves.

Why we care. These updates aim to capture more conversions without increasing manual work, using AI to find new demand and optimise spend in real time. By improving how bids respond to full-funnel signals and how budgets adapt to peak demand, campaigns can become more efficient and less reliant on constant adjustments.

Ultimately, it’s about getting more value from the same budget while staying competitive in a fast-changing search landscape.

Smarter bidding gets more context. Journey-aware Bidding (beta) allows advertisers to feed more of the customer journey into optimisation, including non-biddable conversions. This gives Google AI a fuller picture of what leads to actual sales — not just initial actions like form fills.

At the same time, Smart Bidding Exploration is expanding beyond Search. Already delivering an average 27% increase in unique converting users, it will soon roll out to Performance Max and Shopping campaigns, helping advertisers tap into less obvious, incremental queries.

Budgets that follow demand. On the budgeting side, Google is building on its campaign total budgets feature, which allows advertisers to set spend across a defined period instead of relying on daily limits.

The next step is demand-led pacing — where AI automatically adjusts spend based on real-time demand, increasing budgets on high-opportunity days and pulling back during slower periods, without exceeding overall limits.

Advertisers using total budgets have already seen a reported 66% reduction in manual budget adjustments.

Why this is a big deal. Budget management has historically been one of the most manual parts of campaign optimisation. By automating pacing, Google is reducing the need for constant monitoring while aiming to improve efficiency.

What to watch:

  • How much control advertisers are willing to give up for automation
  • Whether incremental gains from exploration translate into profitable growth
  • How transparent these systems remain as they scale

Bottom line. Google is directing advertisers to AI to handle both bidding and budgeting — shifting the advertiser role from manual optimisation to guiding inputs and trusting the system to find growth.

5 JavaScript SEO lessons from top ecommerce sites

JavaScript SEO should be a solved problem by now. It isn’t.

Ecommerce sites keep hitting the same crawling, rendering, and indexing issues they were five years ago, now stacked on top of headless builds, AI-powered recommendations, and frameworks that can hide critical content from Google.

These top ecommerce players have figured out how to ship fast, modern JavaScript without sacrificing organic visibility. Here are five lessons worth stealing.

1. Chewy uses JavaScript for UX

Chewy is one of the largest online retailers of pet food and supplies in the U.S. They use Next.js, a React framework for building websites with built-in support for server rendering, static generation, and full-stack development features.

That means you can put important content in the initial HTML response without relying on client-side JavaScript.

Let’s look at a product page like the Benebone Wishbone Chew Toy.

Chewy product page

Navigate to View Page Source and you’ll see the product title, description, pricing, reviews, Q&A, and breadcrumb navigation all present in the initial HTML. Googlebot can access it on the first pass, without waiting for rendering.

Chewy page source

That’s important because if a web crawler like Googlebot encounters issues rendering your page, the important content can still be parsed on the first crawl. With the rise of AI chatbots, some of which still don’t render JavaScript, this has become even more important.
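You can run a version of this check yourself by fetching the raw HTML and grepping for key content, before any JavaScript executes. This is a rough sketch: the helper name and the example strings are illustrative, not an official tool.

```shell
# Check whether a string appears in the raw HTML response, i.e. before
# client-side JavaScript runs. Reads HTML on stdin; $1 is the text to find.
has_ssr_content() {
  grep -qi "$1" && echo "present in initial HTML" || echo "missing from initial HTML"
}

# Usage (hypothetical URL):
# curl -s https://www.example.com/product/benebone-wishbone | has_ssr_content "Benebone Wishbone"
```

Anything this check reports as missing only exists after client-side rendering — exactly the content that crawlers without JavaScript support will never see.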

Not everything needs to be in the initial HTML, though. Without client-side JavaScript, the page would feel static and clunky.

Take the “Compare Similar Items” carousel. It’s loaded client-side, primarily there for shoppers. The internal links could offer some SEO benefit, but they’re not critical for indexing this page the way the title, description, and pricing are.

Chewy similar items carousel

Chewy gets this balance right. The content that matters most for indexing is available on initial load. Client-side JavaScript enhances the experience rather than delivering the content that needs to be indexed.

2. Myprotein makes navigation crawlable

Myprotein sells supplements, nutrition products, and some fitness apparel.

Their site is built on Astro, a content-first framework using Islands Architecture to ship zero JavaScript by default while supporting components from React, Vue, or Svelte.

Myprotein’s navigation is the part worth studying. It’s an important SEO area for ecommerce sites, and they get it right.

Myprotein navigation

View the source on any Myprotein page and the navigation links (categories, dropdown items, and footer links) are all in the initial HTML response. Astro makes this possible through its island architecture.

Myprotein source code

The navigation ships as an interactive island, meaning Astro will hydrate it with JavaScript as soon as the browser is ready. But JavaScript makes the flyout menus interactive. It doesn’t create them.

These links are also proper <a> elements with href attributes, which is what crawlers like Googlebot need to discover and follow links. Avoid using JavaScript click handlers to simulate navigation, such as:

<div onclick="navigate(item.slug)">Clear Protein Drinks</div>

A crawler won’t follow that. Use a standard anchor element instead:

<a href="https://us.myprotein.com/c/nutrition/protein/clear-protein-drinks/">Clear Protein Drinks</a>

Not every site gets this right. When navigation depends entirely on client-side rendering, there’s a window where it’s invisible or empty.

Googlebot processes JavaScript in a separate rendering pass that can lag behind the initial crawl, which can mean delayed discovery of internal links critical for crawl efficiency and link equity distribution.

3. Harrods embeds structured data in the HTML

Harrods is a luxury department store selling fashion, beauty, and homeware.

Their site is built on Nuxt, a Vue framework for building websites with built-in routing, server rendering, and static generation, plus an opinionated project structure.

Their structured data is delivered in the initial HTML response. View the source on any product page and you’ll find structured data inside a <script type="application/ld+json"> element. The Product schema includes the product name, images, description, brand, and an Offer with price, currency, availability, and seller.

Harrods page source

JSON-LD is the format Google recommends for structured data, and because it’s in the HTML response, Google can parse it on the first crawl pass without needing to render the page.

On JavaScript-powered sites, structured data can easily become a client-side dependency. If a framework fetches product data in the browser and generates JSON-LD from the response, that structured data only exists after JavaScript executes. The same is true for structured data injected through Google Tag Manager.

If markup is only added after the page loads, Google has to render the page to find it. Google has noted that dynamically generated Product markup can make Shopping crawls less frequent and less reliable, which matters when prices and availability change often.

By serving that structured data in the HTML directly, Harrods avoids this risk entirely.
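As a rough sketch of what that looks like in the served HTML (the product details here are invented for illustration, not Harrods’ actual markup):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Silk Scarf",
  "image": "https://www.example.com/images/silk-scarf.jpg",
  "description": "A hand-finished silk scarf.",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "offers": {
    "@type": "Offer",
    "price": "150.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock",
    "seller": { "@type": "Organization", "name": "Example Retailer" }
  }
}
</script>
```

Because this block arrives in the initial response, Google can read the price and availability on the first crawl pass, with no rendering step in between.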


4. Under Armour handles faceted navigation with JavaScript

Under Armour is a global sportswear brand selling athletic apparel, footwear, and accessories. Their site is built on Next.js, the same React framework Chewy uses.

A good place to see their JavaScript SEO in action is on category pages, where filters need to feel fast and interactive for shoppers, and be crawler-friendly.

Let’s look at the men’s shoes category page. When you apply a filter, say, selecting size 10, the product grid updates instantly without a full page reload. That’s client-side JavaScript updating the grid.

Under Armour product page

But the URL updates too. After selecting the filter, the URL becomes:

  • https://www.underarmour.com/en-us/c/mens/shoes/?prefn1=size&prefv1=10

A shopper can copy that URL, send it to a friend, or bookmark it, and land right back on the same filtered view.

Notice what the URL isn’t:

  • Not a hash fragment (#size=10), which doesn’t get sent to the server and is ignored by Google.
  • Not a mess of bracketed query strings (?filters[0][size]=10).
  • Not a dynamic route artifact like /shoes/[category]/ leaking into the live URL.

It’s a clean, readable query string with named parameters.

Under Armour is using the Next.js router to update the URL as filters change. Under the hood, it wraps the browser’s History API and uses the pushState() method to update the address bar without a reload.

When someone visits that same URL directly, the page loads with the filter already applied.
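The underlying pattern can be sketched in a few lines. This is illustrative, not Under Armour’s actual code; only the prefn/prefv parameter names are taken from the live URL above.

```javascript
// Build a clean, shareable query string for the selected filters.
// (Illustrative sketch, not Under Armour's implementation.)
function buildFilterUrl(basePath, filters) {
  const params = new URLSearchParams();
  filters.forEach(({ name, value }, i) => {
    // Named parameter pairs, e.g. prefn1=size&prefv1=10
    params.set(`prefn${i + 1}`, name);
    params.set(`prefv${i + 1}`, value);
  });
  return `${basePath}?${params.toString()}`;
}

const filteredUrl = buildFilterUrl("/en-us/c/mens/shoes/", [
  { name: "size", value: "10" },
]);
// In the browser, the router would then update the address bar without a reload:
// history.pushState({}, "", filteredUrl);
```

The key point is that the result is a real URL the server can also respond to directly, not just ephemeral client-side state.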

5. Manors Golf loads third-party scripts

Manors Golf sells golf apparel. Their site runs on Hydrogen, Shopify’s React-based framework for headless storefronts.

Hydrogen defers its own application scripts automatically since they load as ES modules. However, third-party scripts are the developer’s responsibility. On an ecommerce site, that can be a long list: reviews, chat, personalization, pixels, recommendations, payment scripts.

That matters for SEO in two ways. Render-blocking scripts hurt Core Web Vitals, most directly Largest Contentful Paint (LCP). They also give Googlebot more work to render the page, so it may get processed less reliably.

An external script (<script src="...">) without async or defer blocks HTML parsing while it is fetched and executed. Async fetches in the background and executes as soon as it’s ready. Defer also fetches in the background but waits to execute until parsing finishes, running in document order.
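Side by side, the three behaviors look like this (example.com stands in for any third-party domain):

```html
<!-- Blocks parsing until the script is fetched and executed -->
<script src="https://example.com/widget.js"></script>

<!-- Fetches in parallel; executes as soon as it's ready -->
<script async src="https://example.com/widget.js"></script>

<!-- Fetches in parallel; executes after parsing finishes, in document order -->
<script defer src="https://example.com/widget.js"></script>
```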

Manors loads external scripts from 12 third-party domains, including Klaviyo, TikTok, Microsoft Clarity, and Gorgias.

A look at the Elements panel shows them all loading with async:

Manors async attribute

By loading third-party scripts with async, Manors keeps them from blocking the initial render. That protects LCP and reduces the work Google’s Web Rendering Service (WRS) has to do.

The balance between interactivity and crawlability

The issue isn’t that you’re using JavaScript. It’s what you’re using it for.

Googlebot can process JavaScript, but it’s slower and less reliable than reading HTML. The more your core content, structure, and navigation depend on JavaScript, the more room there is for things to go wrong.

The sites in this article all use JavaScript to enhance the experience rather than deliver it. Do that, and you won’t have to choose between a good user experience and good SEO.

8 GEO metrics to track in 2026

Search visibility no longer starts and ends with rankings. AI-driven search has changed where discovery happens — across Google, ChatGPT, Perplexity, and beyond.

Generative engine optimization (GEO) is how brands adapt, shaping how they’re retrieved and represented inside those systems.

Traditional SEO metrics miss a growing share of that visibility. Pages are now summarized, excerpted, and cited in environments where clicks are optional, and attribution is fragmented. When an AI-generated summary appears, users click traditional search results far less often — in one analysis, just 8% of the time.

That creates a measurement gap. Assessing this gap is where GEO metrics come in.

What visibility means in generative search

GEO focuses on whether AI systems can find, understand, and select your content when generating answers. In generative search, visibility means more than being indexed or ranked. Your content must be used — cited, summarized, or incorporated — in AI responses.

GEO builds on SEO and AEO, shifting the focus from where content ranks to how clearly it can be interpreted and trusted in context.

In practice, that means optimizing for:

  • Extractability: Can this be easily summarized?
  • Credibility: Is this a trustworthy source to cite?
  • Relevance: Does this directly resolve the query?

That’s where GEO metrics become useful.

8 core GEO metrics brands need to track in 2026

GEO performance shows up across a distinct set of signals that reflect presence, usage, and downstream impact.

1. AI citation frequency

AI citation frequency measures how often your brand, website, content, or experts are cited in AI-generated answers.

This is one of the clearest GEO metrics because it shows whether generative systems consider your content useful enough to reference.

Track citation frequency across:

  • Google AI Overviews.
  • Google AI Mode.
  • Perplexity.
  • ChatGPT search.
  • Gemini.
  • Copilot.
  • Claude, where source visibility is available.
  • Industry-specific AI tools and assistants.

Citation frequency should be tracked at the topic level, not only the domain level. A SaaS company, for example, may want to know whether it’s cited for “customer onboarding software,” “product adoption metrics,” and “best tools for reducing churn” separately.

The goal is repeatable citation across high-value topics.
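To make topic-level tracking concrete, here is a minimal sketch of tallying citations per topic from a hand-collected audit log. The records and field names are hypothetical; in practice they would come from your prompt-testing spreadsheet or a tracking tool's export:

```python
from collections import Counter

# Hypothetical audit log: one record per AI answer reviewed,
# noting whether our content was cited for that topic.
answers = [
    {"topic": "customer onboarding software", "platform": "Perplexity", "cited": True},
    {"topic": "customer onboarding software", "platform": "AI Overviews", "cited": False},
    {"topic": "product adoption metrics", "platform": "Perplexity", "cited": True},
    {"topic": "product adoption metrics", "platform": "ChatGPT search", "cited": True},
]

# Citation frequency per topic, not just per domain
freq = Counter(a["topic"] for a in answers if a["cited"])
print(freq.most_common())
# [('product adoption metrics', 2), ('customer onboarding software', 1)]
```

The same tally can be grouped by platform to see where repeatable citation is and isn't happening.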

2. Share of Model Voice (SOMV)

Share of Model Voice measures how often your brand appears in AI-generated answers compared with competitors.

Traditional share of voice tells you how visible a brand is across search, media, or advertising. Share of Model Voice applies that idea to AI responses.

A simple way to calculate it:

  • SOMV = Brand appearances across a prompt set ÷ Total answers generated for that prompt set

For example:

  • You analyze 100 relevant prompts.
  • Your brand appears in 28 of the resulting AI-generated answers.
  • Your Share of Model Voice is 28%.
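The arithmetic is trivial to script once you have appearance counts. A minimal sketch, mirroring the worked example above:

```python
def share_of_model_voice(appearances: int, total_answers: int) -> float:
    """SOMV = brand appearances / total answers generated for a prompt set."""
    if total_answers == 0:
        raise ValueError("prompt set produced no answers")
    return appearances / total_answers

# The worked example: 28 appearances across 100 analyzed prompts
somv = share_of_model_voice(28, 100)
print(f"{somv:.0%}")  # 28%
```

Running the same prompt set for each competitor turns this into a relative-presence comparison.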

This metric is especially useful for competitive categories because AI answers often compress the consideration set. A user doesn’t see 10 blue links. They may see three recommended vendors, two cited articles, or one synthesized answer.

That’s why relative presence matters more than absolute visibility.

3. Answer inclusion rate

Answer inclusion rate measures how often your owned content is used to generate an AI answer, regardless of whether the user clicks.

This differs from citation frequency. A brand may be mentioned without its content being cited. And a page may be used as supporting material even when the brand is not the central recommendation.

Track inclusion across informational, comparison, and decision-stage prompts.

For example, a B2B SaaS company in the SEO or analytics space might track prompts like:

  • Informational: “What is generative engine optimization?”
  • Exploratory: “How should brands measure AI search visibility?”
  • Comparison: “SEO vs GEO vs AEO”
  • Category-level: “Best GEO tools for B2B SaaS”
  • Decision-stage: “How do I evaluate GEO platforms?”

This metric helps identify which content formats are easiest for AI systems to retrieve and summarize. 

In many cases, clear definitions, comparison tables, statistics pages, glossaries, and answer-first explainers perform better than broad thought leadership pages because they’re easier to extract and reuse.

4. Entity recognition and authority

Entity recognition measures how well AI systems understand who your brand is, what it does, and what topics it should be associated with.

This matters because generative systems don’t only match keywords. They interpret entities, relationships, topical authority, and corroborating signals.

Strong entity recognition means AI systems can accurately connect your brand to:

  • Your company name.
  • Products and services.
  • Founders or executives.
  • Authors and subject-matter experts.
  • Industry categories.
  • Locations.
  • Use cases.
  • Awards, partnerships, and third-party mentions.
  • Knowledge graph data.
  • Structured data.

Google’s guidance for AI features emphasizes that the same fundamentals still apply: make content accessible, maintain a strong page experience, and use structured data to help systems interpret what’s on the page.

In practice, inconsistencies across these signals make it harder for AI systems to reliably connect your brand to the right topics.

5. Sentiment in AI responses

Sentiment measures how AI systems describe your brand.

Tracking mentions isn’t enough. Brands also need to know whether AI-generated responses frame them as credible, outdated, expensive, risky, innovative, niche, enterprise-grade, beginner-friendly, or anything else.

You can monitor:

  • Positive, neutral, and negative descriptions.
  • Recurring adjectives or claims.
  • Incorrect comparisons.
  • Outdated product details.
  • Missing differentiators.
  • Reputation issues.
  • Hallucinated features or limitations.

This is where GEO overlaps with PR and brand management. AI-generated answers can shape perception before the user ever reaches your site.

6. Prompt coverage

Prompt coverage measures how many relevant prompts surface your brand. This is the GEO version of keyword coverage, but prompts are more conversational, specific, and intent-rich.

A strong prompt set should include:

  • Informational prompts.
  • Comparison prompts.
  • “Best” and “top” prompts.
  • Problem-aware prompts.
  • Solution-aware prompts.
  • Buyer-stage prompts.
  • Role-specific prompts.
  • Use-case prompts.
  • Local or industry-specific prompts.
  • Follow-up prompts.

For a cybersecurity company, “best cybersecurity platforms” is only part of the picture. Relevant prompts also look like:

  • “How do mid-market companies reduce phishing risk?”
  • “What tools help security teams manage vendor risk?”
  • “Compare managed detection and response providers.”
  • “What should a CISO look for in an incident response partner?”

Prompt coverage shows whether your brand is visible across the ways people actually ask AI systems for help.

7. Content retrieval success rate

Content retrieval success rate measures how often AI systems pull from your owned content when answering relevant prompts. This is where it gets technical.

If your content isn’t crawlable, structured, fresh, or easy to parse, it may struggle to appear in generative outputs, regardless of subject-matter strength.

You should evaluate:

  • Crawlability.
  • Indexability.
  • Internal linking.
  • Page speed.
  • Schema markup.
  • Clear headings.
  • Answer-first formatting.
  • Author attribution.
  • Publication and update dates.
  • Canonical handling.
  • Robots.txt and AI crawler access rules.
  • Content freshness.
  • Source clarity.

Gaps in any of these areas reduce the likelihood that your content is retrieved and used — even when it’s the best answer available.

8. Conversion influence after AI interaction

Conversion influence measures how visibility in AI-generated outputs contributes to downstream business outcomes. That connection isn’t always direct — and it’s rarely cleanly attributed.

A user may see your brand in an AI answer, search your name later, visit directly, ask a colleague, or convert through a paid retargeting path.

Still, brands should track directional signals:

  • AI referral traffic.
  • Assisted conversions.
  • Branded search lift.
  • Direct traffic changes.
  • Demo or lead quality from AI-referred sessions.
  • Returning visitors after AI visibility spikes.
  • Sales conversations mentioning ChatGPT, Perplexity, Gemini, or AI Overviews.
  • Pipeline influenced by AI-discovery queries.

According to Ahrefs, AI search visitors convert at a 23x higher rate than traditional organic search visitors, even though AI traffic volume is much smaller.

That’s the measurement nuance: AI search may drive fewer sessions, but the sessions that do occur can be higher-intent.

Tools and methods for tracking GEO metrics

GEO measurement is still in its early stages, and no single platform captures the full picture. Most brands will need a mix of automated tools, manual audits, analytics configuration, and competitive testing.

Emerging GEO analytics platforms

A growing set of tools — from established SEO platforms to GEO-native products — now track how brands appear across AI-driven search experiences.

For example:

  • Semrush AI Toolkit surfaces visibility trends tied to AI-driven search.
  • SE Ranking AI Visibility Tracker monitors brand presence across AI-generated outputs.
  • Profound focuses on AI citation frequency, sentiment, and competitive visibility.
  • Peec AI tracks brand presence and representation across AI systems.

The category is still evolving, but early tools give brands a way to move from assumptions to actual visibility data.

Prompt testing frameworks

Manual prompt testing is still useful, especially when building a baseline. Create a controlled prompt set by topic, funnel stage, persona, and geography. 

Run those prompts consistently across the same AI platforms. Capture:

  • Whether your brand appears.
  • Which competitors appear.
  • Which sources are cited.
  • How your brand is described.
  • Whether the answer is accurate.
  • Whether your owned content is cited.
  • Whether the answer changes across repeated tests.

Because AI answers can vary, single-prompt testing isn’t enough. Track patterns over time.

Analytics and logs

Use GA4, server logs, CRM fields, and referral data to identify traffic and conversions from AI platforms — particularly shifts in direct, branded, and assisted conversions.

Track known AI referrers, including ChatGPT, Perplexity, Gemini, Copilot, Claude, and other AI tools, where possible. Treat this as directional rather than complete, because many AI-influenced journeys show up as direct, branded search, or otherwise unattributed traffic.
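One way to make that directional tracking repeatable is a small referrer classifier run over exported sessions or server logs. A minimal sketch; the hostname fragments below are illustrative assumptions, since actual referrer values vary by platform and change over time, so verify them against your own logs:

```python
from urllib.parse import urlparse

# Illustrative hostname fragments per platform (assumption, not a canonical list)
AI_REFERRERS = {
    "chatgpt": ("chat.openai.com", "chatgpt.com"),
    "perplexity": ("perplexity.ai",),
    "gemini": ("gemini.google.com",),
    "copilot": ("copilot.microsoft.com",),
}

def classify_referrer(referrer_url: str) -> str:
    """Bucket a session's referrer into an AI platform, or 'other'."""
    host = urlparse(referrer_url).netloc.lower()
    for platform, hosts in AI_REFERRERS.items():
        if any(host == h or host.endswith("." + h) for h in hosts):
            return platform
    return "other"

print(classify_referrer("https://www.perplexity.ai/search?q=geo+tools"))  # perplexity
print(classify_referrer("https://www.google.com/"))  # other
```

Feeding classified sessions into a custom dimension or CRM field makes the directional trend visible over time.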

Search Console and traditional SEO tools

Search Console still matters, even as clicks decline.

Impressions show whether content is being surfaced, while query data highlights where AI Overviews are absorbing demand, where branded search is increasing, and where content may need restructuring for answer inclusion.

Traditional SEO tools remain useful for technical health, content gaps, backlinks, keyword demand, and competitive research. GEO measurement builds on that foundation, tracking how content is surfaced in AI search.

How to build a GEO measurement framework

Start with a baseline. Choose 5-10 core topics you want AI systems to associate with your brand. For each, map prompts across the user journey. Then build a dashboard across four categories — and assign each to a clear action:

Visibility: Where do we show up?

  • AI citation frequency.
  • Share of Model Voice.
  • Prompt coverage.
  • Answer inclusion rate.

Accuracy and reputation: How are we represented?

  • Sentiment in AI responses.
  • Message consistency.
  • Misinformation or hallucination rate.
  • Competitive framing.

Technical and content: Can our content be used?

  • Content retrieval success rate.
  • Schema coverage.
  • Crawlability.
  • Freshness.
  • Entity consistency.

Business impact: Does it drive outcomes?

  • AI referral traffic.
  • Assisted conversions.
  • Branded search lift.
  • Direct traffic movement.
  • Lead quality.
  • Pipeline influenced by AI discovery.

Review these metrics together, not in isolation. Use them to decide what to update, expand, or deprioritize. Finally, connect the framework to business goals.

A publisher may prioritize citations and source inclusion. A B2B SaaS company may focus on category prompts and comparison visibility. An ecommerce brand may look at product recommendations, review sentiment, and visibility across discovery surfaces.

There’s no universal GEO dashboard — only the one that helps your team decide what to do next.

Turning GEO metrics into action

GEO metrics are only useful if they change what teams do next. Define the topics you want to be known for, track how those topics show up across AI systems, and use that data to decide what to update, expand, or deprioritize.

Treat visibility as a feedback loop. If your brand isn’t appearing, refine the content. If it’s appearing inconsistently, strengthen the signals around it. If it’s showing up but misrepresented, correct the source.

Over time, the advantage goes to teams that act on these signals consistently — not just the ones that track them.

How to use Google and LLM insights to improve international SEO

Many companies expand internationally by duplicating their U.S. website, translating the language, and keeping the same architecture, navigation, and content structure across markets.

Then performance drops. International versions may convert at half the rate of the original site or struggle to gain traction altogether.

The issue usually isn’t translation. It’s assuming users in different markets search, navigate, and evaluate information the same way.

Using insights from Google SERPs and LLMs, here’s how to localize website architecture and navigation for international SEO.

How to use Google to localize content

Google’s SERP interface is localized for individual markets. Each element — menu order, topic filters, questions, tags, AI structures — reflects learned user behavior.

For example, if you search for a topic or product in the UK and Italy, you'll get different interfaces: the Italian SERP might show two shopping options, while the UK SERP puts images in position two. These aren't arbitrary — they're algorithmic predictions based on observed behavior in each specific region.

Google has already done the user research. You just have to extract the signals systematically. Every SERP element is optimized through behavioral data, for example:

  • Menu order reflects click-through analysis across millions of users.
  • Topic filters represent observed refinement patterns.
  • People Also Ask (PAA) boxes aggregate real user confusion points.
  • Image tags cluster search behavior patterns.
  • AI Overviews encode entity relationship patterns that a model has learned.

9 signals to create a localization framework

These nine SERP interface elements contain localization intelligence. Use them to build your framework.

  • Menu order/filters reveal primary and secondary search intent. They are localized and dynamic — their order changes with seasonality, shifts in intent, content behaviors, and breaking news.
  • Topic filters show hierarchical refinement patterns (2-3 levels deep). They are influenced by trends and seasonality, and Google mixes classic search topics with shopping filters.
  • People Also Ask (PAA): Three levels are enough for discovering patterns and recurring entities through clustering.
  • People Also Search For (PASF) are similar to PAAs but are related searches showing journey connections. Here, a three-level depth is sufficient to obtain meaningful data.
  • Image search tags for entity search: Each tag is also an entity related to the searched entity, or an attribute of that entity. They place entity associations in a visual search context.
  • AI Overview fan-outs are AI-predicted follow-up questions from Google.
  • AI Mode fan-outs are conversational search path predictions, ideal for exploring entities and triplets.
  • Google web guides are pillar pages that break down a topic into subtopics. They're ideal for understanding how Google reasons about a subject.
  • Multi-LLM comparative analyses examine how ChatGPT, Gemini, and Perplexity structure their answers. LLM answers help identify both the universal semantic core shared across regions and the region-specific entities that emerge when prompted with local context. This reveals which entities matter globally versus locally.

Table of nine localization framework signals

1. Search Menu Order
  • What: Reveals primary and secondary search intent.
  • Why: Menu position shows how Google classifies query intent per market.
  • How to (manual): Open an incognito browser, set location to the target city, search the query, record visible menu items in exact order.
  • How to (with tools): BrightLocal for location simulation.

2. Topic Filters
  • What: Shows hierarchical refinement patterns (2-3 levels deep).
  • Why: Maps directly to content hub organization.
  • How to (manual): Scroll below the search bar to the "Refine this search" section, document filter chips, click each to reveal sub-levels.
  • How to (with tools): Topically.io, Chrome DevTools (inspect filter elements), Python/Selenium for automation.

3. People Also Ask
  • What: User confusion points and anxiety aggregated from real searches.
  • Why: Direct blueprint for FAQ sections and pillar page H2 structure.
  • How to (manual): Locate the PAA box, document visible questions, click each to expand and reveal related questions (2 levels deep), use incognito to avoid personalization.
  • How to (with tools): AlsoAsked.com (visualizes PAA trees), ValueSERP API, SerpAPI for automation.

4. People Also Search For
  • What: Journey paths and related searches showing sequential behavior.
  • Why: Reveals related entities users expect to find connected; informs internal linking.
  • How to (manual): Scroll to the bottom of search results, document the 8-12 related searches shown automatically.
  • How to (with tools): Topically.io, Semrush ("Related Keywords"), Ahrefs ("Also talk about"), SerpAPI.

5. Image Search Tags
  • What: Entity search associations (visual and general); multi-word tags reveal co-occurring entities.
  • Why: Tag frequency = entity salience; informs which entities need visual content.
  • How to (manual): Click the Images tab, observe tag chips below the search bar, document all visible tags (8-15), note multi-word tags.
  • How to (with tools): Topically.io, SerpAPI (image search with tags), Selenium scripts.

6. AI Overview Fan-Outs
  • What: Google's AI-predicted follow-up questions; entity relationships the model learned.
  • Why: Specifically informs Google AI Overview, AI Mode, and Web Guide structure; shows content sequencing for the user journey.
  • How to (manual): N/A.
  • How to (with tools): Qforia by iPullRank, Gemini API with Python/Colab.

7. AI Mode Fan-Outs
  • What: Conversational search path predictions; the multi-turn journey Google anticipates.
  • Why: Reveals complex topic exploration paths; growing importance as Google pushes AI Mode heavily.
  • How to (manual): N/A.
  • How to (with tools): Qforia by iPullRank, Gemini API with conversational context in Python/Colab.

8. Google Web Guide
  • What: Google's editorial content organization; the H2-level structure Google considers comprehensive.
  • Why: Direct blueprint for navigation structure (not URL paths); categories reveal the information types users need.
  • How to (manual): Perform a search, look for the "Web Guide" or "Guide" SERP feature (appears on ~20-30% of queries), expand sections, document H2 headings.
  • How to (with tools): N/A (no tools available).

9. Multi-LLM Comparative Analysis
  • What: How ChatGPT, Gemini, and Perplexity structure answers to identical queries; consensus vs. unique entities.
  • Why: Consensus entities = must-have content; weak or incomplete answers = information gain opportunities; validates citation-worthy content.
  • How to (manual): Enter an identical query in each LLM interface, copy the full responses, document response length/format/entities/citations (for Perplexity), perform in the local language per market.
  • How to (with tools): OpenAI API (ChatGPT), Google Gemini API, Perplexity API, all via Python/Colab for batch processing and entity extraction.

Scaling with international SEO

Here’s an example of a product breakdown between international sites:

  • 148 products × 6 query variants = 888 queries
  • Four markets = 3,552 combinations
  • Nine signals = 31,968 data points

However, you don’t need all 31,968 data points. Patterns emerge across 15 to 20 products, roughly 10% to 15% of the catalog. Entity relationships repeat across product categories, so sampling 15 products across factions can reveal critical localization patterns.

How to transform data into taxonomy

Let’s say there’s a hypothetical website based on the Star Wars movies called “SWLegion.com,” which sells tabletop wargaming miniatures. It has several products across factions, eras, and types.

Below is SWLegion.com’s complete URL structure across four markets.

Market column order: U.S. (root) | UK (/en-gb/) | Italy (/it-it/) | Spain (/es-es/)

  • Store Home: /store/ | /en-gb/store/ | /it-it/negozio/ | /es-es/tienda/

TYPE OF UNIT CATEGORIES

  • Accessories: /store/accessories/ | /en-gb/store/accessories/ | /it-it/negozio/accessori/ | /es-es/tienda/accesorios/
  • Battle Force Packs: /store/battle-force-packs/ | /en-gb/store/battle-force-packs/ | /it-it/negozio/pacchetti-forza-battaglia/ | /es-es/tienda/paquetes-fuerza-batalla/
  • Battlefield Expansions: /store/battlefield-expansions/ | /en-gb/store/battlefield-expansions/ | /it-it/negozio/espansioni-campo-battaglia/ | /es-es/tienda/expansiones-campo-batalla/
  • Commander Expansions: /store/commander-expansions/ | /en-gb/store/commander-expansions/ | /it-it/negozio/espansioni-comandante/ | /es-es/tienda/expansiones-comandante/
  • Core Sets: /store/core-sets/ | /en-gb/store/core-sets/ | /it-it/negozio/set-base/ | /es-es/tienda/sets-basicos/
  • Operative Expansions: /store/operative-expansions/ | /en-gb/store/operative-expansions/ | /it-it/negozio/espansioni-operative/ | /es-es/tienda/expansiones-operativas/
  • Personnel Expansions: /store/personnel-expansions/ | /en-gb/store/personnel-expansions/ | /it-it/negozio/espansioni-personale/ | /es-es/tienda/expansiones-personal/
  • Starter Sets: /store/starter-sets/ | /en-gb/store/starter-sets/ | /it-it/negozio/set-iniziali/ | /es-es/tienda/sets-iniciales/
  • Unit Expansions: /store/unit-expansions/ | /en-gb/store/unit-expansions/ | /it-it/negozio/espansioni-unita/ | /es-es/tienda/expansiones-unidad/
  • Upgrade Expansions: /store/upgrade-expansions/ | /en-gb/store/upgrade-expansions/ | /it-it/negozio/espansioni-potenziamento/ | /es-es/tienda/expansiones-mejora/

FACTION FILTERS

  • Shadow Collective: /store/shadow-collective/ | /en-gb/store/shadow-collective/ | /it-it/negozio/collettivo-ombra/ | /es-es/tienda/colectivo-sombra/
  • Mercenaries: /store/mercenaries/ | /en-gb/store/mercenaries/ | /it-it/negozio/mercenari/ | /es-es/tienda/mercenarios/
  • Galactic Empire: /store/galactic-empire/ | /en-gb/store/galactic-empire/ | /it-it/negozio/impero-galattico/ | /es-es/tienda/imperio-galactico/
  • Galactic Republic: /store/galactic-republic/ | /en-gb/store/galactic-republic/ | /it-it/negozio/repubblica-galattica/ | /es-es/tienda/republica-galactica/
  • Rebel Alliance: /store/rebel-alliance/ | /en-gb/store/rebel-alliance/ | /it-it/negozio/alleanza-ribelle/ | /es-es/tienda/alianza-rebelde/
  • Separatist Alliance: /store/separatist-alliance/ | /en-gb/store/separatist-alliance/ | /it-it/negozio/alleanza-separatista/ | /es-es/tienda/alianza-separatista/

TYPOLOGY FILTERS

  • Heroes: /store/heroes/ | /en-gb/store/heroes/ | /it-it/negozio/eroi/ | /es-es/tienda/heroes/
  • Varies: /store/varies/ | /en-gb/store/varies/ | /it-it/negozio/varie/ | /es-es/tienda/varios/
  • Infantry: /store/infantry/ | /en-gb/store/infantry/ | /it-it/negozio/fanteria/ | /es-es/tienda/infanteria/
  • Tools: /store/tools/ | /en-gb/store/tools/ | /it-it/negozio/strumenti/ | /es-es/tienda/herramientas/
  • Vehicles: /store/vehicles/ | /en-gb/store/vehicles/ | /it-it/negozio/veicoli/ | /es-es/tienda/vehiculos/

ERA FILTERS

  • All Eras: /store/all-eras/ | /en-gb/store/all-eras/ | /it-it/negozio/tutte-ere/ | /es-es/tienda/todas-eras/
  • Age of Rebellion: /store/age-of-rebellion/ | /en-gb/store/age-of-rebellion/ | /it-it/negozio/era-ribellione/ | /es-es/tienda/era-rebelion/
  • The New Republic: /store/the-new-republic/ | /en-gb/store/the-new-republic/ | /it-it/negozio/nuova-repubblica/ | /es-es/tienda/nueva-republica/
  • Fall of Jedi: /store/fall-of-jedi/ | /en-gb/store/fall-of-jedi/ | /it-it/negozio/caduta-jedi/ | /es-es/tienda/caida-jedi/
  • Reign of the Empire: /store/reign-of-the-empire/ | /en-gb/store/reign-of-the-empire/ | /it-it/negozio/regno-impero/ | /es-es/tienda/reino-imperio/

CONTENT SECTIONS

  • Lore Section: /lore/ | /en-gb/lore/ | /it-it/lore/ | /es-es/lore/
  • Rules Section: /star-wars-legion/rules/ | /en-gb/star-wars-legion/rules/ | /it-it/star-wars-legion/regole/ | /es-es/star-wars-legion/reglas/
  • Mini Painting Academy: /mini-painting-academy/ | /en-gb/mini-painting-academy/ | /it-it/accademia-pittura-miniature/ | /es-es/academia-pintura-miniaturas/
  • About Us: /about-us/ | /en-gb/about-us/ | /it-it/chi-siamo/ | /es-es/sobre-nosotros/

Extract entities across signals

Using the product catalog above as an example, treat each product as a query seed.

Start manually with 10-15 products to internalize the patterns. Then automate with APIs/Python and store the results in CSV/JSON. Cross-reference entities to identify co-occurrence patterns.

Combine all nine signals into a unified dataset. Then, extract entities mentioned across signals.
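The cross-referencing step can be sketched in a few lines. The per-signal entity lists here are hypothetical toy data; in practice they come from your manual audits or API collection scripts:

```python
import json
from collections import defaultdict

# Hypothetical unified dataset for one query seed: signal -> entities observed
signals = {
    "paa": ["AT-ST", "Battle of Hoth", "unit cost"],
    "pasf": ["AT-ST", "Snowtrooper"],
    "image_tags": ["AT-ST", "Battle of Hoth"],
    "llm_mentions": ["AT-ST", "Battle of Hoth"],
}

# Cross-reference: which signals does each entity appear in?
entity_signals = defaultdict(set)
for signal, entities in signals.items():
    for entity in entities:
        entity_signals[entity].add(signal)

# Persist as JSON for the weighting and validation steps
dataset = {entity: sorted(sigs) for entity, sigs in entity_signals.items()}
print(json.dumps(dataset, indent=2))
```

Storing the entity-to-signals mapping up front makes both the weighted analysis and the later 3+ signal validation a simple lookup.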

Weighted co-occurrence analysis

Track which entities appear together across signals. This reveals which concepts users naturally connect in their thinking.

Each signal has a different reliability weight based on how directly it reflects user intent:

  • LLM mentions: 3.0 (high confidence — models trained on usage patterns)
  • Query fan-outs: 2.5 (AI predicts relationships from observed behavior)
  • PAA: 2.0 (actual user questions connecting entities)
  • PASF: 2.0 (sequential journey connections)
  • Image tags: 1.5 (visual/entity search context)
  • Topic filters: 1.0 (broad categorization)
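Applying those weights can be sketched as follows: each pair of entities appearing in the same signal adds that signal's weight to the pair's score. The per-signal entity lists are hypothetical toy data, so the totals are far smaller than real market-level figures:

```python
from itertools import combinations
from collections import Counter

# Signal reliability weights from the framework above
WEIGHTS = {
    "llm_mentions": 3.0,
    "query_fanouts": 2.5,
    "paa": 2.0,
    "pasf": 2.0,
    "image_tags": 1.5,
    "topic_filters": 1.0,
}

# Hypothetical entities observed per signal for one market
signals = {
    "paa": ["AT-ST", "Battle of Hoth"],
    "image_tags": ["AT-ST", "Battle of Hoth", "Snowtrooper"],
    "llm_mentions": ["AT-ST", "Battle of Hoth"],
}

# Weighted co-occurrence per entity pair
pair_scores = Counter()
for signal, entities in signals.items():
    weight = WEIGHTS[signal]
    for a, b in combinations(sorted(set(entities)), 2):
        pair_scores[(a, b)] += weight

# Summing all pair scores gives the market's total weighted co-occurrence
total_weight = sum(pair_scores.values())
print(pair_scores)
print("total weighted co-occurrence:", total_weight)
```

Running this per market produces the comparable totals used to gauge entity relationship complexity.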

For example, say there’s a significant variation in entity relationship complexity across markets, measured as total weighted co-occurrence scores (sum of all entity pair connections, weighted by signal reliability):

  • U.S.: 2,639.5 total weight
  • UK: 2,359.0 total weight
  • Spain: 2,266.0 total weight
  • Italy: 1,084.5 total weight

This means the U.S. and UK show 2x more entity relationship complexity than Italy, indicating more complex user journeys requiring deeper content architectures.

Cross-market entity patterns

Not all entities matter equally across markets. Your content strategy depends on recognizing three distinct patterns:

  • Universal entities (all four markets): These appear consistently across the U.S., UK, Spain, and Italy. Users everywhere expect this content.
  • Market-specific: These entities show concentrated interest in just one market based on current signal validation. Cover these entities deeply in their market of reference but maintain lighter coverage in other markets. In future quarterly re-analysis, verify if interest for these entity types has increased in other targeted markets to determine whether to expand coverage depth accordingly.
  • Regional (2-3 markets): These entities appear in most but not all markets, requiring selective deployment. Build content, deploy to 2-3 markets, and evaluate ROI before expanding.

Ontology pattern recognition

Beyond individual entities, track how different types of entities connect. This reveals what content formats work in each market.

Entities cluster into four categories: 

  • Products (actual sellable items)
  • Lore (Star Wars universe entities)
  • Rules (game mechanics)
  • Painting (techniques and processes)

Cross-ontology co-occurrence reveals which content types users expect:

  • When products and lore entities appear together frequently across signals, users think in terms of narrative context for purchases:
    • Product × Lore = Battle scenario content (example: “AT-ST” + “Battle of Hoth” = Hoth battle guide)
  • When products and painting entities co-occur, users research techniques for specific models:
    • Product × Painting = Unit-specific technique guides (example: “Clone Trooper” + “blue markings” = 501st painting tutorial)
  • When painting and lore entities connect, users want thematic aesthetic guidance:
    • Painting × Lore = Themed painting content (example: “terrain” + “Scarif” = tropical planet terrain tutorial)
  • When lore entities cluster together, users compare or navigate between story elements:
    • Lore × Lore = Era/faction comparisons (example: “Clone Wars” + “Galactic Civil War” = timeline guide)

Market-specific pattern differences

These ontology patterns vary dramatically by market, revealing which entities matter, how users think about connections, and how to optimize internal linking architecture. Here's an example weighted co-occurrence analysis:

USA: Product × Lore, weight 60.0 (highest of any market)

  • What this means: American users discover products through lore narratives — build battle scenarios linking story to miniatures.
  • Internal linking strategy: From the “AT-ST Walker” product page, prominently link to /lore/battle-of-hoth/ with anchor text emphasizing narrative context (“Deploy the AT-ST in the iconic Battle of Hoth”). From lore pages, link back to related products within battle scenario descriptions.

UK: Painting × Lore, weight 15.0 (unique to UK and U.S. only)

  • What this means: British users want battle-themed painting guides — content like “Paint a Hoth snow base” works here but is less relevant elsewhere.
  • Internal linking strategy: From /mini-painting-academy/snow-base-tutorial/, link to /lore/battle-of-hoth/ and to relevant product pages like “Snowtrooper Unit Expansion.” Create bidirectional links between painting techniques and the lore/battle contexts where those techniques apply.

Spain: Product × Lore, balanced at 27.0 each

  • What this means: Spanish users balance story interest with product focus — equal emphasis needed.
  • Internal linking strategy: Moderate internal linking between product and lore pages. From “Luke Skywalker Commander” product page, include links to both /lore/luke-skywalker/ and related products. Avoid over-emphasizing either connection type.

Italy: Product × Lore weight 10.5 (weakest)

  • What this means: Italian users don’t connect lore to products — skip elaborate battle scenarios. Focus on product specs and painting basics.
  • Internal linking strategy: Minimize product-to-lore internal links. From product pages, prioritize linking to /mini-painting-academy/ tutorials and related products by faction or unit type. Keep lore pages separate from product discovery paths.

How to validate your framework

Entities should appear in 3+ signals to be validated. One appearance could be an anomaly or noise.
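Once each entity is mapped to the signals it appeared in, the threshold is a one-line filter. A minimal sketch with hypothetical data:

```python
# Hypothetical entity -> signals mapping from the unified dataset
entity_signals = {
    "AT-ST": {"paa", "pasf", "image_tags", "llm_mentions"},
    "Battle of Hoth": {"paa", "image_tags", "llm_mentions"},
    "unit cost": {"paa"},  # single appearance: likely noise, not validated
}

# Keep only entities validated by 3+ signals
validated = [entity for entity, sigs in entity_signals.items() if len(sigs) >= 3]
print(validated)  # ['AT-ST', 'Battle of Hoth']
```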

False-positive check

Signals reveal what users reference, not always what they want. For example, a site appears across multiple markets in various signals, so it’s confirmed as a universal entity in LLM responses across all markets. But its presence in Image Search tags is minimal.

  • Interpretation: Users ask about the site as a reference point but aren’t searching for images of its products extensively.
  • Strategy: Build a comparison article/FAQ, not extensive image galleries or deep informational content.
  • Validation question: Does the signal show what users want or what they’re using for context?

Coverage gap analysis

For example, let’s say signal validation reveals dramatically different entity landscapes across markets — in other words, how many distinct, validated entities appeared in 3+ signals per market:

  • U.S.: 31 entities
  • UK: 28 entities
  • Spain: 29 entities
  • Italy: 16 entities

Italy has half the entity coverage of other markets, revealing a fundamental difference in how Italian users approach this product category — a strong strategic signal. 

If Italian users show concentrated interest in fewer entities, with heavier emphasis on foundational questions (for example, PAAs) rather than deep entity exploration, they’re asking, “what is this?” and “how does this work?”

There’s an information gain opportunity here: while competitors might translate all 31 U.S. entities into Italian, creating shallow content Italian users don’t need, you can dominate the 16 entities that actually matter to this market with comprehensive, beginner-focused content.

Actions to take:

  • Italy needs foundational 101-level content rather than deep entity exploration.
  • FAQ-driven approach matches PAA dominance in Italian signals.
  • Invest in clear product specifications, basic painting tutorials, and simple rule explanations.
  • Build comprehensive coverage of the 16 validated entities before considering the other 15.
  • Monitor quarterly. If Italy’s validated entity count grows, the market is maturing; expand coverage accordingly.

You’re not trying to force-fit U.S. models onto Italian users; you’re serving the actual information needs of this market.

How to structure internal architecture

Maintain a consistent technical structure across all markets with canonical tags, hreflang, CMS architecture, and analytics.

For the complete structure of the SWLegion.com example, see its full architecture.

Ecommerce section:

  • U.S. (root): /store/, /store/{category}/, /store/{filter}/
  • UK: /en-gb/store/, /en-gb/store/{category}/, /en-gb/store/{filter}/
  • Italy: /it-it/negozio/, /it-it/negozio/{categoria}/, /it-it/negozio/{filtro}/
  • Spain: /es-es/tienda/, /es-es/tienda/{categoría}/, /es-es/tienda/{filtro}/

Content sections:

  • U.S. (root): /lore/{entity}/, /star-wars-legion/rules/{topic}/, /mini-painting-academy/{guide}/, /about-us/
  • UK: /en-gb/lore/{entity}/, /en-gb/star-wars-legion/rules/{topic}/, /en-gb/mini-painting-academy/{guide}/, /en-gb/about-us/
  • Italy: /it-it/lore/{entità}/, /it-it/star-wars-legion/regole/{argomento}/, /it-it/accademia-pittura-miniature/{guida}/, /it-it/chi-siamo/
  • Spain: /es-es/lore/{entidad}/, /es-es/star-wars-legion/reglas/{tema}/, /es-es/academia-pintura-miniaturas/{guía}/, /es-es/sobre-nosotros/

Slug localization:

  • Store slugs fully localized (/store/ → /negozio/ → /tienda/).
  • Content section slugs localized where natural (/rules/ → /regole/ → /reglas/, /mini-painting-academy/ → /accademia-pittura-miniature/).
  • Entity slugs within content localized for official translations (Spain: /es-es/lore/conde-dooku/ vs English /count-dooku/).
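
With a structure this consistent, hreflang generation becomes mechanical. Here's a sketch that builds the alternate tags for one lore entity; the domain and the choice to keep Italy's entity slug in English are assumptions for illustration:

```python
# Sketch: generate hreflang tags for one lore entity across the four markets.
# Slugs follow the localization scheme above; SWLegion.com is the article's
# hypothetical example site.

MARKETS = {
    "en-us": ("", "count-dooku"),        # U.S. lives at the root
    "en-gb": ("/en-gb", "count-dooku"),
    "it-it": ("/it-it", "count-dooku"),  # assumption: no official Italian slug
    "es-es": ("/es-es", "conde-dooku"),  # official Spanish translation
}

def hreflang_tags(base: str = "https://www.swlegion.com") -> list:
    tags = [
        f'<link rel="alternate" hreflang="{lang}" href="{base}{prefix}/lore/{slug}/" />'
        for lang, (prefix, slug) in MARKETS.items()
    ]
    # x-default points at the root (U.S.) version
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{base}/lore/count-dooku/" />')
    return tags

for tag in hreflang_tags():
    print(tag)
```

Each localized page would emit this same set of tags, so every version references every other version plus itself.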

What stays consistent

  • Path structure: /lore/, /store/, /rules/ exist everywhere even if entity coverage or category emphasis differs.
  • Product inventory: Physical products remain the same across markets (same 148 SKUs), though merchandising and filtering emphasis may vary.
  • Core navigation sections: All markets have Store, Lore, Rules, Mini Painting Academy, About Us, but internal linking architecture and content depth within each section adapts to market signals.

Entity coverage

Create a master entity list flagged by market validation. This will become your strategic content roadmap, preventing duplication while ensuring comprehensive coverage where it matters.

Entities cluster into two strategic categories:

  • Universal entities validated across all 4 markets: Darth Vader, Luke Skywalker, painting, terrain, miniatures, core factions (Galactic Empire, Rebel Alliance, Separatist) — these form your foundation and users everywhere expect this content.
  • Market-specific entities showing concentrated validation in one or two markets: 501st Legion (U.S./UK only), Shatterpoint comparison (Italy only), Wookiees (Spain only) — these are your localization differentiators.

Phase 1 build: Start with universal entities. Build 12-15 cornerstone pages and translate them to all four markets for 48-60 total pages. These establish baseline coverage across your entire international footprint.

Phase 2 build: Add market-specific entities. Create 25-35 localized pages to be deployed selectively only to validated markets. A 501st Legion deep-dive may go live in the U.S. and UK but not in Italy or Spain.

Total strategic content: 73-95 pages across four markets. This is a better, more refined strategy than covering all 148 product entities × four markets with lore, rules, and painting content for every entity, which would create dozens of wasted pages.

How to implement an AI roadmap

Building out your international SEO can present some challenges. Here are some roadblocks and strategies to do it right. 

Implementation challenges

Let’s look at some hurdles to applying AI to international search.

CMS limitations

Most CMS platforms aren’t designed for entity-level localization. What’s needed is conditional page creation based on market validation.

For example: Add a “Target Markets” custom field to your CMS with checkboxes for different markets — U.S., UK, Italy, Spain, in our example. 
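
A sketch of how that custom field could drive conditional publishing; the page records and market codes are illustrative and not tied to any particular CMS:

```python
# Sketch of conditional publishing from a "Target Markets" custom field.
# A page goes live in a market only if that market's checkbox is checked.

pages = [
    {"slug": "501st-legion", "target_markets": {"us", "uk"}},
    {"slug": "darth-vader", "target_markets": {"us", "uk", "it", "es"}},
]

def publishable(page: dict, market: str) -> bool:
    return market in page["target_markets"]

live_in_italy = [p["slug"] for p in pages if publishable(p, "it")]
print(live_in_italy)  # → ['darth-vader']
```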

Content team scaling

Creating dozens of localized pages requires subject matter expertise, native language writers, and cross-market coordination. 

Start with one market — the second-largest, not the largest, to learn with a lower risk. Build 5-10 entity pages, validate traffic and conversions, and then scale to other markets only when ROI is proven.

Maintenance 

Markets evolve, new products launch, entities gain or lose relevance, and signals need periodic re-analysis. 

Re-run an abbreviated nine-signal analysis on the top 20 entities on a quarterly basis. Look for significant shifts: If entities drop from 3+ signals to one signal, consider deprecating content.
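
That deprecation rule can be sketched as a quarter-over-quarter comparison; the signal counts below are invented:

```python
# Flag entities whose signal count dropped from 3+ last quarter to 1 or fewer
# this quarter as candidates for content deprecation.

last_quarter = {"darth-vader": 5, "501st-legion": 3, "wookiees": 4}
this_quarter = {"darth-vader": 5, "501st-legion": 1, "wookiees": 3}

deprecation_candidates = [
    entity
    for entity, prev in last_quarter.items()
    if prev >= 3 and this_quarter.get(entity, 0) <= 1
]
print(deprecation_candidates)  # → ['501st-legion']
```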

Continuous intelligence systems

Here are some tools to help monitor AI systems:

  • Wikipedia edit monitoring: Create watchlists for 10-15 key entities per market, and set email alerts for significant edits. Major additions or edit wars signal rising interest — if that happens, review entity page content and update accordingly.
  • Reddit velocity tracking: Track comment velocity on entity mentions. Entities mentioned in 5+ threads in one week (an unusual spike) should be investigated. 
  • TikTok and Instagram trends analysis: Monitor trending hashtags and viral content patterns related to your product categories. Rising hashtag usage or viral content patterns can indicate emerging entity interest before they appear in traditional search signals.
  • Google Trends “rising” analysis: Monitor “rising” queries monthly (not absolute volume). Queries with +100% week-over-week growth signal emerging interest. 
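
The Google Trends check in that last bullet amounts to a week-over-week growth filter. A sketch with invented query volumes:

```python
# Flag queries with at least +100% week-over-week growth as emerging interest.
# Query names and volumes are invented for the example.

weekly_volume = {
    "legion terrain": (120, 260),     # (last week, this week)
    "clone wars minis": (400, 410),
}

rising = [
    query
    for query, (prev, curr) in weekly_volume.items()
    if prev > 0 and (curr - prev) / prev >= 1.0  # +100% or more
]
print(rising)  # → ['legion terrain']
```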

Building a roadmap

Now that you know what roadblocks lie ahead, here’s how to implement the plan.

Month 1: Foundation

  • Choose one market for learning and prototyping. Select 10-15 products to sample and conduct a systematic nine-signal analysis.
  • Create an entity list with co-occurrence weights and 3-5 validated market-specific entities.

Months 2-3: Content creation

  • Build universal pillar pages and translate them to all markets; build market-specific entity hubs, starting with one. Implement internal linking based on co-occurrence weights.

Months 4-6: Validation and expansion

  • Monitor entity coverage rates, LLM topic visibility, and market-specific traffic growth.

Months 7-12: Full multi-market rollout

  • Expand to all markets. Run continuous intelligence systems (Wikipedia watchlists, Reddit monitoring, TikTok/Instagram trends) and schedule quarterly signal re-analysis.

How to measure success

After implementing changes and incorporating AI into your international search strategy, here’s how to determine what’s working and where to improve.

Entity coverage rate

This metric tells you if you’re covering entities that actually matter to users in each specific market, not just translating pages indiscriminately.

  • Formula: (Entity pages built / Total validated entities from signal analysis) × 100
  • Example: Your signal analysis validated 28 entities in the UK (entities appearing in 3+ signals). You built dedicated pages for 22 of these entities. Your entity coverage rate is: 22/28, or 79%.
  • Target: 70%+ coverage for each priority market.
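
The formula as a small function, using the UK figures from the example:

```python
# Entity coverage rate: (entity pages built / total validated entities) x 100

def entity_coverage_rate(pages_built: int, validated_entities: int) -> float:
    return pages_built / validated_entities * 100

uk_rate = entity_coverage_rate(22, 28)
print(f"{uk_rate:.0f}%")  # → 79%
```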

Consider the strategic difference. For example, let’s say your UK site covers 79%, or 22 of 28 validated entities, focusing resources on entities users actually search for, ask questions about, and engage with across multiple signals.

Meanwhile, a competitor translates all 148 product entities, achieving “100% coverage” on paper, but wastes resources covering entities UK users show minimal interest in.

Your 21% gap (6 uncovered entities) isn’t a failure, but strategic prioritization.

These lower-priority entities can be added if quarterly re-analysis shows their signal validation strengthening — moving from 2 signals to 3+ or appearing in additional signal types.

Tools for tracking entity coverage:

  • Screaming Frog: Crawl your site and count entity pages by market subfolder.
  • Google Sheets: Cross-reference validated entity lists against live URL inventory.

LLM topic visibility

Track whether your site appears in LLM responses for key topics, not individual citation counts. The goal is to measure topical authority, not vanity metrics.

For ChatGPT/Gemini/Perplexity/Claude: Use WAIKay.io to systematically track your visibility across multiple LLMs. The platform allows you to:

  • Set up monitoring for specific queries across ChatGPT, Gemini, Perplexity, and other AI platforms
  • Track whether your domain appears in responses (mentions, summaries, citations)
  • Monitor visibility changes over time with historical tracking
  • Generate reports showing presence/absence per topic, per LLM
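
Whatever tool you use, the underlying check is presence or absence per topic and LLM. A tool-agnostic sketch, with hard-coded responses standing in for whatever your monitoring platform returns:

```python
# Record whether your domain appears in each (topic, LLM) response.
# Domain and response texts are stand-ins for real monitoring output.

DOMAIN = "swlegion.com"  # the article's hypothetical example site

responses = {
    ("painting basics", "chatgpt"): "See the guides at swlegion.com/mini-painting-academy ...",
    ("painting basics", "gemini"): "Several hobby sites cover this topic ...",
}

visibility = {key: DOMAIN in text.lower() for key, text in responses.items()}
print(visibility)
# → {('painting basics', 'chatgpt'): True, ('painting basics', 'gemini'): False}
```

Logging these booleans over time gives you the presence-per-topic, per-LLM trend the benchmarks below are measured against.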

For AI Overviews/AI Mode: Use Semrush One to monitor Google’s AI-powered SERP features. Alternative tools, such as Ahrefs, Advanced Web Rankings, and SISTRIX (AI Overview presence reporting), offer similar capabilities.

Target benchmarks:

  • Universal topics: Visibility in 2+ LLMs across all markets.
  • Market-specific topics: Visibility in 2+ LLMs for a specific market’s language queries.

This validates if your content quality and entity coverage are sufficient for LLMs to consider you an authoritative source worth including in their responses. Lack of visibility signals content gaps or insufficient topical depth.

Incorporate AI and LLMs into your international SEO today

Most international sites treat taxonomy as infrastructure: build once, maintain minimally, and refresh every 2-3 years during a website redesign. 

Our SWLegion.com example started with an identical architecture across four markets. By implementing this strategy, we showed how to localize its architecture and navigation and optimize for each market.

This strategy builds something fundamentally different — architecture that breathes with market behavior, responding to signals rather than assumptions. You’re cultivating taxonomy rather than just maintaining a website.

Your new taxonomy will reflect current user behavior and also anticipate and adapt to behavioral shifts before competitors notice that the market has changed.

AI SEO punishes lazy marketing strategies by Brick Marketing

Over the past few decades, digital marketing has settled into a stable system. While it spans SEO, content marketing, social media, and digital advertising, many programs have relied on a predictable core that didn’t always use every available channel.

This gave digital marketers a sense of predictability and comfort. For years, teams stuck with what worked and refined execution through the same familiar framework. AI search has disrupted that comfort and exposed our inconsistencies. To succeed with AI SEO, we need a much more comprehensive approach.

AI SEO rewards strategic marketing 

Over the past 15 to 20 years, digital marketing settled into a predictable rhythm, with each channel playing a defined role. 

Content marketing, social media, SEO, paid advertising, and email followed similar strategies with little variation. Little happened outside this structure, and many of us grew “lazy.” 

The structure worked, so we let other strategies fall away.

The problem? It created a false sense of security. We should have been doing more all along, and those broader strategies are now driving real visibility in AI search.

AI has disrupted digital marketing in ways that weren’t obvious at first. It’s changed user search behavior and how brands are evaluated. 

Traditional search relied on algorithms and a primary source. AI pulls from multiple inputs across many sources.

Those sources should already exist. They’re your marketing — the way you present your brand across platforms like social media, third-party directories, press releases, brand mentions, and more. In short, anything outside your website.

In this system, your website and the strategic marketing that supports it are just one part of the whole. It’s now one of many sources AI uses to understand your brand and offer. AI search reflects the strength of marketing across all these sources.

Visibility is not limited to your website

One of the biggest disruptions AI has caused is that the website is no longer central to your marketing strategy or visibility. It’s now part of a much larger ecosystem. You still need a strong website, as always, but you must account for how much broader the landscape has become with AI search.

While driving traffic to your website still matters, it’s no longer the only focus. The goal used to be maximizing website visibility — achieve that, and results would follow. That still works to a degree, but treating it as the only path to visibility is outdated.

AI pulls information from a wide range of sources — articles, brand mentions across platforms, third-party profiles, published content — and all of it shapes how it understands who you are and what you do. 

Your website is just one part of this broader scope. If you focus only on your website, you limit AI’s ability to find you.

This is where most marketing programs fall short, especially those built before AI. To modernize, your brand must be visible across a much wider scope. 

AI SEO requires an intentional presence

AI favors brands that show up online with intent. They’ve built a cohesive ecosystem across the wider internet. 

A segmented marketing approach may have worked in the past, but it no longer has the same impact. We got away with it because when each channel performed well, it still felt effective and met our goals.

AI doesn’t allow this anymore. It favors brands with many connected signals, because it links them across the internet. It evaluates how your brand appears across these sources and looks for consistent messaging and expertise. 

When these signals align, your AI visibility strengthens. When they’re scattered or your broader presence is weak, your AI visibility is weak.

This is why it’s important to develop a marketing strategy that accounts for this. A brand with a coordinated presence across the internet — across its website and other marketing channels — is what’s required today. 

Lazy marketing strategies are exposed

This is the real issue with “lazy marketing.” We define it as sticking to the old approach — treating each channel separately and relying on the same tactics that have always worked. That approach may have delivered results before, but those days are gone.

At the time, this approach still delivered results. A strong SEO foundation consistently drove leads, and paid advertising offered similar predictability. These tactics worked so well that there was little need to go beyond them.

We need to go beyond it to keep up. Your brand needs to show up across multiple sources — that’s how AI finds you. If your competitors are already building their presence, you need to do the same or get left behind. They’ll take more space in AI-generated answers than you.

This means that if you have gaps in your marketing, you can’t hide them anymore. AI exposes these inconsistencies and forces you into the broader digital space.

Transition into the era of AI search 

Now is the time to move beyond the old model and adopt a new understanding of what works in digital marketing. The old approach no longer works on its own — it must be part of a broader system.

These are the strategies we should have been using all along: press releases, directory listings, and marketing beyond your own website.

AI search rewards an all-encompassing marketing strategy because that’s what works. Core channels like social media, SEO, content marketing, and paid advertising still matter, but they’re not enough on their own. 

AI hasn’t changed the rules. It has enforced them.

This is what has always worked in marketing. The difference now is that you can’t get away with doing less.

Microsoft: AI answers need a smarter search index

Microsoft Bing traditional search vs. grounding systems

The search index is evolving from ranking pages to supporting AI-generated answers. In a technical blog post published today on the evolving technical characteristics of the index, Microsoft Bing explained why AI search needs a different indexing system than traditional web search.

Traditional search vs. grounding systems. Microsoft said traditional search can rely on users to self-correct, while AI systems need stronger evidence because they generate committed answers.

  • Traditional search is built around documents. Users get ranked links, scan the results, and decide what to trust.
  • Grounding systems are built around supportable facts with clear sourcing. The AI uses that information to generate a combined answer, where mistakes can compound across sources and reasoning steps.

They shared this table:

Traditional search vs. grounding for AI responses

What’s different. Traditional ranking is optimized for relevance. Grounding must also assess whether information is accurate, up to date, clearly sourced, and sufficient to support an answer. That means AI indexes need to account for whether:

  • A page’s meaning survives chunking and transformation.
  • The source is clearly identified.
  • The information is fresh enough to use.
  • Important facts are actually retrievable and groundable.
  • Disagreements between sources are detected before generating an answer.
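
As a toy illustration of that last point, conflict detection reduces to comparing the value of a fact across sources before committing to an answer. The sources and values here are invented:

```python
# Toy sketch of the contradiction check a grounding system runs before
# answering: extract the same fact from each source and flag disagreement.

evidence = {
    "source-a.example": {"release_year": "2018"},
    "source-b.example": {"release_year": "2019"},
}

def conflicting(fact: str) -> bool:
    values = {facts.get(fact) for facts in evidence.values()}
    values.discard(None)
    return len(values) > 1  # more than one distinct value means sources disagree

print(conflicting("release_year"))  # → True: resolve before answering
```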

Stale content. Stale content creates a different risk in AI answers, Microsoft said. In traditional search, it may hurt ranking quality. In grounding systems, it can directly generate a wrong answer.

Contradictions. A search engine can rank one source above another and let users decide. Grounding systems must recognize conflicting evidence before turning it into a single answer, according to Microsoft.

Retrieval is more complex. Search is usually a single interaction: query in, ranked results out. Microsoft said grounded AI systems may retrieve information repeatedly, refine based on earlier results, combine evidence, and reassess confidence before answering.

How indexing quality is measured. Search quality has traditionally focused on ranking performance and user behavior. Grounding systems also need to measure factual fidelity, source quality, freshness, evidence strength, and conflict detection. The industry is still learning how to rigorously measure grounding quality, Microsoft said.

Grounding doesn’t replace search. Grounding builds on existing search infrastructure while adding systems focused on evidence quality, attribution, and deciding when an AI system should avoid answering, Microsoft said.

Why we care. For decades, search indexes helped determine which pages users should visit. Today, AI grounding determines which information supports an AI-generated answer. Microsoft described grounding as a new layer on top of traditional search, built for AI systems that need higher confidence in the information they use. That shift could push brands and publishers to focus more on creating information AI systems can confidently use.

The blog post. Evolving role of the index: From ranking pages to supporting answers

Google Analytics Data API adds cross-channel conversion reporting (alpha)

Google is expanding its Analytics Data API to include cross-channel conversion reporting — giving developers programmatic access to paid and organic performance data.

What’s happening. The new feature, currently in alpha, allows Google Analytics and Google Ads users to pull conversion data across channels via the API — mirroring what’s available in the Conversion performance report in the Analytics interface.

This means developers can now access the same insights without relying on manual reporting.

Why we care. As measurement becomes more complex, advertisers need unified views of performance across paid and organic channels. This update enables teams to automate reporting, integrate data into their own systems and build more advanced analysis workflows.

It’s particularly valuable for businesses managing multiple platforms and looking to centralise performance data.

The caveat. This feature may not be available to every Google Analytics property yet. Google says it is actively working to expand access, and advertisers should check with their support teams to confirm eligibility.

What to watch:

  • When the feature moves beyond alpha and becomes widely available
  • How advertisers use API access to build custom attribution models
  • Whether more reporting capabilities are added to the Data API

Bottom line. By bringing cross-channel conversion data into the API, Google is giving advertisers and developers more control over how they access, analyse and act on performance data.

The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • SEO Manager Job Description Location Cardiff, Wales Salary £50,000 per year We are looking for an experienced and driven SEO Manager to join our growing digital marketing team in Cardiff. This is a fantastic opportunity for someone who genuinely understands search engine optimisation beyond the usual buzzwords and recycled LinkedIn nonsense. We want someone who […]
  • Full-time Description Are you a talented digital fundraiser who is passionate about progressive causes? Do you thrive working in a fast-paced environment? Are you excited about collaborating with a team to build something greater than what can be built alone? Avalon is a full-service direct marketing fundraising consulting agency, and we are looking for a […]
  • Do you love nerding out on SEO and working with clients? If your friends & family are sick of hearing about your latest search rankings, then we’re your kind of people and you will love this job. $80k in year 1 + potential bonuses You will get an absolute masterclass on SEO and working with […]
  • At NerdWallet, we’re on a mission to bring clarity to all of life’s financial decisions and every great mission needs a team of exceptional Nerds. We’ve built an inclusive, flexible, and candid culture where you’re empowered to grow, take smart risks, and be unapologetically yourself (cape optional). Whether remote or in-office, we support how you […]
  • Job Description Attention: Kapitus is aware that individuals posing as recruiters may be communicating with job seekers about supposed positions with Kapitus. Kapitus has received reports that the content and method of communication can vary, but messages may contain requests for payment (e.g., fees for equipment or training) and/or for sensitive financial information. Kapitus will […]
  • Remote (Canada-wide) · Full-time · $75,000–$90,000 CAD About Webserv Webserv is a digital marketing agency that helps mission-driven businesses — particularly in behavioral health — grow through SEO, paid media, and conversion-focused web strategy. We’re a tight-knit team that values curiosity, ownership, and the kind of work that actually moves the needle for our clients. […]
  • The Basics: Growth Plays is hiring a Senior SEO/AEO Manager based in the US, Canada or LATAM, to support and manage ongoing customer engagements and relationships. You’ll act as the main point of contact for your clients, and focus on building relationships and trust while driving strategy-aligned growth for the long term. This role is […]
  • Company: Local Leads DigitalLocation: RemoteJob Type: Contract, 1099Compensation: 100% Commission, Uncapped Job SummaryLocal Leads Digital is hiring an Independent Sales Representative to help grow adoption of the L.O.C.A.L. Tool, our local SEO fulfillment solution. This is a fully remote, 1099 independent contractor opportunity for someone who is confident in outbound sales and comfortable building their […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • Your day-to-day: Analyze domain performance and organic trends. Conduct keyword research, track key SEO metrics and identify organic growth opportunities. Prepare SEO content briefs and ensure content aligns with best practices and search intent. Conduct market and competitor analysis to identify optimization opportunities. Monitor LLM and AI search landscape trends and identify opportunities to improve […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • The Acquisition Marketing Specialist is responsible for leading the execution of strategies designed to attract new sponsors and donors to Unbound. This individual will manage a variety of marketing and communication initiatives with the goals of building brand awareness among new audiences, developing and nurturing leads to the point of conversion, and ultimately acquiring new […]
  • Since 1913, Marathon Electric has been dedicated to providing customers with quality motors and generators for targeted applications. Marathon Electric became part of WEG Group in 2024. Founded in 1961, WEG is a global electric-electronic equipment company, operating mainly in the capital goods sector with solutions in electric machines, automation and paints for several sectors, […]
  • Primary Geography: San Diego, CA ViiV Healthcare (VHC) is a global specialty HIV company, the only one that is 100% focused on researching and delivering new medicines for people living with, and at risk of, HIV.  ViiV Healthcare is highly mission-driven in our unrelenting commitment to being a trusted partner for all people living with […]
  • PURPOSE   The Assembler I is an entry-level position that performs repetitive assembly operations to mass-produce products such as windows, doors, trusses, panels, or stairs by performing the following duties.   ESSENTIAL DUTIES AND RESPONSIBILITIES Places parts in specified relationship to each other. Bolts, clips, screws, cements, or otherwise fastens parts together. Cut products to […]
  • Current Employees: If you are a current Monument Health employee, please apply via the internal career site by logging into your Workday Account and clicking the “Career” icon on your homepage. Primary Location Rapid City, SD USA Department CS NPPC, LLC Scheduled Weekly Hours 40 Starting Pay Rate Range $59,800.00 – $74,755.20 (Determined by the knowledge, […]

Other roles you may be interested in

Senior Manager, SEO/AEO, ActiveCampaign (Remote)

  • Salary: $140,500 – $193,200
  • Identify opportunities for technical improvements across the ActiveCampaign website, prioritize them based on their potential business impact, and collaborate with cross-functional stakeholders to implement them.
  • Pioneer LLM optimization and Answer Engine Optimization (AEO) by developing content strategies that ensure ActiveCampaign is the authoritative source material used by LLMs.

SEO Marketing Manager, Care.com (Hybrid, Dallas, TX)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Manager, SEO, KINESSO (Hybrid, New York, NY)

  • Salary: $90,000 – $95,000
  • Manage senior analysts and help analysts grow into the next level of their career.
  • Translate clients’ business goals and marketing objectives into successful search engine optimization strategies.

Senior Marketing Manager, Vanguard Renewables (Remote)

  • Salary: $120,000 – $182,000
  • Work closely with CMO and RNG team to develop and execute a strategic marketing roadmap aligned with business priorities.
  • Serve as the primary marketing liaison for RNG team, acting as the connective tissue between the Marketing and Commercial groups.

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.
