
Search Engine Land, 9 May 2026

Veronika Höller talks about a perfectly set-up but poorly performing campaign

9 May 2026 at 01:40

In this episode of PPC Live The Podcast, I sit with Veronika Höller to unpack a real-world PPC mistake — from campaigns that looked perfect on the surface to the deeper issues that were quietly killing performance.

From “perfect” campaigns to zero revenue

Veronika Höller didn’t walk into a broken account. Quite the opposite. Everything looked right — clean structure, strong creatives, solid budgets, conversions coming in. On paper, it was a high-performing PPC setup.

But there was one problem: it wasn’t driving revenue.

That disconnect forced a deeper look beyond surface-level metrics. Because while impressions, clicks and conversions were ticking up, the campaigns weren’t actually delivering business impact — and that’s where things started to unravel.

The real issue: nothing stood out

The turning point didn’t come from inside the account. It came from looking outside it.

During competitor research, Veronika realised the brand sounded just like everyone else. The messaging blended into the market. There was no clear reason for a user to choose them over competitors.

From a user perspective, the ads weren’t wrong — they were just forgettable. And in a crowded category, “good” isn’t enough.

That insight reframed the entire problem: it wasn’t a performance issue. It was a positioning issue.

Starting again — from scratch

Instead of tweaking the existing campaigns, Veronika made a bold call: rebuild everything.

That meant new messaging, new creatives, and a new strategic foundation. One key shift was defining not just ideal customers, but also who they didn’t want to target — using anti-ICPs to sharpen the messaging.

They also introduced stronger localisation, tailored landing pages by market, and platform-specific strategies instead of copying campaigns across channels.

It wasn’t optimisation. It was a reset. And it worked.

The mistake that nearly broke everything

But earlier in her career, Veronika made a far more painful mistake — one that many PPC marketers will recognise.

She applied a recommended target CPA… without increasing the budget.

The result? Campaigns stopped delivering. Performance tanked. And worst of all, it went unnoticed over a weekend.

By Monday, the damage was clear — and the client was not happy.

Owning the mistake — and fixing it fast

There was no hiding from it.

Veronika immediately admitted the mistake, explained what happened, and took responsibility. That honesty changed the outcome. While the client was initially frustrated, the situation de-escalated quickly because there was no deflection — just a clear plan to fix it.

The lesson stuck: don’t blindly apply recommendations, and always understand the full context before making changes.

Why failure is part of getting good

For Veronika, mistakes aren’t something to avoid — they’re essential.

“You can only be good if you fail,” she said.

That mindset now shapes how she works and how she mentors others. Mistakes aren’t a sign of incompetence — they’re a sign that work is being done, tested, and improved.

And more importantly, sharing those mistakes helps others avoid repeating them.

The biggest issue she still sees today

Despite all the changes in PPC, one problem keeps showing up: tracking.

Broken implementations, over-reliance on micro conversions, and poor setup in tools like Google Tag Manager are still common.

In a world of smart bidding and automation, bad data doesn’t just limit performance — it actively misleads it.

Without clean tracking, even the best campaigns will fail.

AI won’t fix average marketing

Veronika is clear on one thing: AI is not a shortcut to better performance.

If you feed it average data, you’ll get average results.

Too many marketers rely on AI tools to analyse accounts without first understanding what needs to be improved. But AI can’t create differentiation — it can only optimise what’s already there.

Standing out still requires human thinking, strategy, and creativity.

The mindset that matters now

The biggest takeaway isn’t tactical — it’s mental.

Don’t aim for perfection. Don’t blindly follow recommendations. And don’t assume tools will do the thinking for you.

Instead, trust your instincts, test your ideas, and accept that mistakes are part of the process.

Because in performance marketing, the real risk isn’t failing — it’s playing it safe and blending in.


Google Ads surfaces Tag Manager controls inside its interface

8 May 2026 at 19:39

Google appears to be pulling parts of the Google Tag Manager interface directly into Google Ads — a move that could simplify how advertisers manage tracking and tags.

What’s happening. Advertisers are spotting a new “Manage” option inside the Data Manager section of Google Ads that opens Tag Manager controls without leaving the platform.

The update was first shared by Marthijn Hoiting and Adriaan Dekker, who posted screenshots showing Tag Manager elements embedded within the Google Ads environment.

Why we care. Tag setup and troubleshooting have long been a friction point for advertisers, often requiring multiple tools and technical handoffs.

Bringing Tag Manager functionality into Google Ads could reduce that complexity — especially for smaller teams or advertisers without dedicated dev support.

Zoom in. Inside the Data Manager interface, users can see connected data sources (including Tag Manager) and trigger management actions directly from within Google Ads.

That suggests Google is moving toward a more unified measurement workflow, where tagging, data connections and campaign setup live closer together.

Between the lines. This aligns with Google’s broader push to simplify measurement and improve data accuracy — particularly as privacy changes and signal loss make clean tracking more critical.

It also mirrors recent efforts to make tagging more accessible without heavy technical setup.

What to watch:

  • Whether full Tag Manager functionality gets embedded or remains partial
  • How this impacts workflows between marketers and developers
  • If this becomes the default way to manage tags for advertisers

Bottom line. Google is quietly reducing the gap between campaign setup and measurement — bringing tagging closer to where ads are actually managed.

First seen. This update was shared by Adriaan Dekker on LinkedIn, who credited Data and Analytics specialist Marthijn Hoiting for spotting it.

Google to no longer support FAQ rich results

8 May 2026 at 19:03

Google no longer supports FAQ rich results as of May 7, 2026, which means FAQ rich results will no longer appear in Google Search going forward.

Plus, Google Search Console will stop reporting on FAQ structured data.

What Google said. Google posted a note at the top of the FAQ structured data developer documentation saying:

FAQ rich results are no longer appearing in Google Search. We will be dropping the FAQ search appearance, rich result report, and support in the Rich results test in June 2026. To allow time for adjusting your API calls, support for the FAQ rich result in the Search Console API will be removed in August 2026.

Remove code. You can remove the FAQ structured data from your code if you want, but you can also leave it in place. Other search engines may continue to process it and use it for their own purposes.

Why we care. Rich results have helped web pages improve click-through rates and earn more traffic. FAQ rich results may have helped as well, but they are no longer supported.

Keep an eye on your pages with FAQ structured data to see if your traffic from Google is impacted or not.

How to run prompt-level SEO experiments for AI search

8 May 2026 at 18:00

As LLMs continue to grow, optimizing brand visibility in AI-generated responses is becoming increasingly important. Consumers are turning to these models for answers, recommendations, recipes, vacations, and nearly everything else imaginable.

But what happens if your brand isn’t included in those responses? Can you influence the outcome? And what are some proven ways to improve your brand’s inclusion and visibility?

That’s where structured experimentation comes in. Prompt-level SEO requires more than assumptions or one-off wins. It requires repeatable testing frameworks that help isolate what actually influences LLM responses.

Build prompt-level SEO tests with a hypothesis framework

There are countless recommendations on how to improve your LLM presence. Experimentation is key to discovering what works for your industry and brand.

Hypothesis-driven testing is the way we structure these tests for our brands. It breaks things down in a structured way that can be replicated across tests and situations.

This framework creates a common approach to testing and helps you quickly understand the test and its outputs. The structure consists of three main pieces: if, then, because.

  • If: This part provides the hypothesis: what is the test action?
    • “If we include more detailed product specifications in our content.”
  • Then: What will happen once the “if” section is completed? The outcome.
    • “Then we’ll see our brand get included in more product-specific prompts.”
  • Because: This is why you believe this will occur. What is the theory behind this test?
    • “Because LLMs value detailed and specific information in their prompt responses.”

This framework requires some basic fundamentals that ensure you’re thinking through the test. It also allows you to go back later and validate whether you have tested these specific elements in the past and what the premises, theories, and outcomes were. 

This helps because, as things change, a past test may be worth revisiting: the test elements may still be valid even when the world shifts and the reasoning in the “because” section changes.
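The if/then/because structure can be captured in a small record so every test is archived the same way. Here is a minimal sketch; the `Hypothesis` class and its field names are my own illustration, not something the framework prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Hypothesis:
    """One prompt-level SEO test, archived in if/then/because form."""
    if_action: str       # the test action
    then_outcome: str    # the expected, measurable outcome
    because_theory: str  # why we believe the outcome will occur
    created: date = field(default_factory=date.today)

    def summary(self) -> str:
        return (f"IF {self.if_action} THEN {self.then_outcome} "
                f"BECAUSE {self.because_theory}")

test = Hypothesis(
    if_action="we include more detailed product specifications in our content",
    then_outcome="our brand is included in more product-specific prompts",
    because_theory="LLMs value detailed, specific information in responses",
)
print(test.summary())
```

Keeping every test in one shape like this makes it trivial to search past experiments and check whether a premise has already been tested.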

Key considerations before running prompt-level SEO tests

Before we get to the recommendations for testing best practices, here are some considerations when running these tests:

  • Model updates: These models are updated constantly. As some models move from 4.1 to 4.2, it’s time to revisit those results. How did the model change the inputs and outputs?
  • Prompt drift: Have you ever run the exact same prompt twice in a day, or on consecutive days? Often, the results change. It’s therefore important to run each prompt more than once, on consecutive days, to establish a true baseline. This is no different from personalized search results: brands get comfortable with the variance, but averages surface and become the benchmark. Prompt testing works much the same way.
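Because of prompt drift, a single run tells you little; averaging repeated runs is what gives you a baseline. A minimal sketch, where `run_prompt(prompt) -> bool` is a placeholder for whatever client you use to query the model and check whether your brand appears (an assumption for illustration, not a real API):

```python
import itertools
from statistics import mean

def baseline_inclusion(run_prompt, prompt: str,
                       runs_per_day: int = 3, days: int = 7) -> float:
    """Average brand-inclusion rate for one prompt across repeated runs."""
    results = [run_prompt(prompt) for _ in range(days * runs_per_day)]
    return mean(1.0 if included else 0.0 for included in results)

# Simulate drift: the brand appears in alternating runs.
fake_runs = itertools.cycle([True, False])
rate = baseline_inclusion(lambda p: next(fake_runs), "best running shoes")
print(round(rate, 2))  # 11 of 21 simulated runs include the brand -> 0.52
```

The alternating simulation stands in for real variance; in practice you would log each run and treat the multi-day average, not any single response, as the benchmark.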

Now that you have the framework of the test, let’s think about the core elements of tests that can be used in prompt-specific testing.

How to isolate variables: A methodological approach

Designing a reliable prompt-level SEO experiment requires isolating a single causal variable. This is crucial for confidently attributing changes in LLM response inclusion or position to a specific action.

1. Content changes

When testing content modifications, the variable must be surgical. A common pitfall is changing too much at once (e.g., updating a product description and the page’s schema).

  • Best practice — The single-paragraph swap: Focus on modifying a single, targeted piece of text on the page, such as a product description, FAQ answer, or a specific feature bullet point.
  • Methodology: For true isolation, implement A/B testing with a control page containing the original content and a test page containing the modified content. The prompt should be designed to target the specific information you changed. Measure the brand’s inclusion rate and position-in-response over a defined period (e.g., seven days; keep in mind these models move at a variety of speeds — this work, much like SEO, isn’t a microwave, but more like an oven).

2. Structured data

Structured data (schema) provides explicit signals to both search engines and LLM ingestion layers. Testing this requires treating the schema update as the only change to the page.

  • Variable isolation: Test adding new properties (e.g., brand, model, and offer details) without altering the visible HTML text. This isolates the impact of the machine-readable layer.
  • Specific experiment — FAQ schema: A highly effective experiment is adding FAQ schema to pages that already have Q&A sections in their HTML, isolating the effect of the explicit schema markup on LLM ingestion. Our work with brands has demonstrated that adding FAQ schema to pages with Q&A sections makes those sections easier for LLMs to ingest.
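Adding FAQ schema without touching the visible HTML can be done by generating a JSON-LD block from the Q&A pairs already on the page, using the schema.org FAQPage vocabulary. A hedged sketch (the helper name and input shape are my own; only the machine-readable layer changes, which is the variable the experiment isolates):

```python
import json

def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
    """Build a schema.org FAQPage JSON-LD block from existing Q&A text.

    The visible HTML stays untouched; only this machine-readable
    layer is added to the page's <head> or <body>.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

print(faq_jsonld([("Does it ship worldwide?", "Yes, to 40+ countries.")]))
```

Note that while Google has dropped FAQ rich results (see the item above), the experiment here targets LLM ingestion, not SERP appearance.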

3. Before-and-after prompt testing

This process involves establishing a stringent baseline, making the change, and then repeating the prompt query. This is an essential control method in lieu of true A/B testing on the LLM itself.

Protocol

  • Phase 1 (baseline): Execute a set of 5-10 target prompts daily for seven consecutive days to establish a true average of inclusion and position-in-response, accounting for prompt drift.
  • Action: Deploy the isolated change (e.g., content or schema update).
  • Phase 2 (measurement): Re-run the exact same set of prompts daily for the next seven days.
  • Analysis: Compare the average inclusion rate and position of Phase 1 versus Phase 2. This method is central to initial presence score analyses, such as using three buckets of 25 keywords and prompts for a total of 75 queries.


Encouraging reproducible experiments

With the speed of model evolution and the lack of detailed model insights, it’s difficult to ensure reproducibility of results. However, the goal is to move beyond simple “it worked once” findings to build a durable methodology.

Mandatory frameworks

Ensure every test is documented using the “if, then, because” hypothesis structure. This archives the premise, action, and expected outcome, allowing future teams to quickly validate whether a test remains relevant as LLMs evolve.

Technical integrity

  • Version control: Document the specific model and version used for testing (e.g., “Gemini 4.1.2”). This allows for easy comparison when a model update occurs.
  • Prompt libraries: Maintain an organized, time-stamped repository of the exact prompt queries used for baseline and measurement phases. This repository should track inclusion rate, position-in-response, and sentiment/framing for each query.

Infrastructure consistency

Define the testing environment (e.g., clear browser cache, no login state) and, where possible, use APIs or synthetic testing platforms to remove the impact of personalization and location bias, which is analogous to controlling for personalized search results in traditional SEO.

Moving beyond one-off wins in AI search

The key to prompt-level SEO is rigorous methodology. By adopting a hypothesis-driven approach, surgically isolating variables (content, entities, schema), and establishing strict before-and-after testing protocols, you can confidently move past speculation. 

The path to influencing LLM responses is paved with controlled, documented, and reproducible experiments.

SEO’s new goal in 2026: Recognition, not rankings

8 May 2026 at 17:00

For the best part of two decades, we had a clear and accepted mandate: Get your brand to the top of the search results page. The problem was understood, the success metrics were agreed upon, and a supporting ensemble of tools, talent, and tactics was built around solving it.

Rankings were the scoreboard. Position 1 meant visibility. Traffic followed, and a brand’s value seemed to follow it.

It’s this core premise that is now under serious renegotiation, with the search landscape changing more in the past 18 months than in the previous 10 years combined:

  • AI Overviews are absorbing queries that previously generated clicks. 
  • AI/LLM platforms are becoming the first stop for research and decision-making. 
  • Zero-click is no longer a niche concern. It’s increasingly becoming the default.

What’s required now isn’t a new set of tactics. It’s a fundamental change in mindset. This is the SEO problem of 2026. Let me show you why recognition is your new goal and how to earn it.

The world changed faster than we did

SEO has always been a discipline that chases the algorithm.

We reverse-engineered signals, built strategies around them, and then scrambled to adapt when they shifted. Yes, there has always been the argument that if you cater your content to humans, you typically perform well.

That said, there have been obvious shifts in the types of content that resonate with the algorithm and those that don’t, dictated by changes to the Google algorithm at specific times.  

It was never a perfect or complete system; anyone who worked through (or has since learned about) the Panda and Penguin years will tell you the algorithm was always a shifting target. But the fundamentals remained stable. Aim to rank well, get found, win.

The shift we’re living through now isn’t a Google core update. Instead, we’re experiencing a structural change in how information is surfaced, interacted with, and ultimately trusted. 

AI has fundamentally transformed what searchers see

There’s a mental model baked into traditional SEO: If you’re at the top of the SERP, you’re visible. That model was accurate for a long time. But it isn’t now.

AI and LLM platforms — whether Google’s own generative features or external tools like ChatGPT, Perplexity, or Claude — don’t crawl the SERP and pick from the top results. They build understanding from training data, citation patterns, entity relationships in knowledge graphs, and signals about who is genuinely considered authoritative on a given topic. 

A high-ranking page can be largely invisible to these systems if the brand behind it hasn’t established recognition and preference (a.k.a., the quality of being known, cited, and trusted beyond its own domain).

Dig deeper: Entity-first SEO: How to align content with Google’s Knowledge Graph

Ranking no longer equals visibility

If your instinct is to treat it like another algorithm update, to find the new signals, maybe even game the new system, you are missing how dramatically the search landscape has shifted.

Think about it this way:

  • A brand can rank No. 1 for vital trophy keywords.
  • Their domain authority is strong.
  • Their technical SEO is clean, meeting best practices.
  • Their content team publishes weekly.
  • Their link profile is healthy.

By every traditional metric, this brand would be seen as winning. And yet, when their potential customers ask an AI or LLM platform which brand solutions to consider in their category, this brand doesn’t come up.

When Google’s AI Overview summarizes the landscape, it cites three competitors. When a journalist writes a roundup and asks an LLM to help research it, this brand is invisible.

They rank. Yet it’s as if they don’t exist — because ranking well doesn’t solve for recognition.

Even if the dashboards still report rankings and the tools still track positions one through ten, optimizing for a metric that’s losing its meaning is no longer a viable strategy.

User behavior is also changing

A growing share of search journeys now end before a user ever clicks a result, because they get the information they need without having to click through.

AI Overviews take the majority of the headlines for this, but there has also been a huge shift in the SERP toward featured snippet expansions. This is further amplified by the adoption of LLM-powered assistants that surface direct answers outside the traditional search environment.

Meanwhile, queries are increasingly conversational, with more and more users asking AI tools questions the way they’d ask a knowledgeable colleague or trusted friend, and they’re expecting thorough, contextualized, and personalized answers rather than a list of blue links.

In this world, the question your SEO strategy needs to answer is no longer “How do I rank?” It’s “Is my brand the preferred option in the conversation?”

And these are absolutely different questions that require different answers.

How AI ‘chooses’ brands to recognize

Think about how an AI model decides what to say when someone asks, “What’s the best CRM for a small B2B team?” It doesn’t run a Google search and summarize the top result. It draws on patterns it sees throughout the knowledge at its disposal:

  • Training data.
  • Industry publications.
  • Reviews.
  • Expert commentary.
  • Forum discussions.
  • Solution comparisons.

The brands that appear in that answer are the ones that have accumulated recognition across the broader landscape, not just the one that ranks.

This is becoming an invisible tax on brands that have focused exclusively on rankings. They may dominate the SERP today. But in the AI-mediated version of that same query, they’re absent.

“Recognition” doesn’t have to be a vague brand concept. It has specific, measurable components. Let’s break them down.

Brand awareness across the search universe

This is the most basic layer. Does your brand name appear, in context, across the search universe?

Not just on your own domain, but in industry publications, analyst reports, user reviews, forum discussions, podcast transcripts, and news coverage. You must also consider where audiences are spending time, because they are developing brand awareness on social-search destinations, too.

AI and LLM platforms are increasingly trained on and drawing from the wider internet when answering questions. Certain domains are massively outperforming others in terms of citations from these platforms, Semrush found. 

If your brand is only present on your own website, you’re harder to find and aren’t in the platforms’ go-to sources.

Topical authority 

This goes beyond keyword rankings. Topical authority means that when a given subject area comes up, your brand is consistently associated with it — not just by Google’s algorithms, but by writers, analysts, content creators, and communities. 

It’s the difference between a site that covers a topic and a brand that owns the conversation in the minds of the people who discuss it.

The signal here isn’t domain authority. It’s authority, trust, and relevance (a.k.a., preference). You are asking, “Does our brand appear alongside the recognized leaders in our space?” and “When people discuss an essential topic, are we in the conversation?”

Dig deeper: Why topical authority isn’t enough for AI search 

Entity clarity

This is the most technical layer and the one most often overlooked. An “entity” in SEO terms is a clearly defined, consistently described “thing.” This could be: 

  • Your company.
  • Your product.
  • Key voice or person. 
  • Key topic or conversation.

Put simply, it’s something that knowledge systems can reliably identify and categorize.

If your brand’s description varies across your site, your Wikipedia page (if you have one), your Google Business Profile, your Crunchbase entry, and your LinkedIn page, you create ambiguity for every system.

This is as confusing for your human audience as it is for the AI/LLM layer trying to understand who you are and what you do.

Entity clarity means having a canonical, consistent answer to the questions:

  • What is this company?
  • What does it do?
  • Who does it serve?
  • How is it different?

Brands with strong entity clarity get pulled into knowledge graphs. They get cited. They get recognized.

Dig deeper: From links to brand signals: The new SEO authority model


6 things to get you started on the path to recognition

True recognition cannot be built overnight. Instead, your focus is on engineering discovery that develops recognition over time. With that in mind, here are six ways to begin the process:

1. Audit your entity presence

Go and look at how your brand is described in the places that matter: 

  • Google’s Knowledge Panel. 
  • Wikipedia (if applicable). 
  • Wikidata.
  • Social media conversations.
  • Key person/business LinkedIn profiles.
  • Your own “About” page. 

You should be asking whether the messaging here is consistent. If your homepage describes you as “an AI-powered B2B sales platform” while the content you share on your YouTube channel says “CRM software for startups,” you have an entity problem.

2. Fix the inconsistencies

Write a canonical description of your company — one clear, accurate, jargon-free paragraph — and work to get it reflected everywhere. Then mold the content format to the needs of the various platforms you want to show up on.

Alongside this, decide which conversations are most important to your brand and consistently look to own these topics. This is part engineering discovery, but it’s also developing your entity and the topics that contribute to that.

Dig deeper: Why entity authority is the foundation of AI search visibility

3. Create citable assets

There’s a difference between content that ranks on a SERP and content that gets cited.

Ranking content is optimized around keywords, and too often, content has become homogenized in trying to meet the expectations of an algorithm so that you can rank.

Citable content, on the other hand, is original, specific, and useful enough that other people (and AI/LLM platforms) want to reference it. It’s strong enough that your audience feels they’d miss an integral part of the conversation by not featuring or citing it.

Think original research and surveys, clear and ownable frameworks or methodologies, definitions that don’t yet exist clearly in your space, and data that journalists, analysts, creators, and bloggers actually want to quote or build upon.

If the only content on your site is search-optimized blog posts, ask yourself:

  • Is there anything here that a writer at a key niche publication or a researcher at a relevant public body would want to cite? 
  • Is there anything that a content creator would want to build upon or explore further? 

If the answer is no, that’s the gap to close.

4. Build off-site recognition deliberately

This isn’t about traditional link building. It’s about building presence in the right conversations, be that industry publications, podcasts, analyst briefings, conference talks, social content, or community forums.

Every time your brand name appears in a meaningful context outside your own domain, you’re building the recognition signal that AI and LLMs draw on and that resonates with humans in the journey.

Prioritize quality of context over volume. A single, substantive mention in a respected publication is worth more than fifty low-quality directory listings.

5. Optimize for clarity and intent

A keyword is a moment. Intent is a journey. Traditional SEO has trained us to think in snapshots: a user types a query, we rank for it, we win.

But a real buying journey in 2026 looks nothing like that. It might start with a conversational AI query, move through a Reddit thread, surface a YouTube comparison, hit a review platform, and only then arrive at a branded search. The keyword at any single point is almost beside the point.

What matters is whether your brand shows up meaningfully across the full arc of that journey — not just at the moment someone is ready to convert.

Start by mapping intent honestly. 

  • What is someone actually trying to understand when they enter your space? 
  • What does the journey from problem-aware to solution-decided look like for your customer? 

Then audit where your brand is present, absent, or ambiguous across it.

The second part is clarity. As search becomes more conversational and AI-mediated, the brands that get surfaced are those that clearly communicate what they do, who they serve, and why they’re the right choice — consistently across every touchpoint. 

Vague positioning might survive a keyword-match algorithm. It won’t survive a language model deciding whether your brand is the right answer to a specific human question.

Be specific and consistent. Make sure your description holds up whether someone finds you on your own site, in a third-party review, or in an AI-generated summary.

Dig deeper: If you can’t say what problem your brand solves, AI won’t either

6. Start measuring recognition

Your current reporting probably tracks keyword rankings, organic traffic, and backlinks. I would argue that this should continue, but there should be a shift in the importance of these metrics versus the following signal:

  • [Brand] search volume: Are more people searching directly for you?
  • [Brand] + [Intent or Keyword]: Are more people associating you with specific topics?
  • Unlinked mentions: Is your brand name appearing in content that doesn’t link to you?

You can then use the following alongside these and begin to further understand if your brand is being recognized:

  • Increase in referral traffic.
  • Increase in direct traffic.
  • Increase in quality of traffic (measured in longer sessions, more pages viewed per user, and purchases earlier in the journey).
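A simple way to see whether these signals are compounding is to track month-over-month growth for each one. A sketch with hypothetical monthly numbers (the figures and the metric chosen are purely illustrative):

```python
def mom_growth(series: list[float]) -> list[float]:
    """Month-over-month growth rates for a metric such as branded
    search volume or unlinked mentions."""
    return [round((cur - prev) / prev, 3)
            for prev, cur in zip(series, series[1:])]

branded_search = [1200, 1260, 1390, 1600]  # hypothetical monthly volumes
print(mom_growth(branded_search))          # -> [0.05, 0.103, 0.151]
```

Accelerating growth in branded search or unlinked mentions, rather than any single month’s total, is the signal that recognition is building.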

This will then allow you to look towards the most important SEO metric there should ever be: revenue. Especially if you can assess and report on the development of average order value (AOV) and lifetime value (LTV) or the specific values of the pages that have seen higher traffic because of an increase in unlinked mentions and/or brand searches.

When you begin to think about these considerations, the most important shift isn’t adding new metrics to your dashboard. It’s changing what you treat as the primary signal.

Branded search volume, specifically branded search paired with intent, is one of the clearest indicators of genuine preference in both the user journey and the competitive landscape.

Someone searching for you by name, combined with a buying signal, isn’t discovering you. They’ve already decided you’re worth considering. That’s recognition doing its job.

The goal is to grow that signal deliberately, and then make sure that when someone arrives with that intent, you meet it head on.

A branded intent search that lands on a generic homepage is a wasted moment. These users are telling you exactly what they need. Your job as an SEO in 2026 is to have already built the page, the answer, the experience that closes the gap.

The supporting metrics — unlinked mentions, referral traffic, direct traffic, AOV, LTV — all tell you whether recognition is compounding into something commercially meaningful. 

And that’s ultimately the conversation that needs to happen in every boardroom and strategy session: Recognition isn’t a brand vanity play, it’s a revenue strategy.

Rankings as the primary focus have gotten us this far. Recognition, monitored through the signals identified here, is what takes us, and the SEO’s role and importance to brands, further than ever before.

Get ready for a longer game with a bigger potential to win

Here’s the uncomfortable truth about the recognition-first approach: It’s slower.

You can’t optimize your way to being well-known the way you can optimize your way to a ranking — and I think that’s what’s most intimidating to SEOs.

Recognition compounds over time, developed through consistent presence, genuine authoritativeness, relevance, and the slow accumulation of trustworthiness. But that’s also what makes it durable. 

Rankings fluctuate with every algorithm update, and the value of a No. 1 ranking is seemingly shrinking with every update due to the continued and increasing number of SERP features and AI/LLM integrations into the SERP.

Recognition, though, once established, is much harder to displace. To own AI-mediated search in the coming years, spend this period building something that AI systems — and the increasing number of humans utilizing them — genuinely recognize as authoritative.

The No. 1 ranking is a vanity metric if it ends up below the fold, stuck under a SERP of AI/LLM integrations and SERP features — ultimately ensuring nobody knows who you are.

Start building recognition. Your appearance in those top-of-page SERP features and AI/LLM integrations will follow.

Why intent alignment matters more than perfect technical SEO

8 May 2026 at 16:00

Improving technical SEO on your site may not be enough to move the needle these days. 

Once a site reaches technical parity with its competitors — the point at which a proper infrastructure no longer gives you an advantage — Google shifts its ranking criteria toward relevance. And relevance is determined by aligning with search intent. 

Let’s talk about how to make your site more relevant.

Why an intent mismatch may be suppressing your site’s performance

An intent mismatch occurs when the copy on a page doesn’t match what the user is expecting to find on it. This happens when pages aren’t relevant to a topic or have mismatched signals.

This generates poor behavior signals — users click through from a SERP, see that the page doesn’t answer their need, and leave. Google interprets these signals as evidence that the page doesn’t satisfy the query. 

This can lead to a decline in rankings, which means fewer users see the page, which means the behavioral signals worsen. It’s a feedback loop that technical SEO alone can’t resolve.

Technical SEO improvements may no longer make a difference

In the early stages of implementing an SEO strategy, the needle can move quickly. If a site is operating below the technical baseline needed for Google to properly evaluate it, applying simple fixes — such as fixing crawl errors, resolving duplicate content issues, improving page speed, and adding schema — can produce big gains.

However, after these changes, your site’s technical foundations are comparable to those of your main competitors — you hit a ceiling. Now, Google isn’t ranking pages based on which ones it can access most easily, but on those that best satisfy the user’s query. 

Your technical infrastructure no longer disadvantages you, but the rules of the ranking game have changed.

This is where intent alignment becomes the primary lever for improvement. 

Signals that reinforce search intent

Several elements signal a page’s intent and help Google decide whether the page matches the query, including: 

  • Click-through rate.
  • Engagement signals.
  • Core Web Vitals.
  • Schema type.
  • Internal linking anchor texts.
  • URL structure.

Click-through rate (CTR)

Click-through rate is shaped by your title tag, meta description, URL structure, and schema. It is also measured against intent. 

For example, if your title tag is optimized for a keyword but doesn’t match the user’s query, your CTR will drop. Google treats a low CTR as a relevance signal and adjusts rankings accordingly.

Engagement rate

Time-on-page, scroll depth, and interaction rates can suffer when intent doesn’t align with a page. 

If a user is searching to purchase something but lands on a how-to guide, they may exit that page within seconds. The same can be said of a user looking for an emergency plumber who lands on a page without a phone number. 

Engagement signals feed directly into how Google evaluates a page’s usefulness for a given query.

Core Web Vitals (CWV)

The three Core Web Vitals — Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) — measure loading performance, responsiveness, and visual stability, respectively.

A transactional page that loads slowly suffers more than a slow-loading informational article. With the transactional page, the user is ready to buy and their patience is minimal, whereas a reader in research mode can tolerate a longer wait. 

CWV thresholds matter everywhere, but their impact on conversion and bounce behavior is greater on high-intent pages. 

Schema type

Schema markup tells Google explicitly what type of content is on a page. Generally:

  • Article/HowTo is informational.
  • Product is transactional.
  • FAQ is informational and commercial.
  • Local business/event is navigational.

When schema type contradicts the content on a page, Google gets a conflicting signal, resulting in a traffic drop.

Internal linking anchor texts

The anchor text of internal links tells Google about the page that’s being linked to, including its intent. 

If a transactional landing page receives internal links with informational anchor text — “learn more about X,” rather than “get a quote for X” or “buy X” —  the intent signal Google receives about that page’s purpose gets diluted.

URL structure

Google uses URL patterns to infer page type. 

For example, URLs sitting under /blog/ are treated with informational bias. A product or service page buried under a blog path fights against that structural expectation, regardless of its content, and it may not rank well. 

Cannibalization and canonicalization

If your site has multiple pages targeting the same keyword but with different intents, neither is likely to rank well. They compete against each other and dilute the signal Google receives. 

To fix, use canonical tags to clearly signal which page is the preferred one for a given keyword, consolidate or redirect competing pages where appropriate, and ensure your internal linking reinforces the canonical choice.
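
For example, if a blog post and a service page both compete for the same keyword and the service page is the preferred one, the blog post would carry a canonical tag pointing at it. The URLs here are illustrative:

```html
<!-- In the <head> of the competing (non-preferred) page -->
<link rel="canonical" href="https://www.example.com/services/financial-analysis/" />
```

Internal links to the topic should then use the canonical URL as well, so every signal Google receives points at the same page.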

How to fix intent misalignment

Here’s an example of a common intent mismatch and some steps to audit your content and fix it. 

What an intent mismatch looks like

For example, if a user searches for “financial analysis software,” they’re looking to buy software. The keyword phrase is highly transactional. 

But if your site targets this keyword phrase for an informational blog post that explains how a person can complete a financial analysis report themselves, this creates a mismatch.

The user is looking for a product that does the analysis for them, which means they want to compare features, understand pricing, see integrations, or book a demo. 

The keyword phrase should be applied to a dedicated product or landing page that clearly outlines functionality, benefits, use cases, and pricing. This would align more with the user’s needs, resulting in more inquiries, leads, and conversions.

Identify the intent of your pages

To fix intent mismatches, start by compiling a list of the top-performing keywords that best describe your business and manually checking the Google rankings for each.

This initial research will tell you exactly what type of page and copy you should have for these keywords. For example:

  • Knowledge panels, AI Overviews, and People Also Ask boxes usually appear for informational searches.
  • Paid results usually suggest commercial intent.
  • Shopping feeds suggest a transactional keyword.

Next, add the keywords to a spreadsheet and add a column for intent. Work down the list, noting whether you think the intent behind each keyword is informational, commercial, transactional, or navigational. 

You can then create another column that states the type of page that will rank well: 

  • Informational: Blog or resource content.
  • Commercial: Service or landing pages.
  • Transactional: Collection, category, or product pages.
  • Navigational: Brand, specific service, or specific location pages.

See what your competitors are doing

Research your competitors’ pages for the keywords you’re targeting. Analyze and note what they have that your pages don’t have.

They may have:

  • Tables.
  • Comparisons.
  • Calculators.
  • Tools.
  • FAQs.
  • Reviews.
  • Step-by-steps.
  • Images.
  • Videos.
  • And more. 

Consider how to improve your own pages to match theirs. 

Measure your page’s performance based on intent metrics

Once you’ve made changes to your pages, track their performance to see whether they helped. Look at:

  • Clicks and impressions for intent-aligned keywords.
  • Rankings for core target queries.
  • Time on page.
  • Conversion rates, particularly those of previously underperforming pages.

Technical SEO still plays a decisive role

Technical SEO is still important, especially for complex, enterprise-scale sites. Here are some ways that technical SEO work can still move the needle significantly, in ways that content optimization alone can’t.

Crawl budget management

An ecommerce site with thousands of URLs can have its crawl budget consumed by low-value pages before its allotment reaches high-intent category and product pages that you want to rank. 

Cleaning up low-value pages is purely technical work and will ensure your crawl budget goes toward pages that count. 

International site architecture

Technical SEO is crucial when handling international sites that contain pages in multiple languages. 

A keyword that’s purely informational in one market may be transactional in another, reflecting different buyer behaviors and levels of market maturity. Hreflang implementation, regional subdomain or subdirectory structures, and URL strategies all affect whether the right page, with the right intent, reaches the right audience.
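
As one illustration of that technical work, hreflang annotations tell Google which language or regional version of a page to serve to which audience. The URLs below are illustrative, not from any site discussed here:

```html
<!-- Alternate versions of the same page for different markets -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/protein-powder/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/protein-powder/" />
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de/protein-pulver/" />
<!-- Fallback for users whose language/region doesn't match any version -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/protein-powder/" />
```

Each version must reference all the others (including itself) for the annotations to be valid.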

Log file analysis

A log file analysis will reveal which pages Google is successfully crawling and how frequently. For sites with intent alignment problems, Google often spends a disproportionate amount of attention crawling low-value or misaligned pages, while high-intent pages are visited infrequently. 

For small sites with a clean structure and limited number of URLs, technical SEO can reach parity quickly, so the need to shift to intent alignment happens sooner. For large, complex sites, technical and intent work often need to happen in parallel.


Technical SEO and intent need to work together

Technical SEO is still important today — think of it as a foundation that the rest of the site sits on. Pages that can’t be crawled, indexed, or rendered correctly will be unable to rank, regardless of how well their content matches user intent.

Think of intent alignment as the ceiling — it’s what determines how high a technically sound page can rank, and whether it converts the traffic it earns. 

Every page on a site should have a clearly defined intent, expressed in the right format, with the right content type. Each page should also be supported by technical signals — schema, URL structure, relevant anchor text, and so on — so that its intent is constantly reinforced. 

Microsoft Ads expands custom columns to include all conversion metrics

8 May 2026 at 00:24

Microsoft Advertising is giving advertisers more flexibility in reporting, with custom columns now supporting all conversion metrics — a move aimed at deeper, more tailored campaign analysis.

What’s happening. According to Microsoft’s product liaison Navah Hopkins, advertisers can now build custom metrics using the full range of conversion data available in the platform.

This includes both all conversions and primary conversions, allowing marketers to align reporting more closely with their specific goals.

Why we care. Standard reporting often doesn’t reflect how businesses actually measure success. By expanding custom columns, Microsoft is enabling advertisers to create metrics that better reflect their own performance definitions — whether that’s based on lead quality, revenue or blended conversion actions.

This is especially useful for advertisers managing multiple conversion types or complex funnels.

More control over performance metrics. Advertisers can now create custom columns using ratios and combinations of metrics that matter most to them — such as cost per qualified lead, blended CPA or conversion rate based on primary goals.

Revenue and ROAS calculations will also reflect the values set at the conversion goal level, giving more accurate insights tied to business outcomes.

Between the lines. This update signals a shift toward more flexible, advertiser-defined measurement — rather than relying solely on platform-standard metrics.

It also reflects ongoing demand for better reporting customisation as campaigns become more automated and complex.

What to watch:

  • How advertisers use custom metrics to guide optimisation decisions
  • Whether this leads to more consistent reporting across teams and stakeholders
  • If similar flexibility expands across other areas of the platform

Bottom line. Microsoft is giving advertisers more control over how they measure success — turning custom columns into a more powerful tool for campaign analysis.

AI Max vs DSA: Advertisers question control as Google responds

7 May 2026 at 22:07

Advertisers are starting to push back on gaps in AI Max capabilities — particularly around landing page control — as Google continues its shift away from legacy Dynamic Search Ads (DSA).

What’s happening. In a LinkedIn exchange, digital marketing expert Gabriele Benedetti raised concerns about AI Max lacking the same level of URL-based targeting controls that DSA campaigns offered.

His point: DSA allowed advertisers to structure campaigns around website architecture — using categories, URL paths and page rules to guide where traffic lands. That level of control, he argued, is not yet fully replicated in AI Max.

Why we care. For many advertisers — especially those managing large or structured websites — aligning campaign structure with site architecture is key to performance. Losing granular control over landing destinations could impact relevance, user experience and ultimately conversion rates.

This highlights a broader tension in Google Ads today: automation vs control.

Google responds. Google Ads Liaison Ginny Marvin responded, clarifying that AI Max does support several URL-based controls, including:

  • URL rules and combinations
  • Page feeds with custom labels
  • URL inclusions at ad group level and exclusions at campaign level

However, she acknowledged that not all DSA targeting rules are currently supported — such as “page contains” conditions.

Between the lines. Google is not removing control entirely — but it is reshaping how that control works. Instead of granular rule-building, advertisers are being pushed toward structured inputs like page feeds and labels that AI can interpret.

Migration reality check. For advertisers moving from DSA to AI Max, existing URL rules will carry over — but with limitations. Unsupported rules will remain active as read-only, meaning they’ll continue to function but cannot be edited.

That’s a temporary bridge, not a long-term solution.

What’s next. Google says it plans to expand controls further, including bringing content and title-based exclusions to the account level later this year.

This would complement AI Max’s existing “inventory-aware” features, which already exclude out-of-stock items automatically.

Bottom line. AI Max is evolving, but it’s not yet a full replacement for DSA when it comes to granular control — and advertisers are making that clear.

Dig deeper. Full discussion on LinkedIn.

Google AdSense removes browser back button trigger for vignette ads

7 May 2026 at 19:39

Google is dropping the back button trigger for AdSense vignette ads on June 15, 2026 due to the new Google search penalty for back button hijacking. Google wrote, “Starting June 15, 2026, the browser back button will no longer trigger a vignette ad.”

What is changing. Google explained that the back button trigger will no longer work after June 15. The change “will apply automatically for all publishers who have opted in to ‘Allow additional triggers for vignette ads’ and will take effect across all supported browsers (including Chrome, Edge, and Opera),” Google added.

A Google spokesperson told me these same updates will apply to Ad Manager as well.

Why the change. Google explained that the Google Search team “recently introduced a new policy against ‘back button hijacking’ — a practice where websites or scripts interfere with a user’s ability to navigate back to their previous page. To ensure our publishers remain compliant with these latest user experience and search quality guidelines, we are removing the trigger that shows a vignette ad when the user navigates backward from the suite of vignette ad triggers.”

This comes after the search community called the practice out to Google, and Google is making the right change here. Of course, some publishers won’t be happy, because that trigger may have earned them a lot of money.

Why we care. If you currently have the “Allow additional triggers for vignette ads” setting enabled in AdSense, keep in mind that one of those triggers — the back button trigger — will be disabled on June 15. It may impact your earnings, but it will ensure your site isn’t penalized for back button hijacking.

Google adds AI-powered bidding and demand-led budgeting to Search and Shopping

7 May 2026 at 19:00

Google is rolling out new AI-driven bidding and budgeting features across Search, Shopping and Performance Max — aimed at helping advertisers capture more demand without increasing manual effort.

What’s happening. Google is expanding its automation stack with updates like Journey-aware Bidding, Smart Bidding Exploration and demand-led budget pacing. Together, these changes are designed to help campaigns respond more dynamically to shifting consumer behaviour.

The focus: letting AI identify and act on opportunities advertisers may not see themselves.

Why we care. These updates aim to capture more conversions without increasing manual work, using AI to find new demand and optimise spend in real time. By improving how bids respond to full-funnel signals and how budgets adapt to peak demand, campaigns can become more efficient and less reliant on constant adjustments.

Ultimately, it’s about getting more value from the same budget while staying competitive in a fast-changing search landscape.

Smarter bidding gets more context. Journey-aware Bidding (beta) allows advertisers to feed more of the customer journey into optimisation, including non-biddable conversions. This gives Google AI a fuller picture of what leads to actual sales — not just initial actions like form fills.

At the same time, Smart Bidding Exploration is expanding beyond Search. Already delivering an average 27% increase in unique converting users, it will soon roll out to Performance Max and Shopping campaigns, helping advertisers tap into less obvious, incremental queries.

Budgets that follow demand. On the budgeting side, Google is building on its campaign total budgets feature, which allows advertisers to set spend across a defined period instead of relying on daily limits.

The next step is demand-led pacing — where AI automatically adjusts spend based on real-time demand, increasing budgets on high-opportunity days and pulling back during slower periods, without exceeding overall limits.

Advertisers using total budgets have already seen a reported 66% reduction in manual budget adjustments.

Why this is a big deal. Budget management has historically been one of the most manual parts of campaign optimisation. By automating pacing, Google is reducing the need for constant monitoring while aiming to improve efficiency.

What to watch:

  • How much control advertisers are willing to give up for automation
  • Whether incremental gains from exploration translate into profitable growth
  • How transparent these systems remain as they scale

Bottom line. Google is directing advertisers to AI to handle both bidding and budgeting — shifting the advertiser role from manual optimisation to guiding inputs and trusting the system to find growth.

5 JavaScript SEO lessons from top ecommerce sites

7 May 2026 at 18:00

JavaScript SEO should be a solved problem by now. It isn’t.

Ecommerce sites keep hitting the same crawling, rendering, and indexing issues they were five years ago, now stacked on top of headless builds, AI-powered recommendations, and frameworks that can hide critical content from Google.

These top ecommerce players have figured out how to ship fast, modern JavaScript without sacrificing organic visibility. Here are five lessons worth stealing.

1. Chewy uses JavaScript for UX

Chewy is one of the largest online retailers of pet food and supplies in the U.S. They use Next.js, a React framework for building websites with built-in support for server rendering, static generation, and full-stack development features.

That means you can put important content in the initial HTML response without relying on client-side JavaScript.

Let’s look at a product page like the Benebone Wishbone Chew Toy.

Chewy product page

Navigate to View Page Source and you’ll see the product title, description, pricing, reviews, Q&A, and breadcrumb navigation all present in the initial HTML. Googlebot can access it on the first pass, without waiting for rendering.

Chewy page source

That’s important because if a web crawler like Googlebot encounters issues rendering your page, the important content can still be parsed on the first crawl. With the rise of AI chatbots, some of which still don’t render JavaScript, this has become even more important.

Not everything needs to be in the initial HTML, though. Without client-side JavaScript, the page would feel static and clunky.

Take the “Compare Similar Items” carousel. It’s loaded client-side, primarily there for shoppers. The internal links could offer some SEO benefit, but they’re not critical for indexing this page the way the title, description, and pricing are.

Chewy similar items carousel

Chewy gets this balance right. The content that matters most for indexing is available on initial load. Client-side JavaScript enhances the experience rather than delivering the content that needs to be indexed.

2. Myprotein makes navigation crawlable

Myprotein sells supplements, nutrition products, and some fitness apparel.

Their site is built on Astro, a content-first framework using Islands Architecture to ship zero JavaScript by default while supporting components from React, Vue, or Svelte.

Myprotein’s navigation is the part worth studying. It’s an important SEO area for ecommerce sites, and they get it right.

Myprotein navigation

View the source on any Myprotein page and the navigation links (categories, dropdown items, and footer links) are all in the initial HTML response. Astro makes this possible through its island architecture.

Myprotein source code

The navigation ships as an interactive island, meaning Astro will hydrate it with JavaScript as soon as the browser is ready. But JavaScript makes the flyout menus interactive. It doesn’t create them.

These links are also proper <a> elements with href attributes, which is what crawlers like Googlebot need to discover and follow links. Avoid using JavaScript click handlers to simulate navigation, such as:

<div onclick="navigate(item.slug)">Clear Protein Drinks</div>

A crawler won’t follow that. Use a standard anchor element instead:

<a href="https://us.myprotein.com/c/nutrition/protein/clear-protein-drinks/">Clear Protein Drinks</a>

Not every site gets this right. When navigation depends entirely on client-side rendering, there’s a window where it’s invisible or empty.

Googlebot processes JavaScript in a separate rendering pass that can lag behind the initial crawl, which can mean delayed discovery of internal links critical for crawl efficiency and link equity distribution.

3. Harrods embeds structured data in the HTML

Harrods is a luxury department store selling fashion, beauty, and homeware.

Their site is built on Nuxt, a Vue framework for building websites with built-in routing, server rendering, and static generation, plus an opinionated project structure.

Their structured data is delivered in the initial HTML response. View the source on any product page and you’ll find structured data inside a <script type="application/ld+json"> element. The Product schema includes the product name, images, description, brand, and an Offer with price, currency, availability, and seller.

Harrods page source

JSON-LD is the format Google recommends for structured data, and because it’s in the HTML response, Google can parse it on the first crawl pass without needing to render the page.
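
A minimal Product block of the kind described here might look like the following. All values are illustrative, not Harrods’ actual markup:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Cashmere Scarf",
  "image": "https://www.example.com/images/scarf.jpg",
  "description": "A lightweight cashmere scarf.",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "offers": {
    "@type": "Offer",
    "price": "195.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock",
    "seller": { "@type": "Organization", "name": "Example Retailer" }
  }
}
</script>
```

Because this sits in the server-rendered HTML rather than being injected client-side, it is available on the first crawl pass.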

On JavaScript-powered sites, structured data can easily become a client-side dependency. If a framework fetches product data in the browser and generates JSON-LD from the response, that structured data only exists after JavaScript executes. The same is true for structured data injected through Google Tag Manager.

If markup is only added after the page loads, Google has to render the page to find it. Google has noted that dynamically generated Product markup can make Shopping crawls less frequent and less reliable, which matters when prices and availability change often.

By serving that structured data in the HTML directly, Harrods avoids this risk entirely.

4. Under Armour handles faceted navigation with JavaScript

Under Armour is a global sportswear brand selling athletic apparel, footwear, and accessories. Their site is built on Next.js, the same React framework Chewy uses.

A good place to see their JavaScript SEO in action is on category pages, where filters need to feel fast and interactive for shoppers, and be crawler-friendly.

Let’s look at the men’s shoes category page. When you apply a filter, say, selecting size 10, the product grid updates instantly without a full page reload. That’s client-side JavaScript updating the grid.

Under Armour product page

But the URL updates too. After selecting the filter, the URL becomes:

  • https://www.underarmour.com/en-us/c/mens/shoes/?prefn1=size&prefv1=10

A shopper can copy that URL, send it to a friend, or bookmark it, and land right back on the same filtered view.

Notice what the URL isn’t:

  • Not a hash fragment (#size=10), which doesn’t get sent to the server and is ignored by Google.
  • Not a mess of bracketed query strings (?filters[0][size]=10).
  • Not a dynamic route artifact like /shoes/[category]/ leaking into the live URL.

It’s a clean, readable query string with named parameters.

Under Armour is using the Next.js router to update the URL as filters change. Under the hood, it wraps the browser’s History API and uses the pushState() method to update the address bar without a reload.

When someone visits that same URL directly, the page loads with the filter already applied.
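
The pattern can be sketched as a small helper that builds the filter URL and hands it to the History API. The helper name and logic below are illustrative, not Under Armour’s actual code; the Next.js router does the equivalent internally:

```javascript
// Build a clean, crawlable filter URL using named query parameters
// (prefn1/prefv1 mirror the pattern visible in Under Armour's URLs).
function buildFilterUrl(basePath, filters) {
  const params = new URLSearchParams();
  let i = 1;
  for (const [name, value] of Object.entries(filters)) {
    params.set(`prefn${i}`, name);  // filter name, e.g. "size"
    params.set(`prefv${i}`, value); // filter value, e.g. "10"
    i += 1;
  }
  return `${basePath}?${params.toString()}`;
}

// In the browser, applying a filter without a full reload would be:
// history.pushState({}, "", buildFilterUrl("/en-us/c/mens/shoes/", { size: "10" }));
// → /en-us/c/mens/shoes/?prefn1=size&prefv1=10
```

The key property is that the resulting URL is a normal, shareable address the server can also render directly, so crawlers and users land on the same filtered view.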

5. Manors Golf loads third-party scripts asynchronously

Manors Golf sells golf apparel. Their site runs on Hydrogen, Shopify’s React-based framework for headless storefronts.

Hydrogen defers its own application scripts automatically since they load as ES modules. However, third-party scripts are the developer’s responsibility. On an ecommerce site, that can be a long list: reviews, chat, personalization, pixels, recommendations, payment scripts.

That matters for SEO in two ways. Render-blocking scripts hurt Core Web Vitals, most directly Largest Contentful Paint (LCP). They also give Googlebot more work to render the page, so it may get processed less reliably.

An external script (<script src="...">) without async or defer blocks HTML parsing. Async fetches in the background and runs when ready. Defer waits until parsing finishes.
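
In markup, the three behaviors look like this (the script URL is illustrative):

```html
<!-- Blocks HTML parsing until the script is fetched and executed -->
<script src="https://cdn.example-vendor.com/widget.js"></script>

<!-- async: fetched in parallel, executes as soon as it arrives (order not guaranteed) -->
<script async src="https://cdn.example-vendor.com/widget.js"></script>

<!-- defer: fetched in parallel, executes after parsing finishes, in document order -->
<script defer src="https://cdn.example-vendor.com/widget.js"></script>
```

For independent third-party tags like analytics and chat widgets, async is usually the right choice, since execution order rarely matters.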

Manors loads external scripts from 12 third-party domains, including Klaviyo, TikTok, Microsoft Clarity, and Gorgias.

A look at the Elements panel shows them all loading with async:

Manors async attribute

By loading third-party scripts with async, Manors keeps them from blocking the initial render. That protects LCP and reduces the work Google’s Web Rendering Service (WRS) has to do.

The balance between interactivity and crawlability

The issue isn’t that you’re using JavaScript. It’s what you’re using it for.

Googlebot can process JavaScript, but it’s slower and less reliable than reading HTML. The more your core content, structure, and navigation depend on JavaScript, the more room there is for things to go wrong.

The sites in this article all use JavaScript to enhance the experience rather than deliver it. Do that, and you won’t have to choose between a good user experience and good SEO.

8 GEO metrics to track in 2026

7 May 2026 at 17:00

Search visibility no longer starts and ends with rankings. AI-driven search has changed where discovery happens — across Google, ChatGPT, Perplexity, and beyond.

Generative engine optimization (GEO) is how brands adapt, shaping how they’re retrieved and represented inside those systems.

Traditional SEO metrics miss a growing share of that visibility. Pages are now summarized, excerpted, and cited in environments where clicks are optional, and attribution is fragmented. When an AI-generated summary appears, users click traditional search results far less often — in one analysis, just 8% of the time.

That creates a measurement gap. Assessing this gap is where GEO metrics come in.

What visibility means in generative search

GEO focuses on whether AI systems can find, understand, and select your content when generating answers. In generative search, visibility is about more than being indexed or ranked. Your content must be used — cited, summarized, or incorporated — in AI responses.

GEO builds on SEO and AEO, shifting the focus from where content ranks to how clearly it can be interpreted and trusted in context.

In practice, that means optimizing for:

  • Extractability: Can this be easily summarized?
  • Credibility: Is this a trustworthy source to cite?
  • Relevance: Does this directly resolve the query?

That’s where GEO metrics become useful.

8 core GEO metrics brands need to track in 2026

GEO performance shows up across a distinct set of signals that reflect presence, usage, and downstream impact.

1. AI citation frequency

AI citation frequency measures how often your brand, website, content, or experts are cited in AI-generated answers.

This is one of the clearest GEO metrics because it shows whether generative systems consider your content useful enough to reference.

Track citation frequency across:

  • Google AI Overviews.
  • Google AI Mode.
  • Perplexity.
  • ChatGPT search.
  • Gemini.
  • Copilot.
  • Claude, where source visibility is available.
  • Industry-specific AI tools and assistants.

Citation frequency should be tracked at the topic level, not only the domain level. A SaaS company, for example, may want to know whether it’s cited for “customer onboarding software,” “product adoption metrics,” and “best tools for reducing churn” separately.

The goal is repeatable citation across high-value topics.

2. Share of Model Voice (SOMV)

Share of Model Voice measures how often your brand appears in AI-generated answers compared with competitors.

Traditional share of voice tells you how visible a brand is across search, media, or advertising. Share of Model Voice applies that idea to AI responses.

A simple way to calculate it:

  • SOMV = Brand appearances across a prompt set ÷ Total answers generated for that prompt set

For example:

  • You analyze 100 relevant prompts.
  • Your brand appears in 28 of the resulting AI-generated answers.
  • Your Share of Model Voice is 28%.
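As a minimal illustration of the arithmetic (the function name and plain substring matching are a simplified sketch, not a production brand-detection method):

```python
def share_of_model_voice(answers: list[str], brand: str) -> float:
    """Share of Model Voice: fraction of AI answers that mention the brand.

    `answers` holds one generated answer per prompt in the prompt set.
    Substring matching is a simplification; real tracking would also
    handle brand aliases, misspellings, and entity IDs.
    """
    if not answers:
        return 0.0
    hits = sum(1 for a in answers if brand.lower() in a.lower())
    return hits / len(answers)

# 100 prompts, brand appears in 28 of the generated answers.
answers = ["Acme is a top vendor..."] * 28 + ["Other vendors lead here..."] * 72
print(f"{share_of_model_voice(answers, 'Acme'):.0%}")  # -> 28%
```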

This metric is especially useful for competitive categories because AI answers often compress the consideration set. A user doesn’t see 10 blue links. They may see three recommended vendors, two cited articles, or one synthesized answer.

That’s why relative presence matters more than absolute visibility.

3. Answer inclusion rate

Answer inclusion rate measures how often your owned content is used to generate an AI answer, regardless of whether the user clicks.

This differs from citation frequency. A brand may be mentioned without its content being cited. And a page may be used as supporting material even when the brand is not the central recommendation.

Track inclusion across informational, comparison, and decision-stage prompts.

For example, a B2B SaaS company in the SEO or analytics space might track prompts like:

  • Informational: “What is generative engine optimization?”
  • Exploratory: “How should brands measure AI search visibility?”
  • Comparison: “SEO vs GEO vs AEO”
  • Category-level: “Best GEO tools for B2B SaaS”
  • Decision-stage: “How do I evaluate GEO platforms?”

This metric helps identify which content formats are easiest for AI systems to retrieve and summarize. 

In many cases, clear definitions, comparison tables, statistics pages, glossaries, and answer-first explainers perform better than broad thought leadership pages because they’re easier to extract and reuse.

4. Entity recognition and authority

Entity recognition measures how well AI systems understand who your brand is, what it does, and what topics it should be associated with.

This matters because generative systems don’t only match keywords. They interpret entities, relationships, topical authority, and corroborating signals.

Strong entity recognition means AI systems can accurately connect your brand to:

  • Your company name.
  • Products and services.
  • Founders or executives.
  • Authors and subject-matter experts.
  • Industry categories.
  • Locations.
  • Use cases.
  • Awards, partnerships, and third-party mentions.
  • Knowledge graph data.
  • Structured data.

Google’s guidance for AI features emphasizes that the same fundamentals still apply: make content accessible, maintain a strong page experience, and use structured data to help systems interpret what’s on the page.

In practice, inconsistencies across these signals make it harder for AI systems to reliably connect your brand to the right topics.

5. Sentiment in AI responses

Sentiment measures how AI systems describe your brand.

Tracking mentions isn’t enough. Brands also need to know whether AI-generated responses frame them as credible, outdated, expensive, risky, innovative, niche, enterprise-grade, beginner-friendly, or anything else.

You can monitor:

  • Positive, neutral, and negative descriptions.
  • Recurring adjectives or claims.
  • Incorrect comparisons.
  • Outdated product details.
  • Missing differentiators.
  • Reputation issues.
  • Hallucinated features or limitations.

This is where GEO overlaps with PR and brand management. AI-generated answers can shape perception before the user ever reaches your site.

6. Prompt coverage

Prompt coverage measures how many relevant prompts surface your brand. This is the GEO version of keyword coverage, but prompts are more conversational, specific, and intent-rich.

A strong prompt set should include:

  • Informational prompts.
  • Comparison prompts.
  • “Best” and “top” prompts.
  • Problem-aware prompts.
  • Solution-aware prompts.
  • Buyer-stage prompts.
  • Role-specific prompts.
  • Use-case prompts.
  • Local or industry-specific prompts.
  • Follow-up prompts.

For a cybersecurity company, “best cybersecurity platforms” is only part of the picture. Relevant prompts also look like:

  • “How do mid-market companies reduce phishing risk?”
  • “What tools help security teams manage vendor risk?”
  • “Compare managed detection and response providers.”
  • “What should a CISO look for in an incident response partner?”

Prompt coverage shows whether your brand is visible across the ways people actually ask AI systems for help.

7. Content retrieval success rate

Content retrieval success rate measures how often AI systems pull from your owned content when answering relevant prompts. This is where it gets technical.

If your content isn’t crawlable, structured, fresh, or easy to parse, it may struggle to appear in generative outputs, regardless of subject-matter strength.

You should evaluate:

  • Crawlability.
  • Indexability.
  • Internal linking.
  • Page speed.
  • Schema markup.
  • Clear headings.
  • Answer-first formatting.
  • Author attribution.
  • Publication and update dates.
  • Canonical handling.
  • Robots.txt and AI crawler access rules.
  • Content freshness.
  • Source clarity.

Gaps in any of these areas reduce the likelihood that your content is retrieved and used — even when it’s the best answer available.

8. Conversion influence after AI interaction

Conversion influence measures how visibility in AI-generated outputs contributes to downstream business outcomes. That connection isn’t always direct — and it’s rarely cleanly attributed.

A user may see your brand in an AI answer, search your name later, visit directly, ask a colleague, or convert through a paid retargeting path.

Still, brands should track directional signals:

  • AI referral traffic.
  • Assisted conversions.
  • Branded search lift.
  • Direct traffic changes.
  • Demo or lead quality from AI-referred sessions.
  • Returning visitors after AI visibility spikes.
  • Sales conversations mentioning ChatGPT, Perplexity, Gemini, or AI Overviews.
  • Pipeline influenced by AI-discovery queries.

According to Ahrefs, AI search visitors convert at a 23x higher rate than traditional organic search visitors, even though AI traffic volume is much smaller.

That’s the measurement nuance: AI search may drive fewer sessions, but the sessions that do occur can be higher-intent.


Tools and methods for tracking GEO metrics

GEO measurement is still in its early stages, and no single platform captures the full picture. Most brands will need a mix of automated tools, manual audits, analytics configuration, and competitive testing.

Emerging GEO analytics platforms

A growing set of tools — from established SEO platforms to GEO-native products — now track how brands appear across AI-driven search experiences.

For example:

  • Semrush AI Toolkit surfaces visibility trends tied to AI-driven search.
  • SE Ranking AI Visibility Tracker monitors brand presence across AI-generated outputs.
  • Profound focuses on AI citation frequency, sentiment, and competitive visibility.
  • Peec AI tracks brand presence and representation across AI systems.

The category is still evolving, but early tools give brands a way to move from assumptions to actual visibility data.

Prompt testing frameworks

Manual prompt testing is still useful, especially when building a baseline. Create a controlled prompt set by topic, funnel stage, persona, and geography. 

Run those prompts consistently across the same AI platforms. Capture:

  • Whether your brand appears.
  • Which competitors appear.
  • Which sources are cited.
  • How your brand is described.
  • Whether the answer is accurate.
  • Whether your owned content is cited.
  • Whether the answer changes across repeated tests.

Because AI answers can vary, single-prompt testing isn’t enough. Track patterns over time.
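One lightweight way to capture those fields consistently is a small script that scores each answer against the checklist and appends the result to a log. This is an illustrative sketch: `audit_prompt` and its field names are hypothetical, and real tracking would pull answers via each platform's API and use smarter brand matching.

```python
import csv
from datetime import date

def audit_prompt(prompt: str, answer: str, brand: str,
                 competitors: list[str], owned_domains: list[str]) -> dict:
    """Score one AI-generated answer: brand presence, which competitors
    appear, and whether owned content is cited."""
    text = answer.lower()
    return {
        "date": date.today().isoformat(),
        "prompt": prompt,
        "brand_appears": brand.lower() in text,
        "competitors_seen": [c for c in competitors if c.lower() in text],
        "owned_cited": any(d.lower() in text for d in owned_domains),
    }

# Run the same prompt set on a schedule and log results for trend analysis.
rows = [audit_prompt("best GEO tools for B2B SaaS",
                     "Acme and RivalCo are popular... Source: acme.com",
                     "Acme", ["RivalCo", "OtherCo"], ["acme.com"])]
with open("geo_prompt_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```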

Analytics and logs

Use GA4, server logs, CRM fields, and referral data to identify traffic and conversions from AI platforms — particularly shifts in direct, branded, and assisted conversions.

Track known AI referrers, including ChatGPT, Perplexity, Gemini, Copilot, Claude, and other AI tools, where possible. Treat this as directional rather than complete, because many AI-influenced journeys show up as direct, branded search, or otherwise unattributed traffic.
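A minimal sketch of hostname-based referrer classification (the referrer list is illustrative rather than exhaustive, and actual AI referrer strings vary by platform and over time):

```python
from urllib.parse import urlparse

# Known AI referrer hostnames (illustrative; extend as platforms change).
AI_REFERRERS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
    "claude.ai": "Claude",
}

def classify_referrer(referrer_url: str) -> str:
    """Map a raw referrer URL to an AI platform label, or 'other'."""
    host = urlparse(referrer_url).netloc.lower().removeprefix("www.")
    return AI_REFERRERS.get(host, "other")

print(classify_referrer("https://www.perplexity.ai/search?q=geo"))  # Perplexity
```

Applied to server logs or a GA4 export, a tally of these labels gives a directional view of AI-referred sessions, with the caveat noted above that many AI-influenced visits still arrive as direct or branded traffic.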

Search Console and traditional SEO tools

Search Console still matters, even as clicks decline.

Impressions show whether content is being surfaced, while query data highlights where AI Overviews are absorbing demand, where branded search is increasing, and where content may need restructuring for answer inclusion.

Traditional SEO tools remain useful for technical health, content gaps, backlinks, keyword demand, and competitive research. GEO measurement builds on that foundation, tracking how content is surfaced in AI search.

How to build a GEO measurement framework

Start with a baseline. Choose 5-10 core topics you want AI systems to associate with your brand. For each, map prompts across the user journey. Then build a dashboard across four categories — and assign each to a clear action:

Visibility: Where do we show up?

  • AI citation frequency.
  • Share of Model Voice.
  • Prompt coverage.
  • Answer inclusion rate.

Accuracy and reputation: How are we represented?

  • Sentiment in AI responses.
  • Message consistency.
  • Misinformation or hallucination rate.
  • Competitive framing.

Technical and content: Can our content be used?

  • Content retrieval success rate.
  • Schema coverage.
  • Crawlability.
  • Freshness.
  • Entity consistency.

Business impact: Does it drive outcomes?

  • AI referral traffic.
  • Assisted conversions.
  • Branded search lift.
  • Direct traffic movement.
  • Lead quality.
  • Pipeline influenced by AI discovery.

Review these metrics together, not in isolation. Use them to decide what to update, expand, or deprioritize. Finally, connect the framework to business goals.

A publisher may prioritize citations and source inclusion. A B2B SaaS company may focus on category prompts and comparison visibility. An ecommerce brand may look at product recommendations, review sentiment, and visibility across discovery surfaces.

There’s no universal GEO dashboard — only the one that helps your team decide what to do next.

Turning GEO metrics into action

GEO metrics are only useful if they change what teams do next. Define the topics you want to be known for, track how those topics show up across AI systems, and use that data to decide what to update, expand, or deprioritize.

Treat visibility as a feedback loop. If your brand isn’t appearing, refine the content. If it’s appearing inconsistently, strengthen the signals around it. If it’s showing up but misrepresented, correct the source.

Over time, the advantage goes to teams that act on these signals consistently — not just the ones that track them.

How to use Google and LLM insights to improve international SEO

7 May 2026 at 16:00

Many companies expand internationally by duplicating their U.S. website, translating the language, and keeping the same architecture, navigation, and content structure across markets.

Then performance drops. International versions may convert at half the rate of the original site or struggle to gain traction altogether.

The issue usually isn’t translation. It’s assuming users in different markets search, navigate, and evaluate information the same way.

Using insights from Google SERPs and LLMs, here’s how to localize website architecture and navigation for international SEO.

How to use Google to localize content

Google’s SERP interface is localized for individual markets. Each element — menu order, topic filters, questions, tags, AI structures — reflects learned user behavior.

For example, if you search for the same topic or product in the UK and Italy, you’ll get different interfaces: the Italian SERP might show two shopping options, while the UK SERP puts images in position two. These aren’t arbitrary — they’re algorithmic predictions based on observed behavior in each specific region.

Google has already done the user research. You just have to extract the signals systematically. Every SERP element is optimized through behavioral data, for example:

  • Menu order reflects click-through analysis across millions of users.
  • Topic filters represent observed refinement patterns.
  • People Also Ask (PAA) boxes aggregate real user confusion points.
  • Image tags cluster search behavior patterns.
  • AI Overviews encode entity relationship patterns that a model has learned.

9 signals to create a localization framework

Use these nine SERP interface elements to extract localization intelligence.

  • Menu order/filters reveal primary and secondary search intent. They are localized and dynamic: their order shifts with seasonality, changes in intent, content behavior, and breaking news.
  • Topic filters show hierarchical refinement patterns (2-3 levels deep). They are influenced by trends and seasonality, and Google mixes classic search topics with shopping filters.
  • People Also Ask (PAA): Three levels of depth are enough for discovering patterns and recurring entities through clustering.
  • People Also Search For (PASF) are related searches, similar to PAAs, that show journey connections. Here too, three levels of depth are sufficient to obtain meaningful data.
  • Image search tags for entity search: Each tag is itself an entity related to the searched entity, or an attribute of it. Tags place entity associations in a visual search context.
  • AI Overview fan-outs are Google’s AI-predicted follow-up questions.
  • AI Mode fan-outs are conversational search path predictions, ideal for exploring entities and triplets.
  • Google web guides are pillar pages that break a topic down into subtopics. They’re ideal for understanding how Google reasons about a subject.
  • Multi-LLM comparative analyses examine how ChatGPT, Gemini, and Perplexity structure their answers. LLM answers help identify both the universal semantic core shared across regions and the region-specific entities that emerge when prompted with local context. This reveals which entities matter globally versus locally.

Table of nine localization framework signals

1. Search menu order
  • What: Reveals primary and secondary search intent.
  • Why: Menu position shows how Google classifies query intent per market.
  • How to (manual): Open an incognito browser, set location to the target city, search the query, and record visible menu items in exact order.
  • How to (with tools): BrightLocal for location simulation.

2. Topic filters
  • What: Shows hierarchical refinement patterns (2-3 levels deep).
  • Why: Maps directly to content hub organization.
  • How to (manual): Scroll below the search bar to the “Refine this search” section, document the filter chips, and click each to reveal sub-levels.
  • How to (with tools): Topically.io, Chrome DevTools (inspect filter elements), Python/Selenium for automation.

3. People Also Ask
  • What: User confusion points and anxieties aggregated from real searches.
  • Why: Direct blueprint for FAQ sections and pillar page H2 structure.
  • How to (manual): Locate the PAA box, document visible questions, click each to expand and reveal related questions (2 levels deep), and use incognito to avoid personalization.
  • How to (with tools): AlsoAsked.com (visualizes PAA trees), ValueSERP API, SerpAPI for automation.

4. People Also Search For
  • What: Journey paths and related searches showing sequential behavior.
  • Why: Reveals related entities users expect to find connected; informs internal linking.
  • How to (manual): Scroll to the bottom of the search results and document the 8-12 related searches shown automatically.
  • How to (with tools): Topically.io, Semrush (“Related Keywords”), Ahrefs (“Also talk about”), SerpAPI.

5. Image search tags
  • What: Entity search associations (visual and general); multi-word tags reveal co-occurring entities.
  • Why: Tag frequency = entity salience; informs which entities need visual content.
  • How to (manual): Click the Images tab, observe the tag chips below the search bar, document all visible tags (8-15), and note multi-word tags.
  • How to (with tools): Topically.io, SerpAPI (image search with tags), Selenium scripts.

6. AI Overview fan-outs
  • What: Google’s AI-predicted follow-up questions; entity relationships the model learned.
  • Why: Specifically informs Google AI Overview, AI Mode, and Web Guide structure; shows content sequencing for the user journey.
  • How to (manual): N/A.
  • How to (with tools): Qforia by iPullRank, Gemini API with Python/Colab.

7. AI Mode fan-outs
  • What: Conversational search path predictions; the multi-turn journey Google anticipates.
  • Why: Reveals complex topic exploration paths; growing importance as Google pushes AI Mode heavily.
  • How to (manual): N/A.
  • How to (with tools): Qforia by iPullRank, Gemini API with conversational context in Python/Colab.

8. Google Web Guide
  • What: Google’s editorial content organization; the H2-level structure Google considers comprehensive.
  • Why: Direct blueprint for navigation structure (not URL paths); categories reveal the information types users need.
  • How to (manual): Perform the search, look for the “Web Guide” or “Guide” SERP feature (appears on ~20-30% of queries), expand sections, and document H2 headings.
  • How to (with tools): N/A (no tools available).

9. Multi-LLM comparative analysis
  • What: How ChatGPT, Gemini, and Perplexity structure answers to identical queries; consensus vs. unique entities.
  • Why: Consensus entities = must-have content; weak or incomplete answers = information gain opportunities; validates citation-worthy content.
  • How to (manual): Enter the identical query in each LLM interface, copy the full responses, document response length/format/entities/citations (for Perplexity), and perform this in the local language per market.
  • How to (with tools): OpenAI API (ChatGPT), Google Gemini API, Perplexity API, all via Python/Colab for batch processing and entity extraction.

Scaling with international SEO

Here’s an example of a product breakdown between international sites:

  • 148 products × 6 query variants = 888 queries
  • Four markets = 3,552 combinations
  • Nine signals = 31,968 data points

However, you don’t need all 31,968 data points. Patterns emerge across 15 to 20 products, roughly 10% to 15% of the catalog. Entity relationships repeat across product categories, so sampling 15 products across categories can reveal the critical localization patterns.

How to transform data into taxonomy

Let’s say there’s a hypothetical website based on the Star Wars movies called “SWLegion.com,” which sells tabletop wargaming miniatures. It has several products across factions, eras, and types.

Below is SWLegion.com’s complete URL structure across four markets.

Each row lists the U.S. (root), UK (/en-gb/), Italy (/it-it/), and Spain (/es-es/) paths, in that order.

STORE HOME
  • Store Home: /store/ | /en-gb/store/ | /it-it/negozio/ | /es-es/tienda/

TYPE OF UNIT CATEGORIES
  • Accessories: /store/accessories/ | /en-gb/store/accessories/ | /it-it/negozio/accessori/ | /es-es/tienda/accesorios/
  • Battle Force Packs: /store/battle-force-packs/ | /en-gb/store/battle-force-packs/ | /it-it/negozio/pacchetti-forza-battaglia/ | /es-es/tienda/paquetes-fuerza-batalla/
  • Battlefield Expansions: /store/battlefield-expansions/ | /en-gb/store/battlefield-expansions/ | /it-it/negozio/espansioni-campo-battaglia/ | /es-es/tienda/expansiones-campo-batalla/
  • Commander Expansions: /store/commander-expansions/ | /en-gb/store/commander-expansions/ | /it-it/negozio/espansioni-comandante/ | /es-es/tienda/expansiones-comandante/
  • Core Sets: /store/core-sets/ | /en-gb/store/core-sets/ | /it-it/negozio/set-base/ | /es-es/tienda/sets-basicos/
  • Operative Expansions: /store/operative-expansions/ | /en-gb/store/operative-expansions/ | /it-it/negozio/espansioni-operative/ | /es-es/tienda/expansiones-operativas/
  • Personnel Expansions: /store/personnel-expansions/ | /en-gb/store/personnel-expansions/ | /it-it/negozio/espansioni-personale/ | /es-es/tienda/expansiones-personal/
  • Starter Sets: /store/starter-sets/ | /en-gb/store/starter-sets/ | /it-it/negozio/set-iniziali/ | /es-es/tienda/sets-iniciales/
  • Unit Expansions: /store/unit-expansions/ | /en-gb/store/unit-expansions/ | /it-it/negozio/espansioni-unita/ | /es-es/tienda/expansiones-unidad/
  • Upgrade Expansions: /store/upgrade-expansions/ | /en-gb/store/upgrade-expansions/ | /it-it/negozio/espansioni-potenziamento/ | /es-es/tienda/expansiones-mejora/

FACTION FILTERS
  • Shadow Collective: /store/shadow-collective/ | /en-gb/store/shadow-collective/ | /it-it/negozio/collettivo-ombra/ | /es-es/tienda/colectivo-sombra/
  • Mercenaries: /store/mercenaries/ | /en-gb/store/mercenaries/ | /it-it/negozio/mercenari/ | /es-es/tienda/mercenarios/
  • Galactic Empire: /store/galactic-empire/ | /en-gb/store/galactic-empire/ | /it-it/negozio/impero-galattico/ | /es-es/tienda/imperio-galactico/
  • Galactic Republic: /store/galactic-republic/ | /en-gb/store/galactic-republic/ | /it-it/negozio/repubblica-galattica/ | /es-es/tienda/republica-galactica/
  • Rebel Alliance: /store/rebel-alliance/ | /en-gb/store/rebel-alliance/ | /it-it/negozio/alleanza-ribelle/ | /es-es/tienda/alianza-rebelde/
  • Separatist Alliance: /store/separatist-alliance/ | /en-gb/store/separatist-alliance/ | /it-it/negozio/alleanza-separatista/ | /es-es/tienda/alianza-separatista/

TYPOLOGY FILTERS
  • Heroes: /store/heroes/ | /en-gb/store/heroes/ | /it-it/negozio/eroi/ | /es-es/tienda/heroes/
  • Varies: /store/varies/ | /en-gb/store/varies/ | /it-it/negozio/varie/ | /es-es/tienda/varios/
  • Infantry: /store/infantry/ | /en-gb/store/infantry/ | /it-it/negozio/fanteria/ | /es-es/tienda/infanteria/
  • Tools: /store/tools/ | /en-gb/store/tools/ | /it-it/negozio/strumenti/ | /es-es/tienda/herramientas/
  • Vehicles: /store/vehicles/ | /en-gb/store/vehicles/ | /it-it/negozio/veicoli/ | /es-es/tienda/vehiculos/

ERA FILTERS
  • All Eras: /store/all-eras/ | /en-gb/store/all-eras/ | /it-it/negozio/tutte-ere/ | /es-es/tienda/todas-eras/
  • Age of Rebellion: /store/age-of-rebellion/ | /en-gb/store/age-of-rebellion/ | /it-it/negozio/era-ribellione/ | /es-es/tienda/era-rebelion/
  • The New Republic: /store/the-new-republic/ | /en-gb/store/the-new-republic/ | /it-it/negozio/nuova-repubblica/ | /es-es/tienda/nueva-republica/
  • Fall of Jedi: /store/fall-of-jedi/ | /en-gb/store/fall-of-jedi/ | /it-it/negozio/caduta-jedi/ | /es-es/tienda/caida-jedi/
  • Reign of the Empire: /store/reign-of-the-empire/ | /en-gb/store/reign-of-the-empire/ | /it-it/negozio/regno-impero/ | /es-es/tienda/reino-imperio/

CONTENT SECTIONS
  • Lore Section: /lore/ | /en-gb/lore/ | /it-it/lore/ | /es-es/lore/
  • Rules Section: /star-wars-legion/rules/ | /en-gb/star-wars-legion/rules/ | /it-it/star-wars-legion/regole/ | /es-es/star-wars-legion/reglas/
  • Mini Painting Academy: /mini-painting-academy/ | /en-gb/mini-painting-academy/ | /it-it/accademia-pittura-miniature/ | /es-es/academia-pintura-miniaturas/
  • About Us: /about-us/ | /en-gb/about-us/ | /it-it/chi-siamo/ | /es-es/sobre-nosotros/

Extract entities across signals

Using the above product catalog as an example, use each product as a query seed.

Start manually, with 10-15 products, to internalize the patterns. Then automate with APIs/Python and store the results in CSV/JSON. Cross-reference entities to identify co-occurrence patterns.

Combine all nine signals into a unified dataset. Then, extract entities mentioned across signals.

Weighted co-occurrence analysis

Track which entities appear together across signals. This reveals which concepts users naturally connect in their thinking.

Each signal has a different reliability weight based on how directly it reflects user intent:

  • LLM mentions: 3.0 (high confidence — models trained on usage patterns)
  • Query fan-outs: 2.5 (AI predicts relationships from observed behavior)
  • PAA: 2.0 (actual user questions connecting entities)
  • PASF: 2.0 (sequential journey connections)
  • Image tags: 1.5 (visual/entity search context)
  • Topic filters: 1.0 (broad categorization)
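Under these assumptions, the weighting can be sketched as a small script (the observation format and signal keys below are hypothetical; any record of "entities seen together in one signal" would work):

```python
from collections import Counter
from itertools import combinations

# Signal reliability weights from the list above.
SIGNAL_WEIGHTS = {
    "llm": 3.0, "fanout": 2.5, "paa": 2.0,
    "pasf": 2.0, "image_tags": 1.5, "topic_filters": 1.0,
}

def weighted_cooccurrence(observations):
    """observations: list of (signal, [entities seen together]).

    Returns per-pair weighted co-occurrence scores and the market's
    total weight (the sum over all entity pair connections)."""
    pairs = Counter()
    for signal, entities in observations:
        weight = SIGNAL_WEIGHTS[signal]
        for a, b in combinations(sorted(set(entities)), 2):
            pairs[(a, b)] += weight
    return pairs, sum(pairs.values())

obs = [("paa", ["AT-ST", "Battle of Hoth"]),
       ("llm", ["AT-ST", "Battle of Hoth", "Snowtrooper"])]
pairs, total = weighted_cooccurrence(obs)
print(pairs[("AT-ST", "Battle of Hoth")])  # 2.0 + 3.0 = 5.0
```

Summing `total` per market yields the kind of market-level complexity comparison described next.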

For example, say there’s a significant variation in entity relationship complexity across markets, measured as total weighted co-occurrence scores (sum of all entity pair connections, weighted by signal reliability):

  • U.S.: 2,639.5 total weight
  • UK: 2,359.0 total weight
  • Spain: 2,266.0 total weight
  • Italy: 1,084.5 total weight

This means the U.S. and UK show 2x more entity relationship complexity than Italy, indicating more complex user journeys requiring deeper content architectures.

Cross-market entity patterns

Not all entities matter equally across markets. Your content strategy depends on recognizing three distinct patterns:

  • Universal entities (all four markets): These appear consistently across the U.S., UK, Spain, and Italy. Users everywhere expect this content.
  • Market-specific: These entities show concentrated interest in just one market based on current signal validation. Cover these entities deeply in their market of reference but maintain lighter coverage in other markets. In future quarterly re-analysis, verify if interest for these entity types has increased in other targeted markets to determine whether to expand coverage depth accordingly.
  • Regional (2-3 markets): These entities appear in most but not all markets, requiring selective deployment. Build content, deploy to 2-3 markets, and evaluate ROI before expanding.

Ontology pattern recognition

Beyond individual entities, track how different types of entities connect. This reveals what content formats work in each market.

Entities cluster into four categories: 

  • Products (actual sellable items)
  • Lore (Star Wars universe entities)
  • Rules (game mechanics)
  • Painting (techniques and processes)

Cross-ontology co-occurrence reveals which content types users expect:

  • When products and lore entities appear together frequently across signals, users think in terms of narrative context for purchases:
    • Product × Lore = Battle scenario content (example: “AT-ST” + “Battle of Hoth” = Hoth battle guide)
  • When products and painting entities co-occur, users research techniques for specific models:
    • Product × Painting = Unit-specific technique guides (example: “Clone Trooper” + “blue markings” = 501st painting tutorial)
  • When painting and lore entities connect, users want thematic aesthetic guidance:
    • Painting × Lore = Themed painting content (example: “terrain” + “Scarif” = tropical planet terrain tutorial)
  • When lore entities cluster together, users compare or navigate between story elements:
    • Lore × Lore = Era/faction comparisons (example: “Clone Wars” + “Galactic Civil War” = timeline guide)

Market-specific pattern differences

These ontology patterns vary dramatically by market, revealing which entities matter, how users think about connections, and how to optimize internal linking architecture. Here’s an example weighted co-occurrence analysis:

USA: Product × Lore, weight 60.0 (highest of any market)

  • What this means: American users discover products through lore narratives — build battle scenarios linking story to miniatures.
  • Internal linking strategy: From the “AT-ST Walker” product page, prominently link to /lore/battle-of-hoth/ with anchor text emphasizing narrative context (“Deploy the AT-ST in the iconic Battle of Hoth”). From lore pages, link back to related products within battle scenario descriptions.

UK: Painting × Lore, weight 15.0 (appears only in the UK and U.S.)

  • What this means: British users want battle-themed painting guides — content like “Paint a Hoth snow base” works here but is less relevant elsewhere.
  • Internal linking strategy: From /mini-painting-academy/snow-base-tutorial/, link to /lore/battle-of-hoth/ and to relevant product pages like “Snowtrooper Unit Expansion.” Create bidirectional links between painting techniques and the lore/battle contexts where those techniques apply.

Spain: Product × Lore, balanced at 27.0 each

  • What this means: Spanish users balance story interest with product focus — equal emphasis needed.
  • Internal linking strategy: Moderate internal linking between product and lore pages. From “Luke Skywalker Commander” product page, include links to both /lore/luke-skywalker/ and related products. Avoid over-emphasizing either connection type.

Italy: Product × Lore weight 10.5 (weakest)

  • What this means: Italian users don’t connect lore to products — skip elaborate battle scenarios. Focus on product specs and painting basics.
  • Internal linking strategy: Minimize product-to-lore internal links. From product pages, prioritize linking to /mini-painting-academy/ tutorials and related products by faction or unit type. Keep lore pages separate from product discovery paths.


How to validate your framework

Entities should appear in 3+ signals to be validated. One appearance could be an anomaly or noise.
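A minimal sketch of that validation rule, assuming each signal yields a set of observed entities (the signal names are illustrative):

```python
from collections import Counter

def validated_entities(signal_hits: dict[str, set[str]],
                       min_signals: int = 3) -> set[str]:
    """Keep entities observed in at least `min_signals` distinct signals.

    signal_hits maps signal name to the entities seen in that signal;
    anything below the threshold is treated as potential noise."""
    counts = Counter()
    for entities in signal_hits.values():
        counts.update(entities)
    return {e for e, n in counts.items() if n >= min_signals}

hits = {
    "paa": {"Darth Vader", "terrain"},
    "image_tags": {"Darth Vader", "terrain"},
    "llm": {"Darth Vader"},
    "pasf": {"painting"},
}
print(validated_entities(hits))  # {'Darth Vader'}; terrain only hit 2 signals
```

Running this per market produces the validated entity counts used in the coverage gap analysis below.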

False-positive check

Signals reveal what users reference, not always what they want. For example, suppose a site appears in various signals across multiple markets and is confirmed as a universal entity in LLM responses, but its presence in Image Search tags is minimal.

  • Interpretation: Users ask about the site as a reference point but aren’t searching for images of its products extensively.
  • Strategy: Build a comparison article/FAQ, not extensive image galleries or deep informational content.
  • Validation question: Does the signal show what users want or what they’re using for context?

Coverage gap analysis

For example, let’s say signal validation reveals dramatically different entity landscapes across markets — in other words, how many distinct, validated entities appeared in 3+ signals per market:

  • U.S.: 31 entities
  • UK: 28 entities
  • Spain: 29 entities
  • Italy: 16 entities

Italy has half the entity coverage of other markets, revealing a fundamental difference in how Italian users approach this product category — a strong strategic signal. 

If Italian users show concentrated interest in fewer entities, with heavier emphasis on foundational questions (for example, PAAs) rather than deep entity exploration, they’re asking, “what is this?” and “how does this work?”

There’s an information gain opportunity here: while competitors might translate all 31 U.S. entities to Italian, creating shallow content Italian users don’t need, you can dominate the 16 entities that actually matter to this market with comprehensive, beginner-focused content.

Actions to take:

  • Italy needs foundational 101-level content rather than deep entity exploration.
  • FAQ-driven approach matches PAA dominance in Italian signals.
  • Invest in clear product specifications, basic painting tutorials, and simple rule explanations.
  • Build comprehensive coverage of the 16 validated entities before considering the other 15.
  • Monitor quarterly. If Italy’s validated entity count grows, market maturity is increasing; expand coverage accordingly.

You’re not trying to force-fit U.S. models onto Italian users; you’re serving the actual information needs of this market.

How to structure internal architecture

Maintain a consistent technical structure across all markets with canonical tags, hreflang, CMS architecture, and analytics.

For the complete structure of the SWLegion.com example, see its full architecture.

Ecommerce section:

  • U.S. (root): /store/, /store/{category}/, /store/{filter}/
  • UK: /en-gb/store/, /en-gb/store/{category}/, /en-gb/store/{filter}/
  • Italy: /it-it/negozio/, /it-it/negozio/{categoria}/, /it-it/negozio/{filtro}/
  • Spain: /es-es/tienda/, /es-es/tienda/{categoría}/, /es-es/tienda/{filtro}/

Content sections:

  • U.S. (root): /lore/{entity}/, /star-wars-legion/rules/{topic}/, /mini-painting-academy/{guide}/, /about-us/
  • UK: /en-gb/lore/{entity}/, /en-gb/star-wars-legion/rules/{topic}/, /en-gb/mini-painting-academy/{guide}/, /en-gb/about-us/
  • Italy: /it-it/lore/{entità}/, /it-it/star-wars-legion/regole/{argomento}/, /it-it/accademia-pittura-miniature/{guida}/, /it-it/chi-siamo/
  • Spain: /es-es/lore/{entidad}/, /es-es/star-wars-legion/reglas/{tema}/, /es-es/academia-pintura-miniaturas/{guía}/, /es-es/sobre-nosotros/

Slug localization:

  • Store slugs fully localized (/store/ → /negozio/ → /tienda/).
  • Content section slugs localized where natural (/rules/ → /regole/ → /reglas/, /mini-painting-academy/ → /accademia-pittura-miniature/).
  • Entity slugs within content localized for official translations (Spain: /es-es/lore/conde-dooku/ vs English /count-dooku/).
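A simple way to keep these mappings consistent is a per-market slug table. The sketch below uses the store and rules slugs from the example; the data structure itself is an assumption for illustration, not a real CMS export.

```python
# Localized slug maps per market, drawn from the SWLegion.com example.
SLUGS = {
    "us":    {"prefix": "",       "store": "store",   "rules": "rules"},
    "en-gb": {"prefix": "/en-gb", "store": "store",   "rules": "rules"},
    "it-it": {"prefix": "/it-it", "store": "negozio", "rules": "regole"},
    "es-es": {"prefix": "/es-es", "store": "tienda",  "rules": "reglas"},
}

def localized_url(market: str, section: str, slug: str) -> str:
    """Compose a market-specific URL from the shared path structure."""
    m = SLUGS[market]
    return f"{m['prefix']}/{m[section]}/{slug}/"

print(localized_url("it-it", "store", "espansioni-unita"))
# -> /it-it/negozio/espansioni-unita/
```

Keeping the table in one place means the shared path structure stays identical across markets while only the slug translations differ.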

What stays consistent

  • Path structure: /lore/, /store/, /rules/ exist everywhere even if entity coverage or category emphasis differs.
  • Product inventory: Physical products remain the same across markets (same 148 SKUs), though merchandising and filtering emphasis may vary.
  • Core navigation sections: All markets have Store, Lore, Rules, Mini Painting Academy, About Us, but internal linking architecture and content depth within each section adapts to market signals.

Entity coverage

Create a master entity list flagged by market validation. This will become your strategic content roadmap, preventing duplication while ensuring comprehensive coverage where it matters.

Entities cluster into two strategic categories:

  • Universal entities validated across all 4 markets: Darth Vader, Luke Skywalker, painting, terrain, miniatures, core factions (Galactic Empire, Rebel Alliance, Separatist) — these form your foundation and users everywhere expect this content.
  • Market-specific entities showing concentrated validation in one or two markets: 501st Legion (U.S./UK only), Shatterpoint comparison (Italy only), Wookiees (Spain only) — these are your localization differentiators.

Phase 1 build: Start with universal entities. Build 12-15 cornerstone pages and translate them to all four markets for 48-60 total pages. These establish baseline coverage across your entire international footprint.

Phase 2 build: Add market-specific entities. Create 25-35 localized pages, deployed only to validated markets. A 501st Legion deep-dive may go live in the U.S. and UK but not in Italy or Spain.

Total strategic content: 73-95 pages across four markets. Compare this with the brute-force alternative of covering all 148 product entities in all four markets and adding lore/rules/painting content for every entity everywhere, which would create hundreds of wasted pages.
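The two-phase split can be computed directly from the validated entity list. This sketch uses a tiny illustrative dataset, but the partition logic is the point:

```python
# Sketch: partition a master entity list into universal vs. market-specific
# entities and estimate page counts per build phase. Data is illustrative.

MARKETS = ["us", "uk", "it", "es"]

# entity -> markets where it passed validation (3+ signals)
validated = {
    "darth-vader":  {"us", "uk", "it", "es"},
    "terrain":      {"us", "uk", "it", "es"},
    "501st-legion": {"us", "uk"},
    "wookiees":     {"es"},
}

universal = [e for e, m in validated.items() if m == set(MARKETS)]
market_specific = {e: m for e, m in validated.items() if m != set(MARKETS)}

# Phase 1: every universal entity is built in all four markets.
phase1_pages = len(universal) * len(MARKETS)
# Phase 2: market-specific entities go live only where validated.
phase2_pages = sum(len(m) for m in market_specific.values())

print(phase1_pages, phase2_pages)  # 8 3
```

Run against a real 28-to-40-entity list, the same arithmetic produces the 73-95 page roadmap described above.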

How to implement an AI roadmap

Building out your international SEO can present some challenges. Here are some common roadblocks and strategies for working around them.

Implementation challenges

Let’s look at some hurdles to applying AI to international search.

CMS limitations

Most CMS platforms aren’t designed for entity-level localization. What’s needed is conditional page creation based on market validation.

For example: Add a “Target Markets” custom field to your CMS with checkboxes for different markets — U.S., UK, Italy, Spain, in our example. 
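The “Target Markets” field then drives conditional page generation at build time. A minimal sketch of that logic, with a hypothetical CMS record shape:

```python
# Sketch: conditional page creation from a "Target Markets" custom field.
# The record shape is hypothetical; any headless CMS export would work.

def pages_to_publish(record: dict, all_markets=("us", "uk", "it", "es")) -> list[str]:
    """Return the market subfolders a page should be generated for.
    If no checkboxes are ticked, default to all markets (universal entity)."""
    targets = record.get("target_markets") or list(all_markets)
    return [m for m in all_markets if m in targets]

page = {"slug": "501st-legion", "target_markets": ["us", "uk"]}
print(pages_to_publish(page))  # ['us', 'uk']
```

Defaulting untagged pages to all markets keeps universal content frictionless while market-specific pages require an explicit opt-in.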

Content team scaling

Creating dozens of localized pages requires subject matter expertise, native language writers, and cross-market coordination. 

Start with one market — the second-largest, not the largest, to learn with a lower risk. Build 5-10 entity pages, validate traffic and conversions, and then scale to other markets only when ROI is proven.

Maintenance 

Markets evolve, new products launch, entities gain or lose relevance, and signals need periodic re-analysis. 

Re-run an abbreviated nine-signal analysis on the top 20 entities each quarter. Look for significant shifts: If an entity drops from 3+ signals to one, consider deprecating its content.
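That deprecation rule is easy to automate once quarterly signal counts live in a spreadsheet or database. A small sketch, with illustrative counts:

```python
# Sketch: flag entities whose signal validation collapsed between quarters
# (3+ signals previously, one or zero now) as deprecation candidates.

def deprecation_candidates(prev: dict[str, int], curr: dict[str, int]) -> list[str]:
    """prev/curr map entity -> number of validating signals that quarter."""
    return [e for e, n in prev.items() if n >= 3 and curr.get(e, 0) <= 1]

q1 = {"shatterpoint": 4, "terrain": 5}
q2 = {"shatterpoint": 1, "terrain": 5}
print(deprecation_candidates(q1, q2))  # ['shatterpoint']
```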

Continuous intelligence systems

Here are some tools to help monitor AI systems:

  • Wikipedia edit monitoring: Create watchlists for 10-15 key entities per market, and set email alerts for significant edits. Major additions or edit wars signal rising interest — if that happens, review entity page content and update accordingly.
  • Reddit velocity tracking: Track comment velocity on entity mentions. Entities mentioned in 5+ threads in one week (an unusual spike) should be investigated. 
  • TikTok and Instagram trends analysis: Monitor trending hashtags and viral content patterns related to your product categories. Rising hashtag usage or viral content patterns can indicate emerging entity interest before they appear in traditional search signals.
  • Google Trends “rising” analysis: Monitor “rising” queries monthly (not absolute volume). Queries with +100% week-over-week growth signal emerging interest. 
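The Reddit velocity rule above (5+ threads in one week) is the most mechanical of these checks, so it is a good candidate for a scheduled script. A sketch, assuming weekly thread counts have already been collected by a scraper or API client:

```python
# Sketch: surface entities with unusual Reddit thread velocity.
# Weekly thread counts per entity are illustrative input.

def velocity_spikes(weekly_threads: dict[str, int], threshold: int = 5) -> list[str]:
    """Entities mentioned in `threshold`+ threads this week warrant review."""
    return sorted(e for e, n in weekly_threads.items() if n >= threshold)

this_week = {"wookiees": 7, "terrain": 2, "count-dooku": 5}
print(velocity_spikes(this_week))  # ['count-dooku', 'wookiees']
```

The same threshold-and-sort pattern applies to rising Google Trends queries or hashtag counts; only the data source changes.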

Building a roadmap

Now that you know what roadblocks lie ahead, here’s how to implement the plan.

Month 1: Foundation

  • Choose one market for learning and prototyping. Select 10-15 products to sample and conduct a systematic nine-signal analysis.
  • Create an entity list with co-occurrence weights and 3-5 validated market-specific entities.

Months 2-3: Content creation

  • Build universal pillar pages and translate them to all markets, then build market-specific entity hubs, starting with a single market. Implement internal linking based on co-occurrence weights.

Months 4-6: Validation and expansion

  • Monitor entity coverage rates, LLM topic visibility, and market-specific traffic growth.

Months 7-12: Full multi-market rollout

  • Expand to all markets. Run continuous intelligence systems (Wikipedia watchlists, Reddit monitoring, TikTok/Instagram trend analysis) and schedule quarterly signal re-analysis.

How to measure success

After implementing changes and incorporating AI into your international search strategy, here’s how to determine what’s working and where to improve.

Entity coverage rate

This metric tells you if you’re covering entities that actually matter to users in each specific market, not just translating pages indiscriminately.

  • Formula: (Entity pages built / Total validated entities from signal analysis) × 100
  • Example: Your signal analysis validated 28 entities in the UK (entities appearing in 3+ signals). You built dedicated pages for 22 of these entities. Your entity coverage rate is: 22/28, or 79%.
  • Target: 70%+ coverage for each priority market.
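The formula can be expressed as a small helper; the numbers below mirror the UK example:

```python
# Entity coverage rate: (entity pages built / validated entities) x 100.

def entity_coverage_rate(pages_built: int, validated_entities: int) -> float:
    return round(pages_built / validated_entities * 100, 1)

print(entity_coverage_rate(22, 28))  # 78.6 (the article rounds to 79%)
```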

Consider the strategic difference. Your UK site covers 79% (22 of 28 validated entities), focusing resources on entities users actually search for, ask questions about, and engage with across multiple signals.

A competitor that translates all 148 product entities achieves “100% coverage” on paper, but wastes resources covering entities UK users show minimal interest in.

Your 21% gap (6 uncovered entities) isn’t a failure; it’s strategic prioritization.

These lower-priority entities can be added if quarterly re-analysis shows their signal validation strengthening — moving from 2 signals to 3+ or appearing in additional signal types.

Tools for tracking entity coverage:

  • Screaming Frog: Crawl your site and count entity pages by market subfolder.
  • Google Sheets: Cross-reference validated entity lists against live URL inventory.
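The cross-reference step is a set difference. A sketch, assuming a validated entity list and a URL inventory such as a crawl export (names and URLs are illustrative):

```python
# Sketch: find validated entities with no live page in a market subfolder,
# given a crawled URL inventory (e.g. exported from a site crawler).

def coverage_gap(validated: set[str], crawled_urls: list[str], prefix: str) -> set[str]:
    """Validated entities not covered by any URL under the market prefix.
    Assumes the entity slug is the last path segment of its page URL."""
    covered = {
        url.rstrip("/").rsplit("/", 1)[-1]
        for url in crawled_urls
        if url.startswith(prefix)
    }
    return validated - covered

validated_uk = {"darth-vader", "501st-legion", "terrain"}
crawl = ["/en-gb/lore/darth-vader/", "/en-gb/lore/501st-legion/", "/en-gb/store/"]
print(sorted(coverage_gap(validated_uk, crawl, "/en-gb/")))  # ['terrain']
```

Dividing the covered count by the validated count gives the coverage rate for that market.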

LLM topic visibility

Track whether your site appears in LLM responses for key topics, not individual citation counts. The goal is to measure topical authority, not vanity metrics.

For ChatGPT/Gemini/Perplexity/Claude: Use WAIKay.io to systematically track your visibility across multiple LLMs. The platform allows you to:

  • Set up monitoring for specific queries across ChatGPT, Gemini, Perplexity, and other AI platforms
  • Track whether your domain appears in responses (mentions, summaries, citations)
  • Monitor visibility changes over time with historical tracking
  • Generate reports showing presence/absence per topic, per LLM

For AI Overviews/AI Mode: Use Semrush One to monitor Google’s AI-powered SERP features. Alternative tools, such as Ahrefs, Advanced Web Ranking, and SISTRIX (AI Overview presence reporting), offer similar capabilities.

Target benchmarks:

  • Universal topics: Visibility in 2+ LLMs across all markets.
  • Market-specific topics: Visibility in 2+ LLMs for a specific market’s language queries.

This validates if your content quality and entity coverage are sufficient for LLMs to consider you an authoritative source worth including in their responses. Lack of visibility signals content gaps or insufficient topical depth.

Incorporate AI and LLMs into your international SEO today

Most international sites treat taxonomy as infrastructure: build once, maintain minimally, and refresh every 2-3 years during a website redesign. 

Our SWLegion.com example started with an identical architecture across four markets. By implementing this strategy, we showed how to localize its architecture and navigation and optimize for each market.

This strategy builds something fundamentally different — architecture that breathes with market behavior, responding to signals rather than assumptions. You’re cultivating taxonomy rather than just maintaining a website.

Your new taxonomy will reflect current user behavior and also anticipate and adapt to behavioral shifts before competitors notice that the market has changed.

AI SEO punishes lazy marketing strategies by Brick Marketing

7 May 2026 at 15:00

Over the past few decades, digital marketing has settled into a stable system. While it spans SEO, content marketing, social media, and digital advertising, many programs have relied on a predictable core that didn’t always use every available channel.

This gave digital marketers a sense of predictability and comfort. For years, teams stuck with what worked and refined execution through the same familiar framework. AI search has disrupted that comfort and exposed our inconsistencies. To succeed with AI SEO, we need a much more comprehensive approach.

AI SEO rewards strategic marketing 

Over the past 15 to 20 years, digital marketing settled into a predictable rhythm, with each channel playing a defined role. 

Content marketing, social media, SEO, paid advertising, and email followed similar strategies with little variation. Little happened outside this structure, and many of us grew “lazy.” 

The structure worked, so we let other strategies fall away.

The problem? It created a false sense of security. We should have been doing more all along, and those broader strategies are now driving real visibility in AI search.

AI has disrupted digital marketing in ways that weren’t obvious at first. It’s changed user search behavior and how brands are evaluated. 

Traditional search relied on algorithms and a primary source. AI pulls from multiple inputs across many sources.

Those sources should already exist. They’re your marketing — the way you present your brand across platforms like social media, third-party directories, press releases, brand mentions, and more. In short, anything outside your website.

In this system, your website and the strategic marketing that supports it are just one part of the whole. It’s now one of many sources AI uses to understand your brand and offer. AI search reflects the strength of marketing across all these sources.

Visibility is not limited to your website

One of the biggest disruptions AI has caused is that the website is no longer central to your marketing strategy or visibility. It’s now part of a much larger ecosystem. You still need a strong website, as always, but you must account for how much broader the landscape has become with AI search.

While driving traffic to your website still matters, it’s no longer the only focus. The goal used to be maximizing website visibility — achieve that, and results would follow. That still works to a degree, but treating it as the only path to visibility is outdated.

AI pulls information from a wide range of sources — articles, brand mentions across platforms, third-party profiles, published content — and all of it shapes how it understands who you are and what you do. 

Your website is just one part of this broader scope. If you focus only on your website, you limit AI’s ability to find you.

This is where most marketing programs fall short, especially those built before AI. To modernize, your brand must be visible across a much wider scope. 

AI SEO requires an intentional presence

AI favors brands that show up online with intent. They’ve built a cohesive ecosystem across the wider internet. 

A segmented marketing approach may have worked in the past, but it no longer has the same impact. We got away with it because, as long as each channel performed well, the approach felt effective and met our goals.

AI doesn’t allow this anymore. It favors brands with many connected signals, because it links them across the internet. It evaluates how your brand appears across these sources and looks for consistent messaging and expertise. 

When these signals align, your AI visibility strengthens. When they’re scattered or your broader presence is weak, your AI visibility is weak.

This is why it’s important to develop a marketing strategy that accounts for this. A brand with a coordinated presence across the internet — across its website and other marketing channels — is what’s required today. 

Lazy marketing strategies are exposed

This is the real issue with “lazy marketing.” We define it as sticking to the old approach — treating each channel separately and relying on the same tactics that have always worked. That approach may have delivered results before, but those days are gone.

At the time, this approach still delivered results. A strong SEO foundation consistently drove leads, and paid advertising offered similar predictability. These tactics worked so well that there was little need to go beyond them.

We need to go beyond it to keep up. Your brand needs to show up across multiple sources — that’s how AI finds you. If your competitors are already building their presence, you need to do the same or get left behind. They’ll take more space in AI-generated answers than you.

This means that if you have gaps in your marketing, you can’t hide them anymore. AI exposes these inconsistencies and forces you into the broader digital space.

Transition into the era of AI search 

Now is the time to move beyond the old model and adopt a new understanding of what works in digital marketing. The old approach no longer works on its own — it must be part of a broader system.

These are the strategies we should have been using all along: press releases, directory listings, and marketing beyond your own website.

AI search rewards an all-encompassing marketing strategy because that’s what works. Core channels like social media, SEO, content marketing, and paid advertising still matter, but they’re not enough on their own. 

AI hasn’t changed the rules. It has enforced them.

This is what has always worked in marketing. The difference now is that you can’t get away with doing less.

Microsoft: AI answers need a smarter search index

6 May 2026 at 21:56
Microsoft Bing traditional search vs. grounding systems

The search index is evolving from ranking pages to supporting AI-generated answers. In a technical blog post “on the evolving technical characteristics of the index,” published today, Microsoft Bing explained why AI search needs a different indexing system than traditional web search.

Traditional search vs. grounding systems. Microsoft said traditional search can rely on users to self-correct, while AI systems need stronger evidence because they generate committed answers.

  • Traditional search is built around documents. Users get ranked links, scan the results, and decide what to trust.
  • Grounding systems are built around supportable facts with clear sourcing. The AI uses that information to generate a combined answer, where mistakes can compound across sources and reasoning steps.

They shared this table:

Traditional search vs. grounding for AI responses

What’s different. Traditional ranking is optimized for relevance. Grounding must also assess whether information is accurate, up to date, clearly sourced, and sufficient to support an answer. That means AI indexes need to account for whether:

  • A page’s meaning survives chunking and transformation.
  • The source is clearly identified.
  • The information is fresh enough to use.
  • Important facts are actually retrievable and groundable.
  • Disagreements between sources are detected before generating an answer.

Stale content. Stale content creates a different risk in AI answers, Microsoft said. In traditional search, it may hurt ranking quality. In grounding systems, it can directly generate a wrong answer.

Contradictions. A search engine can rank one source above another and let users decide. Grounding systems must recognize conflicting evidence before turning it into a single answer, according to Microsoft.

Retrieval is more complex. Search is usually a single interaction: query in, ranked results out. Microsoft said grounded AI systems may retrieve information repeatedly, refine based on earlier results, combine evidence, and reassess confidence before answering.

How indexing quality is measured. Search quality has traditionally focused on ranking performance and user behavior. Grounding systems also need to measure factual fidelity, source quality, freshness, evidence strength, and conflict detection. The industry is still learning how to rigorously measure grounding quality, Microsoft said.

Grounding doesn’t replace search. Grounding builds on existing search infrastructure while adding systems focused on evidence quality, attribution, and deciding when an AI system should avoid answering, Microsoft said.

Why we care. For decades, search indexes helped determine which pages users should visit. Today, AI grounding determines which information supports an AI-generated answer. Microsoft described grounding as a new layer on top of traditional search, built for AI systems that need higher confidence in the information they use. That shift could push brands and publishers to focus more on creating information AI systems can confidently use.

The blog post. Evolving role of the index: From ranking pages to supporting answers

Google Analytics Data API adds cross-channel conversion reporting (alpha)

6 May 2026 at 19:16

Google is expanding its Analytics Data API to include cross-channel conversion reporting — giving developers programmatic access to paid and organic performance data.

What’s happening. The new feature, currently in alpha, allows Google Analytics and Google Ads users to pull conversion data across channels via the API — mirroring what’s available in the Conversion performance report in the Analytics interface.

This means developers can now access the same insights without relying on manual reporting.

Why we care. As measurement becomes more complex, advertisers need unified views of performance across paid and organic channels. This update enables teams to automate reporting, integrate data into their own systems and build more advanced analysis workflows.

It’s particularly valuable for businesses managing multiple platforms and looking to centralise performance data.
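Since the feature is still in alpha and its exact response schema isn’t described here, any integration sketch is speculative. As an illustration of the downstream workflow, this hypothetical example unifies conversion rows (as they might be pulled per channel via the API) into one cross-channel report:

```python
# Sketch: merge per-channel conversion rows into a unified report.
# Row shapes are illustrative, not the Data API's actual response schema.

from collections import defaultdict

def unified_conversions(rows: list[dict]) -> dict[str, int]:
    """Sum conversions per channel group across all fetched rows."""
    totals: defaultdict[str, int] = defaultdict(int)
    for row in rows:
        totals[row["channel_group"]] += row["conversions"]
    return dict(totals)

rows = [
    {"channel_group": "Paid Search", "conversions": 120},
    {"channel_group": "Organic Search", "conversions": 340},
    {"channel_group": "Paid Search", "conversions": 30},
]
print(unified_conversions(rows))  # {'Paid Search': 150, 'Organic Search': 340}
```

Once the alpha schema is published, the same aggregation can run on live API responses instead of hand-built rows.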

The caveat. This feature may not be available to every Google Analytics property yet. Google says it is actively working to expand access, and advertisers should check with their support teams to confirm eligibility.

What to watch:

  • When the feature moves beyond alpha and becomes widely available
  • How advertisers use API access to build custom attribution models
  • Whether more reporting capabilities are added to the Data API

Bottom line. By bringing cross-channel conversion data into the API, Google is giving advertisers and developers more control over how they access, analyse and act on performance data.

The latest jobs in search marketing

8 May 2026 at 22:01
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Do you love nerding out on SEO and working with clients? If your friends & family are sick of hearing about your latest search rankings, then we’re your kind of people and you will love this job. $80k in year 1 + potential bonuses You will get an absolute masterclass on SEO and working with […]
  • At NerdWallet, we’re on a mission to bring clarity to all of life’s financial decisions and every great mission needs a team of exceptional Nerds. We’ve built an inclusive, flexible, and candid culture where you’re empowered to grow, take smart risks, and be unapologetically yourself (cape optional). Whether remote or in-office, we support how you […]
  • Job Description Attention: Kapitus is aware that individuals posing as recruiters may be communicating with job seekers about supposed positions with Kapitus. Kapitus has received reports that the content and method of communication can vary, but messages may contain requests for payment (e.g., fees for equipment or training) and/or for sensitive financial information. Kapitus will […]
  • Remote (Canada-wide) · Full-time · $75,000–$90,000 CAD About Webserv Webserv is a digital marketing agency that helps mission-driven businesses — particularly in behavioral health — grow through SEO, paid media, and conversion-focused web strategy. We’re a tight-knit team that values curiosity, ownership, and the kind of work that actually moves the needle for our clients. […]
  • The Basics: Growth Plays is hiring a Senior SEO/AEO Manager based in the US, Canada or LATAM, to support and manage ongoing customer engagements and relationships. You’ll act as the main point of contact for your clients, and focus on building relationships and trust while driving strategy-aligned growth for the long term. This role is […]
  • Company: Local Leads DigitalLocation: RemoteJob Type: Contract, 1099Compensation: 100% Commission, Uncapped Job SummaryLocal Leads Digital is hiring an Independent Sales Representative to help grow adoption of the L.O.C.A.L. Tool, our local SEO fulfillment solution. This is a fully remote, 1099 independent contractor opportunity for someone who is confident in outbound sales and comfortable building their […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • This role offers you the opportunity to deepen your SEO expertise and develop your leadership skills within a tight-knit agency team. Sr. SEO Analysts lead our client relationships and bring our outcome-driven strategies to life. They are responsible for delivering value and results to our clients through their high-quality work, commitment to building deep SEO […]
  • Job Description This position is a Full-time remote position The Entrust Group is a pioneer in the world of self-direction. For over 40 years, we’ve provided administration services for self-directed retirement accounts and tax‑advantaged plans. As a self-directed IRA administrator, Entrust enables clients to invest their retirement funds in alternative assets not typically available through […]
  • Company Description At PayNearMe, we’re on a mission to make paying and getting paid as simple as possible. We build innovative technology that transforms the way businesses and their customers experience payments. Our industry-leading platform, PayXM™, is the first of its kind—designed to manage the entire payment experience from start to finish. Every click, swipe […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Description Job Description About the Role As a Paid Media Manager, you will work closely with our cross-functional strategists, overseeing campaigns across various channels with a strong focus on performance marketing across search (SEM), social, and programmatic. You’ll ensure the successful execution of integrated digital marketing initiatives, with hands-on involvement in both Google Ads […]
  • Job Description Job Description Salary: $17 hourly ABOUT LOVESHACKFANCY LoveShackFancy, founded in 2013 by Rebecca Hessel Cohen, is a global fashion, beauty, childrenswear, accessories, home, and lifestyle brand celebrated for its romantic, vintage-inspired aesthetic and cult-like community. Known for its immersive, whimsical interiors, the brand has grown to 24 boutiques across the U.S. and London. […]
  • Job Description Job Description Paid Media Manager Position: Contractor (mid-May to the end of Oct), 40hrs/wk Location: Remote, MT or PT based About the Role We’re looking for a Paid Social Media Manager with deep expertise in TikTok and Meta to join Movement Strategy. This role sits at the intersection of paid media and creator […]
  • Job Description Job Description Who We Are Kargo creates powerful moments of connection between brands and consumers to build businesses. Every day, our 600+ employees work to radically raise the bar on what agentic AI, CTV, eCommerce, social, and mobile can do to deliver unique ad experiences across the world’s most premium platforms. Taking a […]
  • Corcoran Sunshine Marketing Group is seeking a Marketing Specialist to join its New York headquarters, reporting to the VP of Marketing and working closely with both the Marketing and Project teams. This role supports the execution of project‑based marketing initiatives across multiple channels, including print and digital advertising, project‑specific social media, portfolio‑wide and site‑specific events, […]

Other roles you may be interested in

Senior Manager, SEO/AEO, ActiveCampaign (Remote)

  • Salary: $140,500 – $193,200
  • Identify opportunities for technical improvements across the ActiveCampaign website, prioritize them based on their potential business impact, and collaborate with cross-functional stakeholders to implement them.
  • Pioneer LLM optimization and Answer Engine Optimization (AEO) by developing content strategies that ensure ActiveCampaign is the authoritative source material used by LLMs.

SEO Marketing Manager, Care.com (Hybrid, Dallas, TX)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Manager, SEO, KINESSO (Hybrid, New York, NY)

  • Salary: $90,000 – $95,000
  • Manage senior analysts and help analysts grow into the next level of their career.
  • Translate clients’ business goals and marketing objectives into successful search engine optimization strategies.

Senior Marketing Manager, Vanguard Renewables (Remote)

  • Salary: $120,000 – $182,000
  • Work closely with CMO and RNG team to develop and execute a strategic marketing roadmap aligned with business priorities.
  • Serve as the primary marketing liaison for RNG team, acting as the connective tissue between the Marketing and Commercial groups.

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.
