Reddit introduces collection ads, deal overlays, Shopify integration

Reddit is rolling out new Dynamic Product Ad features, including a shoppable Collection Ads format and Shopify integration, the company announced today.

What’s new.

  • Collection Ads: A new Dynamic Product Ad format that pairs a lifestyle hero image with shoppable product tiles in one carousel, bridging discovery and purchase. Early adopters following best practices are seeing an 8% ROAS lift.
  • Community and Deal overlays: Reddit-native labels like “Redditors’ Top Pick” and automatic discount callouts surface social proof and pricing signals without extra work from you.
  • Shopify integration: Now in alpha, this simplifies catalog and pixel setup for new DPA advertisers, automatically matching products to the right users and context.

The numbers. Reddit DPA delivered an average 91% higher ROAS year over year in Q4 2025. Liquid I.V. reports DPA already accounts for 33% of its total platform revenue and outperforms its other conversion campaigns by 40%.

Why now. Reddit has seen a 40% year-over-year increase in shopping conversations. Also, 84% of shoppers say they feel more confident in purchases after researching products on Reddit.

Why we care. The new tools, especially the Shopify integration, lower the barrier to getting started with Dynamic Product Ads. Reddit is still viewed by some as an undervalued paid media channel, so there’s an opportunity to get in before competition and costs rise.

Bottom line. Reddit is increasingly a serious performance channel for ecommerce, and these tools make it easier to get started. If you’re not yet running DPA on Reddit, the combination of undervalued inventory and improving ad formats makes this a good time to test.

Reddit’s announcement. Introducing More Ways to Tap into Shopping on Reddit

AI citations favor listicles, articles, product pages: Study

AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.

The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.

  • Articles dominated informational queries, cited 2.7x more than other formats.
  • Listicles captured 40% of commercial-intent citations, nearly double any other type.

Why intent wins. Query intent — not industry or model — most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.

  • Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
  • Commercial queries were led by listicles (40.9%).
  • Transactional and navigational queries favored product and category pages (around 40% combined).

Why we care. This research indicates that you want to map content types to user goals rather than just creating more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.

Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, compared to 19.1% for self-promotional lists. That seems to indicate LLMs prefer neutral, editorial comparisons over brand-led rankings.

Model differences. All models favored listicles, but diverged after that.

  • ChatGPT leaned heavily into articles and informational content.
  • Google AI Mode showed the most balanced distribution.
  • Perplexity stood out, with 17% of citations coming from discussion platforms like Reddit and other forums.

Industry patterns. Content preferences shifted slightly by vertical:

  • SaaS and professional services over-indexed on listicles.
  • Health favored authoritative articles.
  • Ecommerce spread citations across listicles, articles, and category pages.
  • Home repair showed the most even distribution across formats.

The research. The content types most cited by LLMs

Google is tightening political content rules for Shopping ads starting April 16

A quiet but important policy update is coming to Google Shopping ads next month, requiring some merchants to verify their accounts before running ads featuring political content.

What’s changing. From April 16, merchants running Shopping ads with certain political content in nine countries will need to verify their Google Ads account as an election advertiser. Google will also outright prohibit some political Shopping ads in India.

The countries affected. Argentina, Australia, Chile, Israel, Mexico, New Zealand, South Africa, the United Kingdom, and the United States.

Why we care. Shopping ads aren’t typically associated with political advertising — this update signals that Google is broadening its election integrity efforts beyond search and display into commerce formats. Merchants selling politically themed merchandise, campaign materials, or other related products in the affected countries need to act before the April 16 deadline.

What to do now.

  • Review the updated policy language to determine if your Shopping ads feature content that falls under the new restrictions
  • If affected, apply for election advertiser verification through Google Ads before April 16 to avoid disruption to your campaigns

The bottom line. This affects a narrow but specific set of merchants — but the consequences of missing the deadline could mean ads being disapproved or accounts being flagged. If you sell anything with a political angle in the listed countries, check your eligibility now.

ChatGPT citations favor a small group of domains: Study

AI citations in ChatGPT are highly concentrated: roughly 30 domains capture 67% of citations within a topic.

  • That’s according to Kevin Indig’s latest study, which also found that broad topical coverage, long-form pages, and cluster-based models outperform the old “one keyword, one page” approach.

The details. Citation visibility wasn’t evenly distributed. In product comparison topics, the top 10 domains accounted for 46% of citations; the top 30, 67%.

  • AI visibility was slightly less concentrated than classic organic search, but still highly centralized.
  • Indig’s conclusion: you’re effectively shut out unless you build enough authority to win one of a limited number of citation “seats.”

What changed. Ranking No. 1 in Google still matters, but it’s not enough. Of pages ranking No. 1, 43.2% were cited by ChatGPT — 3.5x more often than pages beyond the top 20.

  • ChatGPT retrieved far more pages than it cited: per AirOps, it retrieved ~6x as many pages as it cited, and 85% of the retrieved pages were never cited.
  • A third of the cited pages came from fan-out queries, and 95% of those had zero search volume.

Why we care. Publishing the “best answer” for one keyword isn’t enough. ChatGPT rewards domains that cover a topic from multiple angles, not pages optimized for isolated terms. And discovery often happens outside the keyword universe you track.

The patterns. Longer pages generally earned more citations, with variation by vertical. The biggest lift appeared between 5,000 and 10,000 characters. Pages above 20,000 characters averaged 10.18 citations vs. 2.39 for pages under 500.

  • This pattern broke in Finance, where shorter, denser pages often outperformed long guides. In Education, Crypto, and Product Analytics, longer pages continued to gain citation value with little drop-off.
  • 58% of cited URLs were cited only once. Pages that recurred across prompts were usually category roundups, comparison pages, or broad guides answering multiple related questions.

On-page behavior. ChatGPT cited heavily from the upper part of a page. The 10% to 20% section performed best across all industries.

  • The bottom 10% earned just 2.4% to 4.4% of citations. Conclusions were largely ignored.
  • Finance had the steepest ramp, with 43.7% of citations in the first 30%.
  • Healthcare and HR Tech were flatter.
  • Education peaked later, around 30% to 40%.

About the data. Indig analyzed ~98,000 citation rows from ~1.2 million ChatGPT responses (Gauge), isolating seven verticals. The study used structural page parsing, positional mapping, and entity and sentiment analysis to identify which pages earned citations and where on the page those citations came from.

The study. The science of how AI picks its sources

Google is testing AI-generated animated video clips inside PMax

A new creative feature has been spotted inside Google Ads Performance Max campaigns — and it could change how advertisers without video budgets approach animated display advertising.

What was found. Nikki Kuhlman, Vice President of Search at JumpFly, spotted an option to generate animated video clips directly within PMax asset groups, using AI to enhance and animate a single source image.

How it works.

  • Upload a source image — a logo, a product shot, a property photo
  • AI generates several “enhanced” versions of that image
  • Each enhanced image produces two animated clips
  • Select up to five animated clips per asset group
  • Note: faces cannot be used in source images, though AI may generate people in enhanced versions

Early results from testing. A logo generated a spinning animation of the image element. A house with a sold sign produced a slow cinematic pan. Simple inputs, but the output quality appears usable for display advertising without any video production required.

Where the ads appear. Google hasn’t provided in-product documentation on placement, but early testing shows animated clips surfacing in Display ad previews when added to an asset group.

Why we care. Video assets continue to be a strong creative option in paid media — but producing video has always required time, budget, and resources many advertisers don’t have. This feature effectively removes that barrier — turning a single product photo or logo into animated display creative in seconds, at no additional production cost.

For advertisers who’ve been running PMax on static images alone, this could be a meaningful and easy win.

The bottom line. This feature is still unconfirmed by Google, but advertisers running PMax should check their asset groups now. If it’s available in your account, it’s worth testing — especially for campaigns that have been running on static images alone.

First seen. Kuhlman shared spotting this new feature on LinkedIn.

SEO’s biggest threat in 2026? Your own organization

AI tools and visibility have dominated the SEO conversation in the past two years. But while discussions focus on these new technologies, most of the biggest SEO risks in 2026 will come from somewhere else: within your own organization.

Fragmented data, unclear ownership, outdated KPIs, and weak collaboration can quietly destroy even the best strategies. As SEO expands beyond the website and into AI-driven discovery, the role of the SEO team is becoming broader, more influential, and, paradoxically, harder to define.

Here are some of the risks your team should start thinking about now.

Relying too much on AI for everything

Many SEO teams now rely on AI for everything, from generating briefs to analyzing data. That’s often necessary. You can’t spend hours creating a brief when AI can produce something usable in minutes. But that’s also where the risk starts.

AI can generate content quickly, but “acceptable” won’t differentiate you. You still need a clear point of view — what story you’re telling and what unique angle you bring. Without that, your content becomes generic, predictable, and indistinguishable from competitors using the same tools.

The issue is simple: if you ask similar tools similar questions, you’ll get similar answers. And your competitors have access to the same tools.

Some companies try to stand out by training models on proprietary data. In reality, few teams do this at scale. Most prioritize speed over quality.

There’s also risk in using AI for analysis without understanding the data behind it. AI is fast, but it can misinterpret or hallucinate results.

I’ve seen this firsthand. An AI tool hallucinated part of a calculation during an urgent analysis, making every insight that followed incorrect. It only acknowledged the mistake after it was explicitly pointed out.

More broadly, AI excels at identifying patterns. But in SEO, competitive advantage rarely comes from following patterns. The most effective strategies don’t just mirror what everyone else is doing. Sometimes the best opportunity isn’t the obvious one.

AI is reshaping how SEO work gets done, how impact is measured, and whether it can be measured at all.

Dig deeper: Why most SEO failures are organizational, not technical

Fragmented data and limited visibility

For years, SEO professionals have worked with incomplete datasets. We’ve never had a full view of the user journey. That’s one reason organic impact has often been underestimated. In the past, though, we could still piece together a reasonably clear picture — from ranking to click to conversion.

Today, that picture is far more fragmented. AI tools have changed how people research and discover products. Users now start in AI assistants – asking questions, comparing options, and building shortlists before ever visiting a website. By the time they land on your page, part of the decision-making process is already done.

The problem is we have zero visibility into that journey. If a user discovers your brand through an AI-generated answer, adds you to a shortlist, then later searches for you directly, the signals that influenced that decision are invisible. We only see the final step.

Microsoft Bing has introduced basic reporting for AI searches, but it’s limited. We still can’t see the prompts behind specific page visibility.

At the same time, SEO teams are still expected to prove impact. Some companies are adding questions to lead forms to understand how users discovered them. In theory, this adds signal. In practice, it depends on accurate self-reporting. I know how I fill out forms, so I question how reliable that data really is. Still, it’s a start.

Setting the wrong KPIs

Fragmented data creates another risk: focusing on the wrong KPIs. Stakeholders still ask about traffic. No matter how often SEO teams explain that its role has changed, traffic remains a default measure of success. For years, organic growth meant more sessions, users, and visits. That mindset hasn’t fully shifted.

At the same time, stakeholders are drawn to newer metrics — AI visibility, citations, and mentions. These aren’t inherently wrong, but they need to be used carefully.

Most tools measure AI visibility using a predefined set of queries. That’s where risk creeps in. Teams can become too focused on improving visibility scores, even if it means optimizing for prompts that look good in reports rather than those that matter to the business.

For example, appearing for “What is XYZ software?” isn’t the same as showing up for “Which XYZ software is best?” The first may drive visibility, but the second is much closer to a purchase decision.

To avoid this, visibility metrics need to be tied to business outcomes — a real challenge given the fragmented data problem.

Tracking AI visibility also opens another rabbit hole: debates over which prompts to track, how many to include, and why. This can quickly overcomplicate measurement, especially if teams lose sight of the goal. The objective isn’t to track every phrasing, but to understand the intent behind it. Trying to capture every variation is impossible.
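One practical way to keep prompt tracking anchored to intent is to bucket prompts before reporting on them. The sketch below is a rough illustration of that idea, not anything from this article: the marker list, function name, and prompts are my own, and a production setup would need far more robust classification than keyword matching.

```python
# Rough, illustrative heuristic: group tracked prompts by intent so
# visibility reporting can weight purchase-stage prompts separately
# from definitional ones. Markers and names are hypothetical.
COMMERCIAL_MARKERS = ("best", "vs", "alternative", "compare", "which")

def prompt_intent(prompt: str) -> str:
    p = prompt.lower()
    # Commercial markers first: comparison/choice language signals
    # a prompt closer to a purchase decision.
    if any(marker in p for marker in COMMERCIAL_MARKERS):
        return "commercial"
    # Definitional openers signal informational intent.
    if p.startswith(("what is", "how does")):
        return "informational"
    return "other"

prompts = [
    "What is XYZ software?",
    "Which XYZ software is best?",
]
print([prompt_intent(p) for p in prompts])
```

Even a crude split like this makes the report distinguish between the two example prompts above, instead of treating all visibility as equal.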

Dig deeper: Why governance maturity is a competitive advantage for SEO

Owning more than you can actually own

SEO teams are expected to own AI visibility strategy much like they owned SEO strategy. But owning the strategy is often conflated with owning the execution.

Even in the past, SEO was never fully independent. It relied on other teams — engineering to implement changes and content to create pages. The difference is that most of this work used to happen on the company’s own website.

That’s no longer true. Visibility in AI answers requires presence beyond your domain — Reddit threads, YouTube videos, and media mentions all play a role.

This significantly expands the scope of work. At the same time, many of these surfaces don’t have clear owners inside organizations. Even when they do, there’s a tendency to assume that if SEO owns the strategy, it should also own execution or at least be accountable for outcomes.

The opposite happens, too. If other teams own execution, they may take ownership of the entire strategy. In reality, neither model works well.

SEO teams can’t manage every platform that influences AI visibility. They don’t have the expertise to produce YouTube content or run PR campaigns. Their strength is knowing what works and helping optimize it. For example, advising on how a video should be structured to perform on YouTube.

Owning strategy also doesn’t mean deciding who owns execution. That’s a leadership responsibility. It requires visibility across teams and the authority to assign ownership. Otherwise, one team is left deciding how its peers should operate.

Lack of cross-team collaboration

Even when companies recognize the importance of AI visibility, cross-team collaboration remains a challenge.

Roles and processes are often unclear. SEO teams may expect others to execute, while those teams assume it’s SEO’s responsibility. In other cases, teams don’t prioritize AI visibility because their KPIs focus elsewhere.

This is where leadership alignment becomes critical. If AI visibility is truly a strategic priority, it needs to be reflected in goals and KPIs across all relevant teams. When AI-related KPIs sit only with SEO, it creates an imbalance: one team is accountable for outcomes, while execution depends on many others.

Many teams are also unsure how to work with SEO. Some don’t involve SEO early enough. Others choose not to follow recommendations because they don’t agree with them.

SEO teams share responsibility here, too. They need to actively onboard other teams and clearly connect SEO efforts to broader business goals. It’s our job to show that lack of visibility means lost revenue.

I’ve seen cases where teams critical to AI visibility hadn’t even read the strategy document. In these situations, the issue isn’t one-sided. Teams need to understand what’s expected of them, and SEO needs to push for alignment and involve stakeholders early. Simply moving forward without that alignment doesn’t work.

SEO teams also don’t always explain the “why.” AI visibility can end up treated as a standalone SEO metric rather than a business driver. Even when there’s agreement on its importance, a lack of clear processes, shared goals, and training keeps collaboration inconsistent.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Too much strategy, not enough doing

With rapid changes in search, SEO teams often spend more time on theory — reading, analyzing, building frameworks, and refining strategies — than on making changes to the website.

That doesn’t mean teams should stop learning. Quite the opposite. But strategy without execution quickly loses value. In many organizations, SEO teams are expected to produce in-depth strategy documents meant to align teams and define priorities. In reality, many go unread outside the SEO team. They require significant effort but deliver little impact.

Part of the problem is that strategies are often too theoretical. They explain the why but miss the what. The value of a strategy isn’t the document, but the actions that follow. Other teams need to understand what to do and how to contribute.

AI is also accelerating how quickly search evolves. Waiting months to test ideas no longer works. A more practical approach is to understand the direction, implement changes, observe results, and iterate. Smaller experiments often lead to faster learning.

When SEO succeeds, SEO disappears

SEO has always been a consulting function. Success depends on collaboration with teams like engineering, content, and product. Today, that dynamic is more visible than ever. In many cases, SEO teams don’t execute directly. Their role is to enable others.

In mature organizations, this works well. Collaboration is strong, and credit is shared. SEO’s consulting role is recognized without forcing the team to own areas outside its expertise. In less mature environments, it can lead to SEO being undervalued or seen as unnecessary.

AI adds another layer. It can generate keyword ideas, outlines, and optimization suggestions, making SEO look deceptively simple, much like writing content. AI lowers the barrier to entry, but it doesn’t replace expertise. Without that expertise, teams produce work that’s technically correct but average.

It’s a familiar pattern: copy-pasting a Screaming Frog SEO Spider error list into a task doesn’t demonstrate real understanding. This creates a paradox. The more SEO becomes a company-wide capability, the more the SEO team risks becoming invisible.

Dig deeper: SEO execution: Understanding goals, strategy, and planning

SEO is evolving, but are companies ready?

SEO teams won’t fail in 2026 because of a lack of knowledge. They’ll fail if they can’t turn that knowledge into action, influence, and business impact.

The challenge is no longer just optimizing pages. It’s building processes, partnerships, and measurement models that reflect how visibility works today.

Success also depends on leadership support. Many of the biggest risks are structural — fragmented data, unclear ownership, weak collaboration, outdated KPIs, and the gap between strategy and execution.

AI visibility expands beyond the website and into the broader organization. That doesn’t make SEO less important, but it does make it harder to define, measure, and defend.

The companies that succeed will stop treating SEO as a traffic function and start treating it as a business capability that drives visibility, discovery, and growth.

Apple is bringing ads to Apple Maps this summer

Apple is preparing to introduce sponsored listings in Apple Maps, marking a significant expansion of its advertising business beyond the App Store.

How it will work. According to Bloomberg’s Mark Gurman, the system will function similarly to ads in Google Maps — allowing retailers and brands to bid for ad slots against search queries. Sponsored businesses will appear in Maps search results, much like sponsored apps already appear in App Store searches.

The timeline. An announcement could come as early as this month, with ads beginning to appear inside Maps as early as this summer across iPhone, other Apple devices, and the web version.

Why Apple is doing this. Advertising is a growing and high-margin revenue stream for Apple’s services business. Maps — with its massive built-in user base across Apple devices — is a natural next step, particularly as location-based advertising continues to grow.

Why we care. Users searching within Maps express clear, high-intent signals — they’re actively looking for somewhere to go or something to buy. This opens up a location-based advertising channel that previously didn’t exist on Apple’s platform, giving local businesses and retailers a way to reach those users at exactly the right moment.

Advertisers already running Google Maps or local search campaigns should pay close attention, as this could quickly become a significant complementary channel.

The privacy angle. In typical Apple fashion, a user’s location and the ads they see and interact with in Maps are not associated with their Apple Account. Personal data stays on the user’s device, is not collected or stored by Apple, and is not shared with third parties.

How to access it. Businesses will be able to access a fully automated experience for creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options.

What you need to do now. When Apple Business becomes available in April, businesses will first need to claim their location on Apple Maps before ads become available this summer — so the time to get set up is now, not when the auction opens.

The bottom line. Apple Maps ads should open up a high-intent, location-based channel that hasn’t existed before on Apple’s platform. Advertisers running local or retail campaigns should claim their Maps listing now and start planning budgets for a summer launch. Early entrants in a new ad auction typically benefit from lower competition before the market matures.

Update 10:45 ET: Apple has officially confirmed that ads are coming to Apple Maps this summer, as part of a broader new platform called Apple Business launching April 14.

Bing Webmaster Tools now links AI queries to cited pages

Microsoft added query-to-page mapping to its AI Performance report in Bing Webmaster Tools, letting you connect AI grounding queries directly to cited URLs.

Why we care. The original dashboard showed queries and pages separately, limiting optimization. Now you can tie specific AI-triggering queries to the exact cited pages, so you can prioritize updates based on real AI-driven demand — not guesses.

The details. The new Grounding Query–Page Mapping feature links two existing views in the AI Performance dashboard:

  • Click a grounding query to see which pages are cited
  • Click a page to see which grounding queries drive its citations
  • Mapping is many-to-many: one query can map to multiple pages, and vice versa
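To make the many-to-many shape concrete, here is a small sketch. Bing Webmaster Tools doesn’t expose its data through the code below; the rows, queries, and URLs are hypothetical, standing in for an exported report of (grounding query, cited URL) pairs.

```python
from collections import defaultdict

# Hypothetical export rows: (grounding query, cited URL).
# All queries and URLs are illustrative placeholders.
rows = [
    ("best crm for small business", "https://example.com/crm-comparison"),
    ("best crm for small business", "https://example.com/pricing"),
    ("crm pricing explained", "https://example.com/pricing"),
]

# Build both directions of the many-to-many mapping.
pages_by_query = defaultdict(set)
queries_by_page = defaultdict(set)
for query, url in rows:
    pages_by_query[query].add(url)
    queries_by_page[url].add(query)

# One query can cite multiple pages...
print(sorted(pages_by_query["best crm for small business"]))
# ...and one page can be cited by multiple queries.
print(sorted(queries_by_page["https://example.com/pricing"]))
```

The same lookup in either direction is what the new dashboard feature provides: click a query to see its cited pages, or click a page to see the queries that drive its citations.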

Catch up quick. Microsoft launched the AI Performance report in Bing Webmaster Tools in February as its first GEO-focused dashboard. It:

  • Tracks where and how often your content is cited in AI answers across Bing, Copilot, and partners.
  • Shows grounding queries, cited URLs, and visibility trends over time.
  • Focuses on citation visibility — not clicks, rankings, or traffic.

What they’re saying. Microsoft said the update responds to “strong positive customer feedback and numerous requests.”

The announcement. Microsoft detailed the query-to-page mapping update in a blog post: The AI Performance dashboard: Your view into where your brand appears across the AI web

The entity home: The page that shapes how search, AI, and users see your brand

The entity home is the single page that anchors how algorithms, bots, and people understand your brand. It’s usually your About page, and it does far more than most teams realize.

It’s where algorithms resolve your identity, where bots map your footprint, and where humans verify trust before they convert. In one test, improving that page alone lifted conversions by 6% for visitors who reached it. The reason is simple: the human and the algorithm are doing the same job — checking claims, validating evidence, and deciding whether to trust you.

For years, this was overlooked. Most SEOs focused on rankings and traffic while underinvesting in the page that defines what their brand actually is. That’s no longer sustainable. The entity home is the foundation of how your brand is interpreted across search, AI, and what comes next.

What the entity home isn’t

Before going further, here are four misreadings worth pre-empting.

Not a ranking trick

Getting the entity home right doesn’t produce a traffic spike next Tuesday. It builds the confidence prior that compounds through every gate of the pipeline over time.

Not just schema

Schema markup helps the algorithm read what is already there. It isn’t a substitute for the claims, the evidence links, and the consistent positioning that schema describes. Schema without substance is a well-formatted, empty declaration.
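For illustration, here is what a minimal schema.org Organization block for an entity home might look like, built in Python. The organization name and every URL are placeholders; the point is that each field should restate a claim already made in the page copy, with sameAs pointing at the corroborating third-party sources.

```python
import json

# Illustrative only: a minimal schema.org Organization block for an
# entity home (About) page. "Example Co" and all URLs are placeholders.
entity_home_jsonld = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com/about",
    # Should restate what the page copy itself already claims.
    "description": "What the page itself already says the company does.",
    # sameAs points the algorithm at corroborating third-party profiles.
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

# This JSON would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(entity_home_jsonld, indent=2))
```

If the description or sameAs targets don’t match what the page and the wider web actually say, the markup is exactly the "well-formatted, empty declaration" described above.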

Not always the About page

For most companies, it is; for most individuals, it is a page on someone else’s website. The right URL is the one that carries the clearest identity statement, the strongest internal link prominence from the rest of the site, and the most stable long-term address (something people often don’t think about).

Not enough without corroboration

The entity home is where you declare your claims. Independent third-party sources confirm and corroborate your claims. The algorithm will only cross the confidence threshold when what you say matches what the weight of evidence supports.

Three audiences, one anchor — and most brands are ignoring two of them

The entity home serves three audiences simultaneously, through three completely different mechanisms. Most brands haven’t yet given them enough thought.

The three audiences your entity home serves
  • Bots use the entity home when mapping the digital footprint. They use it to establish what entity they are dealing with and how to interpret every corroborative source they find. 
  • Algorithms anchor their identity resolution against it, checking confidence at every relevant gate against whatever baseline this page set. 
  • Humans reach for it when they want to see a resource that feels authoritative precisely because it is structured to inform rather than to sell.

So, the entity home webpage is vital to all three audiences — bots, algorithms, and humans: it sets the tone for the bot in DSCRI, the algorithms in ARGDW, and for the person who converts.

The entity home is just one page, and that isn’t enough

The entity home anchors everything: the canonical URL where the algorithm initializes its model of the brand, where bots orient themselves, and where humans arrive to verify their instinct. One page, doing one critical job. But one page declares. It doesn’t educate.

The entity home website educates. Every facet of the brand is structured across pages that give the algorithm a complete picture of:

  • Who this entity is.
  • What it does.
  • Who it works alongside.
  • What it has produced.
  • Where independent sources confirm what the brand claims about itself. 

The difference between the two is the difference between introducing yourself and making your case.

Search built the web around a single assumption — the human acts. The engine organized, the website presented, and the human chose. That model shaped 30 years of architecture decisions because the website’s job was to win the human’s attention and trust once the engine had delivered them to you.

But assistive engines broke that assumption. They took on the evaluation work the human used to do: reading, comparing, synthesizing, and recommending. The human still makes the final call, but the website needs to have made its case to the algorithm before the human ever arrives. 

The audience that matters first has shifted, and a website that speaks only to humans is already losing the conversation that determines whether those humans show up at all.

Agents go one step further. The agent researches, decides, and acts. The human receives the outcome. The website that wins in an agentic environment isn’t the one with the most compelling hero section — it’s the one the agent can read, trust, and act on without inferring anything.

All three modes co-exist, and all three always will. 

  • Search serves the window shopper. 
  • Assistive engines serve the human who wants a recommendation without doing the research. 
  • Agents serve the task that can be delegated entirely. 

What shifts over the next three years isn’t which mode exists — it’s which mode does the most work, and what your website needs to do to win each one.

This is where I’ll plant a flag, and you can disagree. All three jobs need attention right now — the percentages below describe where the main focus of your effort sits, not permission to ignore the others. 

The work on assistive and agential is already overdue. The speed of change will probably make these figures look dated in a few months.

Focus weighting by year: Search, assistive, agential
  • 2026: Search 60%, Assistive 35%, Agential 5%
    • Search still drives most conversions. But the 35% on assistive isn’t optional; it’s late. The brands that started two years ago are already compounding.
  • 2027: Search 35%, Assistive 50%, Agential 15%
    • Assistive engines will be handling enough upstream evaluation that discovery and correct interpretation become the primary battle. Search remains significant. Agential execution is arriving.
  • 2028: Search 20%, Assistive 45%, Agential 35%
    • Agents execute. The algorithm’s confidence in your brand determines whether you’re in the consideration set before any human is involved. Search and assistive don’t disappear — they become the infrastructure the agential layer runs on.

The entity home website anchors all three eras. What changes is who it speaks to first, and what that conversation needs to contain.

Entity home (one page) vs entity home website (full education hub)

Each cluster in that diagram declares something: these satellite pages, grouped this way, belong to this entity and describe one specific dimension of what it is. 

  • /social names the platforms the brand controls. 
  • /peers places the entity in its professional network. 
  • /companies closes the relationship loop between person and organization. 

The grouping carries meaning — an algorithm that reads the structure learns something the individual pages couldn’t tell it separately.

The entity home website has three jobs

Search, assistive, and agential engines co-exist, which means the entity home website runs three distinct jobs simultaneously. 

  • The search job is the one 30 years of practice has refined, and it doesn’t change: get the bots through the DSCRI infrastructure gates cleanly, so the ranking engine delivers the right humans to you, and your content draws them through the funnel with clarity, credibility, and a path to conversion.
  • The assistive job is the one most brands are ignoring, and where the competitive gap is opening fastest: educate the algorithms. Your entity home website structures your brand’s story so algorithms understand it without guessing, and your content wins the competitive phase (ARGDW) with the highest possible confidence intact. Every explicit link from your entity home website to a satellite property declares a graph edge, carrying higher confidence through the pipeline than any connection the algorithm has to infer for itself.
  • Hardest to prepare for, and already arriving: brief the agents. Agentic engines don’t read your website the way a human reads a marketing page — they read it the way an instructed system reads a briefing document, scanning for structured, unambiguous, machine-interpretable facts. Don’t make the machine use imagination it doesn’t have.

Entity pillar pages solve the identity problem keyword cornerstones were never built for

SEO has always known what to do with a topic: build an authoritative page around it, link it well, and earn rankings. That architecture works because the ranking engine evaluates content.

What it can’t do is tell the algorithm who the entity behind that content is, what relationships it has built, what it has demonstrated over time, or why it should be trusted to recommend rather than merely rank.

An entity has facets, and facets aren’t the same thing as topics. A person isn’t “SEO consultant” plus “technical SEO” plus “keynote speaker”: those are keyword clusters, useful for ranking, useless for identity.

What the algorithm actually resolves identity against is the network of dimensions that define what this entity is — the companies it belongs to, the peers it works alongside, the publications it has appeared in, the expertise it has demonstrated over years, the events it speaks at, and the work it has produced.

An entity pillar page is the authoritative page on your own property for one of those dimensions.

  • The /expertise page establishes demonstrated knowledge in a specific domain, not as a content topic, but as an identity declaration.
  • The /peers page places the entity in a professional network the algorithm already trusts.
  • The /companies page closes the loop between person and organization.
  • The /press page links to independent coverage that corroborates the entity’s claims, giving the algorithm something to cross-reference rather than take on faith.

These pages aren’t traffic pages in the traditional sense, and that framing matters: SEOs who measure them against keyword rankings will consistently underinvest in them because the return doesn’t show up in rank tracking. The return shows up in what AI assistive engines say about your brand when your prospects ask.

Keyword cornerstones vs entity pillar pages

Keyword cornerstone pages and entity pillar pages serve different audiences, and your website needs both

The keyword cornerstone page and the entity pillar page aren’t competing strategies: they’re parallel architectures serving different audiences, which means your website needs both, and the question is how to build them so they compound each other’s value rather than compete for the same resource.

The overlap between them is real and worth engineering deliberately. The expertise page that ranks for “technical SEO audit” can also function as the entity pillar page that declares this entity’s demonstrated knowledge in that domain, if it’s built with that second function in mind:

  • Explicit entity statements.
  • Schema that names the relationships rather than just the topic.
  • Links to corroborating third-party sources stable enough to persist across years.
  • A URL structure that commits to the identity dimension rather than the keyword cluster.

When the two functions align, one page does both jobs, which is a good thing.

When they diverge, and the page that captures search traffic can’t easily carry the identity declaration without sacrificing one function for the other, you face an architectural choice. Making that choice consciously rather than defaulting to the keyword model is the skill the transition requires.

The percentages already told you the weighting: Both layers are required starting today

Earlier in this article, the 2026/2027/2028 split put search at 60%, then 35%, then 20% of focus. What those numbers don’t say, but what the logic demands, is that the other percentage — the assistive and agential share — needs your website to feed them right now. Don’t wait until the balance shifts.

Keyword cornerstone pages feed the search share. Entity pillar pages feed the assistive and agential share.

If you build the entity pillar pages in 2027, when assistive engines truly dominate, you’ll be building into a window that has already closed for the brands that started in 2025, because the algorithm’s model of your entity solidifies around whatever you gave it during the period it was actively learning.

The percentages describe where the demonstrable value sits at each stage. Your investment needs to precede the moment your boss sees the results, not follow it.

Both architectures are required today; the balance shifts, but the requirement for both never goes away.

Building for machines and humans simultaneously is cheaper than building for each separately

The risk brands hear when they encounter the machine-optimization argument is a false trade-off: build for machines at the expense of humans, strip the warmth from the copy, replace narrative with structured data fields, and turn the About page into a schema exercise. You can absolutely avoid the trade-off in practice because the best practices are more complementary than they might appear.

Clear entity statements that help the algorithm resolve your identity also help the human visitor understand immediately who they’re dealing with. Explicit links to corroborating third-party sources that build algorithmic confidence also give the human prospect the independent validation they’re quietly looking for. Schema markup that declares relationships for machine consumption gives structured clarity that human scanners doing final due diligence actually appreciate.

For me, this is the reframe that makes the whole project manageable: my approach to the entity home website is your current marketing, restructured to serve three audiences simultaneously, not a technical infrastructure project running alongside it. One investment with three returns, and, when done right, the requirements pull in the same direction more often than they pull apart.

The funnel is moving inside the assistant.

When an assistive engine names your brand, summarizes it, and links to it in response to a user query, a conversion event has happened that you don’t see in your Analytics dashboard, and the human who arrives at your website has already been half-sold by the algorithm before they clicked. Traffic will decline as more of that evaluation work moves upstream, and the brands that measure only what arrives at the site will systematically underestimate both the value they’re generating and the gaps in their strategy.

Start measuring where your brand appears in assistive engine responses, how consistently it appears, and what the algorithm says about you when it does.

Getting the entity home right requires definition, proof, and a sustained corroboration campaign

Start with the entity home page itself: choose the single URL that functions as the canonical anchor for your brand’s identity and commit to it. Don’t discover it by asking an AI engine what it thinks your entity home is, because the engine will tell you what it has already learned, and that might be your website homepage, Wikipedia, a press profile, or a LinkedIn page you half-filled in five years ago. You choose it, then you verify the algorithm has learned the lesson you are giving it. You are the adult in the room.

Five criteria determine that choice, in order of weight:

  • The most explicit identity statement on the property.
  • The strongest internal link prominence from the rest of the site.
  • The best-structured schema markup with a stable @id.
  • The clearest outbound links to corroborating third-party sources.
  • The most stable long-term URL.

If your About page doesn’t hit all five, it isn’t doing the job the algorithm requires.

Invest in your About page. Strengthen it with a clear entity statement, schema with a proper @id, verified links to Wikipedia and Wikidata where they exist, every accurate sameAs declaration you can support, and the claims that define your brand’s positioning.
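As a concrete sketch of what that markup can look like (every name, URL, and identifier below is a placeholder, not a recommendation), the About page’s JSON-LD might declare the entity with a stable @id and sameAs links:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/about/#organization",
  "name": "Example Brand",
  "url": "https://www.example.com/",
  "description": "A one-sentence entity statement: who the brand is and what it does.",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/example-brand",
    "https://github.com/example-brand"
  ]
}
```

The stable @id gives every other page on the site, and every external source, a single node to reference, which is what lets corroboration accumulate against one entity rather than fragmenting across duplicates.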

Declaration vs corroboration: claim vs evidence

That single page is the anchor.

The entity home website is the education hub built around it: every entity pillar page you build — /expertise, /peers, /companies, /press — extends the identity declaration outward, giving the algorithm more dimensions to resolve against and more facets to cross-reference with independent sources. Each of those pages does for one identity dimension what the About page does for the whole: declares something specific, verifiable, and machine-readable about who this entity is.

The practical work on the entity home website side is the same audit applied at scale: for each entity pillar page, ask whether it declares a clear facet, links to corroborating evidence, and carries schema that names the relationship rather than just the topic. The pages that answer yes to all three are doing both jobs simultaneously — identity infrastructure and keyword architecture. The ones that don’t need a decision: extend them, or build the pillar function its own dedicated page.

If you’re unsure how much influence you actually have over what AI communicates about you, the answer is more than most people assume — and the channels that give you the most leverage are exactly the ones entity pillar pages are built to activate.

Then force the corroboration loop across the whole footprint: drive independent third-party sources to reference, link to, and echo the claims the entity home makes and the facets the pillar pages declare across enough independent contexts that the algorithm’s confidence crosses from hedged claim to corroborated fact. 

That crossing doesn’t happen on a deadline and can’t be engineered in a sprint. The corroboration loop is the curriculum, slow by design, compounding with every cycle, never truly finished. It is the work, and it rewards the brands that start it today over the ones that plan to start it when the percentages shift.

This is the sixth piece in my AI authority series. 

Why better signals drive paid search performance

In an increasingly automated environment, paid search performance is constrained by a simple reality: Algorithms can only optimize toward the signals they’re given. Improving those signals remains the most reliable way to improve results.

That sounds straightforward, but in practice, many people are still optimizing around signals that don’t reflect real business outcomes.

Let’s dive into how algorithms function, how you can influence them, and where many advertisers go wrong.

How bidding algorithms actually work

Modern bidding systems are often described as “black boxes,” suggesting they operate mysteriously. But that description isn’t helpful.

At a high level, bidding algorithms are large-scale pattern recognition systems.

Early automated bidding used simple statistical methods, including rules-based logic and regression models. Over time, these evolved into more advanced machine learning approaches using decision trees and ensemble models.

Eventually, these became large-scale learning systems capable of processing thousands of contextual and historical inputs. The technology has developed significantly, but the goal has stayed remarkably consistent.

Today’s systems evaluate signals such as query intent, device, location, time, historical performance, and user behavior, updating predictions continuously and adjusting bids in near-real time.

Despite this complexity, the underlying mechanisms haven’t changed:

Bidding algorithms identify patterns tied to a desired outcome, estimate that outcome’s probability and expected value for each auction, and adjust bids accordingly. They don’t understand business context or strategy — they infer success from feedback. This distinction matters.

When the feedback loop is weak, noisy, or misaligned with real business value, even advanced algorithms will efficiently optimize toward the wrong objective. Better technology doesn’t compensate for poor inputs.
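The mechanics can be sketched in a few lines. This is a deliberately simplified illustration (real systems model thousands of signals), but the structure, probability times value, is the part that matters: a noisy value estimate produces a confidently wrong bid.

```python
# Minimal sketch of value-based bidding: the bid is capped by the expected
# value of the auction divided by the ROAS target. All numbers are illustrative.

def expected_value_bid(p_conversion: float, conversion_value: float,
                       target_roas: float) -> float:
    """Highest bid at which the predicted outcome still meets the ROAS target."""
    expected_value = p_conversion * conversion_value
    return expected_value / target_roas

# Same predicted conversion probability, but the reported value differs:
high_value_bid = expected_value_bid(p_conversion=0.05, conversion_value=120.0,
                                    target_roas=4.0)
low_value_bid = expected_value_bid(p_conversion=0.05, conversion_value=40.0,
                                   target_roas=4.0)

# If the value passed back is misaligned with real margin, the system still
# computes this faithfully -- toward the wrong goal.
print(high_value_bid, low_value_bid)
```

Swap the value signal and the “optimal” bid changes with it, which is exactly why the quality of the conversion data you feed back dominates everything downstream.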

Dig deeper: Bidding and bid adjustments in paid search campaigns

The signals advertisers can influence

Paid search algorithms observe a vast range of signals, many of which are inferred by the platform and not directly controllable by you. These include user intent signals, behavioral patterns, and competitive dynamics.

While many signals sit outside of our control, there’s still a meaningful set of levers you control that shape how algorithms learn: campaign structure, budgets and bid targets, audience and creative inputs, and, above all, your conversion settings.

These inputs shape how the algorithm explores and learns. They help define the environment in which optimization occurs. But they don’t, by themselves, define what success looks like. That role is played by conversion data.

Dig deeper: Conversion rate: how to calculate, optimize, and avoid common mistakes

Conversion data: The most important signal

When performance plateaus, the first instinct is to blame structure, budgets, or creative. In reality, the biggest lever you have available usually sits elsewhere: conversion data. 

In most accounts, conversion data is the most influential signal you control. It defines the outcome the algorithm is trained to pursue and directly informs prediction models, bid calculations, and learning feedback loops.

When conversion setups are misaligned, overly broad, duplicated, or noisy, platforms still optimize efficiently, just not toward outcomes the business actually values. This is why, at times, you can show improving platform metrics while your commercial performance stagnates or deteriorates.

A common mistake is focusing on increasing conversion volume rather than improving conversion quality. Volume accelerates learning, but if the signal is weak, faster learning just means faster optimization toward a suboptimal goal.

In practice, refining what counts as a conversion often delivers greater performance gains than structural or tactical changes elsewhere in the account.

Dig deeper: Why a lower CTR can be better for your PPC campaigns

Aligning conversion signals with real business KPIs

Before any optimization begins, define what success genuinely means for your business. Paid search platforms don’t have intrinsic knowledge of your revenue quality, profitability, or downstream value. They only see what is explicitly passed back to them.

Misalignment typically appears in predictable forms:

  • Revenue is used as the primary signal when margins vary significantly.
  • Lead submissions are optimized without regard to lead quality or sales outcomes.
  • Short-term efficiency metrics are prioritized over long-term value.

In each case, the algorithm is doing exactly what it has been instructed to do. The issue isn’t optimization accuracy, but goal definition. If an increase in a given conversion wouldn’t be seen as a win by the business, it shouldn’t be the primary signal used for optimization.

Dig deeper: 3 PPC KPIs to track and measure success

Strengthening conversion signals with richer, more resilient data

Conversion quality is determined by how confidently the platform can identify and interpret a tracked event.

Browser-based tracking alone is increasingly incomplete due to privacy controls, attribution gaps, and fragmented user journeys. As a result, ad platforms rely on a combination of browser-side and server-side data to improve matching and attribution. This isn’t just a measurement problem: it directly affects how confidently platforms can learn from your conversions.

Stronger conversion signals are typically characterized by multiple reinforcing parameters, including:

  • First-party identifiers, such as hashed personal data passed via enhanced conversion frameworks.
  • Click identifiers that connect conversions back to ad interactions.
  • Transaction or event IDs that prevent duplication.
  • Accurate conversion values.
  • Session- and network-level attributes that improve attribution confidence.

When a conversion can be recognized through multiple mechanisms, platforms can match it more reliably and use it in learning models with greater confidence. This improves reporting accuracy and bidding performance by reducing feedback loop uncertainty.
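A minimal sketch of the first three items, assuming a generic enhanced-conversion-style payload (the field names are illustrative, not any platform’s exact API): identifiers are normalized and hashed before they leave your server, and a transaction ID guards against double counting.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim and lowercase the address, then SHA-256 hash it -- the common
    normalization used by enhanced conversion frameworks."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Illustrative payload; field names are assumptions, not a specific API.
conversion_event = {
    "hashed_email": normalize_and_hash("  Jane.Doe@Example.com "),  # never send raw PII
    "click_id": "EXAMPLE_CLICK_ID",      # connects the conversion to the ad click
    "transaction_id": "order-10042",     # lets the platform deduplicate repeats
    "conversion_value": 89.90,
    "currency": "EUR",
}

# The same customer hashed the same way always matches:
assert conversion_event["hashed_email"] == normalize_and_hash("jane.doe@example.com")
```

Because normalization is deterministic, the platform can match the hashed identifier against its own records without ever seeing the raw address.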

Dig deeper: How to track and measure PPC campaigns

Choosing conversion goals

Selecting the right conversion goal isn’t a binary decision. It involves balancing several competing factors:

  • Volume: Higher volumes support faster learning.
  • Value accuracy: Closer alignment with business outcomes improves decision quality.
  • Stability: Highly variable values can introduce noise.
  • Latency: Delayed feedback slows learning and increases uncertainty.

Higher-volume, faster conversions often sit further away from true commercial outcomes, while lower-volume, high-quality conversions may better reflect business value but risk data sparsity. The most effective setups acknowledge these trade-offs rather than attempting to eliminate them entirely.

In many cases, the optimal solution involves using proxy or layered conversion goals that strike a balance between learning speed and value accuracy.

Dig deeper: How to use proxy metrics to speed up optimization in complex B2B journeys

Practical examples of selecting and strengthening conversion goals

Ecommerce optimization based on gross margin, not revenue

For ecommerce, optimizing toward order value assumes all revenue is equal. In reality, product margins often vary widely. When revenue alone is used as the optimization signal, algorithms may prioritize high-value — but low-margin — products.

A more effective approach is to optimize for gross margin by passing margin-adjusted conversion values via server-side tracking or offline conversion imports. This allows bidding systems to prioritize your business’s profitability rather than top-line revenue, without exposing sensitive cost data client-side.
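In the simplest case, the adjustment is just multiplying each line item by a category margin before the value is reported. A sketch, with made-up categories and margin rates:

```python
# Sketch: report gross margin, not revenue, as the conversion value.
# Margin rates per category are illustrative assumptions.
MARGIN_BY_CATEGORY = {
    "electronics": 0.12,   # high price, thin margin
    "accessories": 0.55,   # low price, rich margin
}

def margin_adjusted_value(order_lines):
    """Sum margin across (category, unit_price, quantity) order lines."""
    return sum(price * qty * MARGIN_BY_CATEGORY[category]
               for category, price, qty in order_lines)

order = [("electronics", 900.0, 1), ("accessories", 25.0, 2)]
revenue = sum(price * qty for _, price, qty in order)   # 950.0 top-line
value = margin_adjusted_value(order)                    # ~135.5 gross margin

# A revenue-optimizing bidder chases the 900 sale; a margin-optimizing
# bidder sees the accessories line punching far above its price.
```

Running this server-side (or via offline conversion imports) keeps the margin table out of client-side code entirely.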

Lead generation with long conversion latency

In lead gen models where final outcomes occur weeks or months after the initial click, form submissions alone provide a weak signal. They are fast and high-volume, but poorly correlated with revenue.

Introducing lead scoring improves signal quality. Leads can be assigned proxy values based on known attributes and early indicators of quality, such as company size, role seniority, or engagement depth. These values can then be passed back to the platform via CRM integrations or server-side tracking, enabling value-based optimization even when final outcomes are delayed.
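As a hedged sketch (the attributes, weights, and caps below are placeholder assumptions, not a standard scoring model), a proxy value can be computed at submission time and passed back through your CRM integration:

```python
# Sketch: assign a proxy value to a fresh lead from early quality indicators,
# so value-based bidding gets a signal long before the deal closes.

SENIORITY_WEIGHT = {"c_level": 3.0, "manager": 1.5, "individual": 1.0}

def lead_proxy_value(company_size: int, seniority: str, pages_viewed: int,
                     base_value: float = 50.0) -> float:
    """Combine assumed quality indicators into a single proxy value."""
    size_factor = 2.0 if company_size >= 200 else 1.0
    engagement = min(pages_viewed / 5, 2.0)   # cap engagement influence
    return base_value * SENIORITY_WEIGHT[seniority] * size_factor * engagement

# A senior lead at a large company that read ten pages scores far above
# a junior lead at a small company that read five:
print(lead_proxy_value(500, "c_level", 10))   # 600.0
print(lead_proxy_value(20, "individual", 5))  # 50.0
```

The exact weights matter less than the correlation: if the proxy tracks eventual revenue even loosely, the bidder gets a timely, value-weighted target instead of silence.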

Optimizing toward predicted lifetime value

If you’re focused on lifetime value (LTV), there are two viable approaches: 

  • Where LTV can be reliably predicted within a short window after conversion, predicted values can be imported and used directly for optimization. 
  • If early prediction isn’t feasible for you, lead scoring or early behavioral proxies can be used instead.

In both cases, your objective is the same: provide the algorithm with timely, value-weighted signals that correlate strongly with long-term revenue, rather than waiting for delayed outcomes that are too sparse to support learning.

Key takeaways for performance marketers

Modern bidding systems are powerful pattern recognition engines, but their effectiveness is constrained by the signals they receive.

The biggest performance gains rarely come from constant restructuring or tactical tests. They come from improving the clarity, quality, and commercial relevance of your conversion data.

Conversion signals are the most influential inputs you control, and misaligned or low-quality setups will limit performance regardless of how advanced the algorithm becomes.

Regularly audit your conversion definitions and ask a simple question: “Would you genuinely celebrate an increase in this outcome?” If the answer isn’t clear, the signal likely needs refinement.

Improving conversion goals, strengthening signal quality, and balancing volume, accuracy, and latency aren’t optional. They’re among the highest-impact ways to improve paid search performance.

The checks that make or break your next website migration

Website migrations have a well-earned reputation for going wrong: even well-planned projects can see rankings slip, traffic drop, or tracking break. But most migration problems come from small oversights rather than complex technical failures.

You can reduce your risk with a staged approach. The checks you complete during staging, on launch day, and in the first few weeks after go-live often determine whether a migration stabilizes quickly or becomes a long recovery project.

Before launch: Catch issues on staging 

Most migration problems should be found and fixed on the staging site. If issues reach the live site, recovery is slower and more uncertain. Set yourself up for success with the following tips:

Keep the staging site private (even from crawlers) 

One common mistake is leaving the staging site publicly indexable. When Google crawls a staging environment, duplicate content can sometimes end up in search results. Rankings can fluctuate, and unfinished pages may end up indexed.

Make sure you have blocked crawlers from the staging site or protected it with a password so it remains invisible to search engines until the live launch.
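If password protection isn’t an option on your staging platform, a blanket disallow in robots.txt is the minimum safeguard. Note that robots.txt blocks crawling, not indexing, so URLs discovered via links can still surface; authentication remains the stronger control.

```
# robots.txt on the staging host only -- never ship this to production
User-agent: *
Disallow: /
```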

It’s not just crawlers, either. I’ve seen this happen with ecommerce sites.

Customers found the staging site, tried to place orders, and the process didn’t work. This confused customer service teams, frustrated buyers, and created avoidable pressure internally. 

Take benchmarks 

You want a baseline to help you identify real problems rather than reacting to normal short-term movement.

Record organic sessions, rankings, top landing pages, indexed pages, conversions, and site speed before transitioning to the new site to define the “normal” you will compare the new site to. 

Identify priority pages 

Focus on pages that drive traffic, revenue, or attract links. These pages need extra care during redirect mapping, content review, and testing.

Pay extra attention to internal links, redirects, and URL rules for these pages.

Dig deeper: Website migrations: a plan to keep your traffic and SEO safe

Review templates and content continuity 

Templates control titles, headings, metadata, canonical tags, structured data, copy, and media. If templates break, problems repeat across hundreds of pages. 

Check that:

  • Titles and headings are present and accurate. 
  • Canonical tags use full URLs and point to live pages. 
  • Structured data has transferred correctly. 
  • Copy, images, and internal links are intact.

This step protects more than rankings. It ensures the site still meets user needs and supports conversions.

Make sure canonical tags use full URLs and point to live pages, as explained in Google’s guide on canonical URLs. This simple step can prevent bigger headaches later. 

Be intentional about URL changes

Unnecessary URL changes are a common source of hidden damage. Changes made for design or CMS convenience often introduce risk without a clear benefit. 

Typical issues include: 

  • Adding or removing trailing slashes without a clear rule. 
  • Changing folder structures without reason.
  • Inconsistent use of uppercase and lowercase URLs.

One of the most common causes of duplicate URLs during migrations is inconsistent handling of trailing slashes. URLs with and without a trailing slash are treated as different URLs. Allowing both to resolve can create duplicate content, dilute signals, and complicate crawling. 

It doesn’t usually matter which version you choose, as long as the rule is consistent across the site. During a migration, avoid unintentionally switching between formats without a clear plan and proper redirects in place. 

The same goes for folder structures and capitalization. Don’t change what you don’t need to, and be consistent wherever possible.
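If the site standardizes on trailing slashes, one server-level rule enforces the choice sitewide instead of scattered per-page redirects. Here it is for nginx (an assumption about your stack; Apache and most CDNs have equivalents):

```nginx
# 301 any extensionless URL missing its trailing slash to the slash version.
# The [^.]* pattern skips file URLs like /logo.png.
rewrite ^([^.]*[^/])$ $1/ permanent;
```

The inverse rule works just as well; what matters is picking one convention and redirecting everything else to it.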

In one migration where we were brought in to rescue a site after go-live, every URL gained a trailing slash. Canonical tags only contained paths rather than full URLs, and internal links relied on redirects instead of pointing directly to final URLs. None of the changes were necessary, yet together they slowed crawling, caused confusion, and delayed recovery. 

Map redirects and compile existing ones 

Redirect mapping is one of the highest-risk areas of any migration. Existing redirects should be pulled from the CMS, CDN, Google Search Console, analytics platforms, and backlink tools so nothing is missed. Every legacy URL needs a clear, intentional destination. 

If pages are removed, redirect to the closest relevant alternative. If no equivalent exists, return a 404 or 410. Avoid sending everything to the homepage or top-level categories. 

Aleyda Solis’ guide to SEO for web migrations provides a strong framework for this stage.

Decide what to remove and what to create

Migrations are often seen as a good time to refresh all the content on a site. This can be done if all the stakeholders align, but it should be done methodically.

Remove outdated content carefully. Where gaps exist in the new structure, plan new pages in advance and make sure they are ready to go live when the new site is. This planning avoids lost coverage or weak redirect decisions later. 

Verify Search Console access and settings 

Ensure the site can be verified after launch and that any international or country settings are correct. 

Align stakeholders early 

Pre-launch is also about people. Developers, designers, SEO, and analytics teams need clarity on responsibilities and deadlines. Many migration issues happen through missed handovers rather than a lack of skill. 

In my experience, most migration failures are preventable before launch, when fixes are safer and faster. 

I worked on one migration where SEO was brought in after launch. The site launched with broken internal links, missing redirects for high-traffic pages, and inconsistent URL rules. Organic traffic dropped by almost 40% within two weeks, and several priority pages disappeared from search results. All of these issues were visible on the staging site but weren’t reviewed before launch. 

Make the case for SEO to be part of the planning process. It saves time, money, and headaches.

Dig deeper: Website migration checklist: 11 steps for success

Launch day: Verify everything works on the live site

Launch day is where preparation meets reality, and all teams, including SEO, developers, designers, and analytics, see the results of their planning. What worked on staging must now work on the live site. Even small oversights can immediately affect rankings, traffic, conversions, user experience, and reporting. 

Calm, thorough verification ensures the migration pays off and prevents small errors from becoming lasting issues. Use this list as a starting point:

Test redirects at scale

Spot-checking isn’t enough. Every mapped URL should redirect once and resolve cleanly. Avoid redirect chains and loops. They slow down crawling and delay signal consolidation. 
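This check is easy to script against a crawl export. A minimal sketch, assuming you can dump each legacy URL and its redirect target into a map (the sample URLs are invented):

```python
# Flag redirect chains and loops in a mapped set of legacy URLs.
# None marks a final destination; anything else is a redirect target.

def check_redirect(url, redirect_map, max_hops=10):
    """Follow redirects; return (final_url, hops) or raise on a loop."""
    seen, hops = {url}, 0
    while redirect_map.get(url) is not None:
        url = redirect_map[url]
        hops += 1
        if url in seen or hops > max_hops:
            raise ValueError(f"Redirect loop or excessive chain at {url}")
        seen.add(url)
    return url, hops

redirect_map = {
    "/old-page": "/new-page",   # clean: one hop
    "/new-page": None,
    "/old-a": "/old-b",         # chain: two hops -- point /old-a at /new-b directly
    "/old-b": "/new-b",
    "/new-b": None,
}

for legacy in ("/old-page", "/old-a"):
    final, hops = check_redirect(legacy, redirect_map)
    if hops > 1:
        print(f"{legacy}: {hops} hops to {final} -- collapse to a single redirect")
```

In practice, you would populate the map from your crawler’s redirect report and run this over every legacy URL, not a sample.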

In another migration we were called in to fix, only the top 50 pages had correct redirects. Thousands of other URLs redirected to the homepage. Rankings dipped, and recovery took months longer than expected. 

Crawl the live site 

Run a full crawl as soon as the site is live. Compare results with the staging crawl to identify differences. 

Look for:

  • Broken links.
  • Redirected internal links.
  • Missing pages. 
  • Server errors.

Check internal links and navigation 

Menus, breadcrumbs, and in-content links should point directly to live URLs. Leaving internal links to rely on redirects increases load and risk. 

Verify on-page SEO and content 

Canonicals or hreflang pointing to staging URLs are a common launch issue. Confirm titles, headings, canonical tags, hreflang, copy, and media all reference the live site. 
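A quick way to catch this at scale is to scan the rendered HTML of priority pages for link tags that still reference the staging host or use path-only URLs. A minimal sketch with Python’s standard-library parser (the staging hostname is a placeholder):

```python
# Flag canonical/hreflang link tags that point at staging or use bare paths.
from html.parser import HTMLParser

STAGING_HOST = "staging.example.com"   # placeholder staging domain

class LinkTagAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower()
        if tag == "link" and rel in ("canonical", "alternate"):
            href = attrs.get("href", "")
            if STAGING_HOST in href or href.startswith("/"):
                # Staging host or path-only URL: both need fixing post-launch.
                self.issues.append((rel, href))

page = '<link rel="canonical" href="https://staging.example.com/pricing/">'
audit = LinkTagAudit()
audit.feed(page)
print(audit.issues)   # [('canonical', 'https://staging.example.com/pricing/')]
```

Run it over the live crawl's stored HTML and any non-empty issues list is a page to fix before search engines recrawl it.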

Dig deeper: How to run a successful site migration from start to finish

Confirm tracking continuity 

GA4, paid media tags, and social pixels should already be in place before launch. This ensures tracking fires correctly, conversions are measured accurately, and historical data remains intact when the live site goes public. Remember, the staging site should be blocked from crawling or be protected behind a password to prevent test traffic from polluting reporting. 

In one migration, we were asked to review after launch. The domain stayed the same, but a new GA4 property was created during the redesign. Historical data remained in the original property, while new data was collected in the new one, making post-launch comparisons difficult. 

Keeping the same GA4 property preserves reporting continuity, supports confident decision-making, and avoids unnecessary uncertainty at a critical point in the migration. 

Check robots.txt and index controls 

Ensure pages meant to be indexed are accessible and that noindex tags are only used where intended. If you use services like Cloudflare, it’s also important to check that your robots.txt and content signals are configured correctly. 

For example, Cloudflare’s default setting may block AI training access while allowing search indexing. If this isn’t adjusted intentionally, AI models might pull content from third-party sources rather than your site, affecting how your brand is represented in generative AI outputs. 
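
Python's standard library can verify index controls without a browser. A small sketch, assuming you have already fetched the live robots.txt body (example.com is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Parse the live robots.txt body directly; in a real check you would fetch
# https://www.example.com/robots.txt first.
rules = """User-agent: *
Disallow: /staging/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Pages meant to be indexed must be fetchable; staging paths must not be.
print(rp.can_fetch("*", "https://www.example.com/products/"))   # True
print(rp.can_fetch("*", "https://www.example.com/staging/old")) # False
```

Running the same checks against a list of your top URLs catches an accidental sitewide Disallow before Google does.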

Submit the XML sitemap 

Submit the live sitemap to Google Search Console to support the discovery of new URLs. 

Review site speed 

Check Core Web Vitals and page performance. A redesigned site can still load heavier assets than expected. Launch day is about verification, not assumption. 

After launch: Monitor and stabilize performance

Even the best-planned migrations can reveal surprises once search engines and real users interact with the site. Small errors that didn’t appear on staging can impact rankings, traffic, and conversions.

Calm, structured monitoring in the days and weeks after launch ensures problems are caught quickly before they affect performance or user experience. Here’s what to keep an eye on.

  • Monitor Search Console closely: Watch for crawl errors, indexing issues, and unexpected exclusions. Patterns matter more than isolated URLs. 
  • Check indexed pages: Expect some movement, but sustained drops can point to redirect or crawl problems. 
  • Track rankings and traffic against benchmarks: Compare performance against your baseline rather than reacting to day-to-day changes. 
  • Confirm redirects still receive traffic: Old URLs can attract users and bots for months. Ensure they continue to resolve correctly. 
  • Recheck site speed under real traffic: Performance can shift once the site is under load. 
  • Audit for follow-up improvements: Once stability returns, review internal linking gaps, missing metadata, and content that did not migrate cleanly.

Calm monitoring and clear data prevent small issues from becoming lasting damage.

Dig deeper: Technical SEO post-migration: How to find and fix hidden errors

What normal recovery looks like after a migration

Even well-managed migrations can see short-term movement. Rankings may fluctuate, and traffic may dip before stabilizing. 

If redirects are clean, content is intact, and crawl access is clear, recovery usually follows within weeks rather than months. Ongoing losses tend to point to structural issues rather than algorithm changes. 

Knowing when to wait and when to act comes from experience. You don’t want to react too quickly or too late. Keep a careful eye on your analytics, and you’ll develop the expertise over time.

Website migrations succeed when they are planned, tested, and monitored at every stage. A clear focus on pre-launch, launch day, and post-launch checks protects visibility, performance, and confidence across teams. 

When SEO is involved early, and checks are clearly owned, migrations stop feeling like crisis events and become managed change. 

EU signals imminent decision on Google DMA probe

Google vs. publishers: What the EU probe means for SEO, AI answers, and content rights

The EU’s top antitrust enforcer signaled a decision on whether Google is violating the Digital Markets Act is imminent, without committing to a timeline.

What she said. “It will come,” Competition Commissioner Teresa Ribera told Dow Jones Newswires, adding the cases are complex and the commission is committed to decisions based on evidence and fair procedure.

The backdrop. The European Commission launched its probe into Google’s search business in March 2024 under the Digital Markets Act. The commission gave itself a soft 12-month deadline to wrap up — it has already fined Meta and Apple, but Google’s case remains unresolved nearly two years in.

The pressure is mounting. Eighteen lobby and civil society groups wrote to Ribera this month demanding clear remedies and a fine large enough to make non-compliance unprofitable.

  • The groups warned the commission’s credibility is on the line, noting Google controls over 90% of the EU search market.
  • “Every day without a decision is a day that European businesses are systematically disadvantaged,” the letter said.

Why we care. A ruling against Google under the Digital Markets Act could force major changes to how it operates search in Europe — potentially reshaping how ads are served, ranked, and priced in one of the world’s largest markets. If remedies include structural changes to search or ad tech, it could affect campaign performance, targeting, and competition dynamics across the board. If you have European audiences, watch this closely — the outcome could ripple through Google’s global ad ecosystem.

Meanwhile, this week. Ribera is in California meeting Sundar Pichai, Mark Zuckerberg, Sam Altman, and Amazon’s Andy Jassy before heading to Washington, D.C., for talks with the acting head of the Justice Department’s antitrust division.

The big picture. Google isn’t the only one in the crosshairs. The commission has additional open probes into how Google powers AI Overviews and ranks news publishers, and is separately investigating Meta over restrictions on rival chatbots using WhatsApp’s business software.

Bottom line. The EU has been slow to act on Google, but pressure is clearly building. When the decision lands, it could set a significant precedent for how the Digital Markets Act is enforced.

How AI-generated content performs in Google Search: A 16-month experiment

AI content rise and fall

With AI, you can generate dozens (if not hundreds) of articles in hours and publish at scale. But publishing is the easy part. What happens after they go live is what matters.

Together with the research team at SE Ranking, we ran a 16-month experiment to track how well AI-generated content performed on brand-new domains with zero authority.

As you will see, the results are hard to call a success.

Here’s the full story behind our experiment.

Methodology

The goal was simple: test how far AI content — with no human editing, rewriting, or enhancement — could go in search.

How quickly would it get indexed? Could it rank for relevant queries? Most importantly, could it drive traffic?

We started by purchasing 20 new domains with no backlinks, domain authority, brand recognition, or search history.

Each domain focused on a different niche, covering topics such as:

  • Arts & Entertainment
  • Business & Services
  • Community & Society
  • Computers & Technology
  • Ecommerce & Shopping
  • Finance & Accounting
  • Food & Drink
  • Games & Accessories
  • Health & Medicine
  • Industry & Engineering
  • Hobbies & Interests
  • Home & Garden
  • Jobs & Career
  • Law & Government
  • Lifestyle & Well-being
  • Pets & Animals
  • Science & Education
  • Sports & Fitness
  • Travel & Tourism
  • Vehicles & Boats

For each niche, we gathered 100 informational “how-to” keywords—long-tail terms with lower competition.

Each site received 100 AI-generated articles, totaling 2,000 pieces across the experiment.

After publishing, we added the sites to Google Search Console and submitted sitemaps.

From that point on, we left the sites untouched to observe performance over time.

Timeline & key results 

Month 1: indexing and early visibility

About 71% of new AI-generated pages were indexed within the first 36 days. They generated over 122,000 impressions and 244 clicks. Even at this early stage, 80% of sites ranked for at least 100 keywords each.

Months 2–3: growth continues

Cumulative impressions grew to over 526,000, with 782 clicks. Content continued to perform well without backlinks, promotion, internal linking, or additional SEO tactics.

Months 3–6: ranking collapse

By about three months, only 3% of pages remained in the top 100. Early relevance helped pages get indexed and briefly appear in search, but without authority, uniqueness, or E-E-A-T signals, rankings dropped sharply. Google still indexed the pages, but users rarely saw them.

Month 16: long-term stagnation

After over a year, visibility remained low across most sites. Impressions and clicks were minimal, and no site showed meaningful recovery. After the August 2025 Google spam update, pages ranking in the top 100 rose to 20% — up from 3% at six months.

Month 1: indexing and early visibility

Just over a month after publication (36 days), the first results came in — and they were stronger than expected for brand-new sites.

Of 2,000 articles, 70.95% were indexed (1,419 pages). For zero-authority domains, that’s notable, as getting new sites fully indexed is often a challenge. This shows Google is still willing to crawl and index AI-generated content in most cases.

Some sites performed particularly well. Eleven of the 20 domains had all 100 pages indexed.

  • Most were in broad, evergreen niches like Food & Drink, Home & Garden, Jobs & Career, and Lifestyle & Well-being.
  • More competitive or specialized areas, like Ecommerce & Shopping, saw slower indexation, likely due to stricter evaluation.

Along with indexation came early visibility. During this first month, the sites collectively generated:

  • 122,102 impressions
  • 244 clicks

Several niches stood out, generating more than 10,000 impressions in the first month alone:

  • Hobbies & Interests: 17,425 impressions
  • Business & Services: 17,311 impressions
  • Travel & Tourism: 13,598 impressions
  • Lifestyle & Well-being: 13,072 impressions
  • Law & Government: 11,794 impressions
  • Games & Accessories: 11,083 impressions
  • Vehicles & Boats: 10,677 impressions

In terms of keyword coverage, many sites performed surprisingly well within the first month. Eight sites ranked for more than 1,000 keywords, while another eight ranked for 100 to 1,000.

Even at this early stage, 80% of sites with fully AI-generated content appeared in search for hundreds or thousands of queries.

Notably, over 28% of ranking URLs were already in the top 100. Within the first month, many pages reached positions where searchers could see them.

Overall, these results show AI-generated content can gain traction quickly—even without backlinks, editorial input, or additional SEO work. In the short term, content alone was enough to get indexed and appear in search.

Months 2–3: growth continues

This early visibility wasn’t short-lived. Over the following weeks, impressions and clicks kept growing as Google Search discovered and tested pages.

By about two and a half months after publication, cumulative results across all sites had grown:

  • Impressions: 122,102 to 526,624
  • Clicks: 244 to 782

Keyword coverage also expanded:

  • 12 sites ranked for 1,000+ keywords (up from 8 in the first month).
  • The remaining 8 sites ranked for 100–1,000 keywords.

This pattern is typical for new sites. When Google finds fresh content that matches real queries, it tests that content across results. Pages appear for related queries as Google evaluates their helpfulness.

That’s what happened here. Even without backlinks, internal linking, or SEO improvements, the content gained exposure because it targeted low-competition queries and followed basic SEO structure.

At this stage, it could look like a strong case for large-scale AI content. The sites were new, the content fully AI-generated, and impressions kept rising.

But the growth didn’t last.

Months 3–6: the ranking collapse

Around Feb. 3, 2025, roughly three months after publication, the experiment hit a turning point.

  • Only 3% of pages remained in the top 100, down from 28% in the first month. 

In practical terms, the content remained indexed but rarely appeared where users could see it.

Early relevance can help pages get indexed and appear in search results for a time. Without stronger signals — authority, E-E-A-T, unique insights — those rankings are hard to sustain.

By the six-month mark, Google Search Console showed the following cumulative totals across all sites:

  • Impressions: 526,624 to 706,328 
  • Clicks: 782 to 1,062

At first glance, these numbers suggest continued growth. But that’s not what happened.

Most activity occurred early. In the first 2.5 months, the sites generated roughly 70% to 75% of total impressions and clicks. Over the next 3.5 months, growth slowed sharply, adding only 25% to 30%.
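
The share figures follow directly from the cumulative totals above:

```python
# Reproduce the "70% to 75%" share from the cumulative GSC totals above.
impressions_2_5mo = 526_624   # cumulative impressions at ~2.5 months
impressions_6mo = 706_328     # cumulative impressions at 6 months
clicks_2_5mo, clicks_6mo = 782, 1_062

imp_share = impressions_2_5mo / impressions_6mo    # about 0.746
click_share = clicks_2_5mo / clicks_6mo            # about 0.736
print(f"{imp_share:.1%} of impressions and {click_share:.1%} of clicks "
      "landed in the first 2.5 months")
```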

Month 16: the long-term picture

The experiment ran for over a year to see if rankings would recover.

For the most part, they didn’t.

After the drop around the three-month mark, visibility remained extremely low for the rest of the experiment.

There were a few brief fluctuations. The most notable came in late August 2025.

Starting in August, 50% of sites (10 out of 20) saw a two-week spike in impressions. This closely aligned with the rollout of the Google August 2025 spam update, which began Aug. 26.

However, the boost didn’t lead to a sustained recovery.

Among the sites that saw a short-term lift:

  • Six quickly lost visibility and returned to prior lows
  • Four maintained slightly improved performance, similar to early post-publication levels

Following the update, pages ranking in the top 100 rose to 20% — up from 3% at six months. This remained below the 28% seen in the first month, but the August 2025 spam update appeared to have improved some rankings.

In total, 66.9% of pages were still indexed, up slightly from 61.45% at six months.

The following sites had some of the lowest numbers of indexed pages:

  • Finance domain (9 of 100)
  • Health domain (14 of 100)

This is likely due to their YMYL nature, where Google applies stricter quality and trust standards.

By month 16, cumulative results across all sites were:

  • Impressions: 706,328 to 1,092,079
  • Clicks: 1,062 to 1,381

Most impressions still came from the early growth phase, before rankings dropped.

Why SEO visibility didn’t last

The most obvious explanation is that the content didn’t meet Google’s quality standards — and understandably so.

The 2,000 articles lacked many signals Google uses to assess quality and trust:

  • Authority. No backlinks or external validation. Without these, new domains struggle to compete with established sites.
  • Expertise and credibility. No authors, credentials, or real-world expertise — especially critical in finance, health, and law.
  • Content differentiation. Much of the content resembled what already exists. Without unique insights, pages struggle to stand out.
  • Site structure. No internal linking, topical organization, or clear hierarchy to help Google understand page relationships.

Google can identify AI-generated patterns. Without authority, uniqueness, or supporting signals, early visibility declines.

Bonus insight: how new AI content supports existing pages

In early March 2026, we ran a follow-up experiment, adding new AI-generated content to eight tracked sites.

As of March 13, not all new content has been indexed. However, sites with new content already show a noticeable increase in search impressions.

Interestingly, this lift comes primarily from older posts, not the newly published ones.

For example:

  • Business-focused website: from 458 impressions in February 2026 to 7,750 in March 2026 (17x increase).
  • Law-focused website: from 19 impressions in February 2026 to 356 in March 2026 (19x increase).
  • Science-focused website: from 34 impressions in February 2026 to 633 in March 2026 (19x increase).

This experiment shows that publishing new content—even fully AI-generated—can lift traffic to older pages that had been stagnant for months. Fresh content may signal to Google that the site is active and up to date, giving the site a temporary boost.

However, these are early results and don’t guarantee lasting gains in rankings or traffic.

Key takeaway: AI can speed up content creation, but not replace SEO

The results of this 16-month experiment don’t mean AI content is useless. They show AI alone isn’t enough to drive lasting impact.

Early traffic and impressions may look promising, but without a clear SEO strategy and human guidance, those gains will likely fade within a few months.

Google Ads API to block duplicate Lookalike user lists

In Google Ads automation, everything is a signal in 2026

A quiet but important change is coming to the Google Ads API that will affect how advertisers and developers create Lookalike user lists, especially for Demand Gen campaigns.

What’s changing. Google will enforce a uniqueness check on Lookalike user lists, blocking duplicate lists with the same seed lists, expansion level, and country targeting. Attempts to create a duplicate will return an API error after April 30.

Why we care. If you use automated scripts or third-party tools to generate audience lists, an unhandled error could quietly break your campaign workflows if you don’t update integrations in time.

What you need to do.

  • Audit existing Lookalike lists and reuse ones that already match your intended configuration rather than creating new ones
  • Update your API error handling to catch the new DUPLICATE_LOOKALIKE error code in v24 and above, or RESOURCE_ALREADY_EXISTS in earlier versions
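
One way to avoid hitting the new error at all is a client-side pre-flight check: derive a uniqueness key from the three fields Google says must be unique together, and look up existing lists before creating a new one. A minimal sketch; the key structure, function names, and the "BALANCED" expansion value are illustrative assumptions, not the Google Ads API itself:

```python
# Hypothetical pre-flight dedupe for Lookalike user lists. The uniqueness
# constraint covers seed lists, expansion level, and country targeting.

def lookalike_key(seed_list_ids, expansion_level, countries):
    # Ordering of seeds and countries should not affect uniqueness, so sort.
    return (tuple(sorted(seed_list_ids)), expansion_level, tuple(sorted(countries)))

def find_existing(existing, seed_list_ids, expansion_level, countries):
    """Return the existing list's resource name on a duplicate config,
    otherwise None (meaning it is safe to create a new list)."""
    return existing.get(lookalike_key(seed_list_ids, expansion_level, countries))
```

Keeping `existing` populated from a periodic audit of your account means automation reuses matching lists instead of triggering DUPLICATE_LOOKALIKE errors after April 30.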

Bottom line. This is a housekeeping change to keep Google’s systems stable, but the April 30 deadline is firm. If you manage campaigns programmatically, treat this as a technical to-do before the end of April.

Google’s announcement. Upcoming changes to Lookalike user lists in the Google Ads API, starting April 30, 2026

ChatGPT ads pilot leaves advertisers without proof of ROI

ChatGPT growth

OpenAI is moving forward with ads in ChatGPT, but early adopters say it isn’t ready for serious performance marketing.

The big picture. ChatGPT’s ad product shares almost no data, lacks automated buying tools, and offers minimal targeting—leaving advertisers with little ability to measure whether their spend is doing anything, The Information reported.

What advertisers are dealing with. SEO consultant Glenn Gabe outlined the issues:

  • No automated way to buy ad space — deals happen over calls, emails, and spreadsheets.
  • No meaningful performance data to evaluate outcomes.
  • Two agency executives told The Information they couldn’t prove the ads drove measurable business results for clients.

Why we care. If you’re considering ChatGPT as an ad channel, the lack of performance data means you’re spending blind — with no reliable way to prove ROI to clients or stakeholders. As OpenAI prepares to scale ads to all U.S. free users, the audience will grow, but measurement tools haven’t caught up. If you jump in now, keep expectations tight and treat it as experimental budget, not a performance channel.

What’s coming. OpenAI told advertisers it plans to show ads to all U.S. users on free and low-cost ChatGPT tiers in the coming weeks — a major expansion. It also advised that performance may improve if you supply more variations of text and visual creative.

The irony. OpenAI builds some of the world’s most sophisticated AI, but its ad reporting tools are stuck in the spreadsheet era.

Bottom line. ChatGPT ads are about to reach a much larger audience, but there’s no way to prove they have value yet. If you enter now, you’re largely flying blind — and paying for it.

Credit. Gabe shared highlights from The Information’s article (subscription required) on X.

Why zero-click search doesn’t mean zero influence


In a recent keynote at the Industrial Marketing Summit, Rand Fishkin argued that we’re marketing in a “zero-click world.” His observation captures an important surface-level trend: fewer users are clicking through to websites.

The deeper shift, however, is structural. What has changed is the way information is evaluated, repeated, and trusted across the web — and that’s where many are drawing the wrong conclusion.

As clicks decline, it can look like websites matter less. In reality, their role in shaping what gets seen and trusted may be increasing.

Why ‘zero-click’ discussions often lead to the wrong conclusion

From a traffic perspective, the trend is unmistakable. Clicks are declining in many contexts.

  • Search engines now answer many questions directly on the results page.
  • Social platforms function as discovery engines where people research ideas, products, and services without leaving the platform.
  • AI assistants synthesize answers from across the web before a user ever sees a list of links.

Part of the reason the zero-click discussion resonates so strongly is that it disrupts the way we’ve historically measured visibility. For more than two decades, traffic and click-through rates have served as the primary signals for forecasting performance and evaluating the impact of search.

When answers appear directly in search results, AI summaries, or platform conversations, those interactions often occur outside the analytics frameworks we’re accustomed to using.

The conclusion many draw from this trend — that websites matter less — is an incomplete assessment. The role of websites is changing, but their importance in the information ecosystem hasn’t disappeared. In some ways, it may be increasing.

The reason has to do with how modern information systems determine what to trust. Large language models and AI-driven search interfaces don’t evaluate truth the way humans do. They rely on probabilistic signals drawn from the information available across the web.

When the same message appears consistently across multiple independent sources, the statistical likelihood that the information is correct increases. Visibility in this environment is determined by where information appears.

Dig deeper: Why surface-level SEO tactics won’t build lasting AI search visibility

Fishkin is right about the trend

The fragmentation of discovery is real. Information consumption now happens across many environments: search results, social feeds, community forums, video platforms, and AI interfaces.

Users frequently encounter answers without needing to click a link. 

  • A search result might contain an AI summary. 
  • A product recommendation might appear in a Reddit thread. 
  • A professional insight might circulate on LinkedIn.

From a traditional web analytics perspective, these interactions can appear as lost traffic. However, focusing exclusively on clicks misses the more important question: where does the information itself originate?

The environments where people consume information are expanding, but the underlying knowledge those systems rely on still has to come from somewhere.

Zero-click doesn’t mean zero influence

The critical distinction you need to understand is the difference between traffic and information influence.

  • Traffic measures whether a user visited your website. 
  • Influence measures whether the information you produced shaped the answer someone received.

AI systems don’t generate answers out of thin air. They construct them from patterns learned across the open web.

When an LLM answers a question about a legal issue, a technical concept, or a marketing strategy, it draws on the analysis, explanations, and original thinking that publishers have already placed online.

Even in a zero-click environment, those sources continue to exist. They continue to shape the answers. The difference is that influence increasingly occurs earlier in the information pipeline, before the user even reaches a website.

Fewer clicks don’t mean fewer sources. In practice, the shift often increases the value of authoritative sources because AI systems depend on them to construct coherent responses. Without expert explanations, detailed analysis, and original insight, there’s nothing for the system to synthesize.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both

The role of ‘rented land’

In discussions that follow the “zero-click world” framing, the recommendation is that brands should focus more heavily on platforms they don’t control — social networks, communities, and other forms of “rented land.”

Brands can think of their visibility footprint as two categories of territory: 

  • Owned land, where they control the infrastructure and content.
  • Rented land, where their message appears on platforms they do not control.

Owned land includes assets such as a company website, product documentation, knowledge bases, and other first-party content environments. These are places where a brand controls the structure, the message, and the permanence of the information.

Rented land includes platforms such as LinkedIn, Substack, industry publications, forums, podcasts, and social media environments where the brand participates but does not control the underlying platform.

In an AI-mediated discovery environment, both types of territory matter. Owned land provides the canonical source of information. Rented land distributes that information across the broader ecosystem where AI systems encounter it.

These platforms are powerful environments for discovery, amplification, and conversation. They are often where audiences encounter brands for the first time and where ideas circulate widely. However, they rarely serve as the place where authority itself is established.

Authority tends to emerge from deeper forms of publishing: 

  • Long-form explanations.
  • Original analysis.
  • Research.
  • Consistent demonstrations of expertise over time. 

These forms of content typically live on first-party websites, where ideas can be developed fully and preserved as reference points. Rented platforms still influence how AI systems interpret information, but their role differs from that of first-party publishing. 

When a brand, concept, or explanation appears consistently across multiple environments — first-party sites, industry publications, social platforms, and other third-party mentions — the association between that entity and the idea becomes stronger.

Repeated exposure stabilizes the relationship between the brand and the concepts connected to it. As a result, the likelihood that the brand will be included in an AI-generated answer increases.

Platforms amplify the signal. First-party publishing is where the signal originates.

Dig deeper: How paid, earned, shared, and owned media shape generative search visibility

Why AI often favors primary sources

Another misconception in the zero-click discussion is the assumption that AI systems primarily rely on aggregated or repackaged information. In practice, the opposite often occurs. 

When AI systems generate answers, they frequently rely on sources that provide clear explanations, detailed reasoning, and subject-matter expertise. These characteristics are more common in original publishing than in aggregated content.

Legal blogs, technical documentation, research publications, and expert commentary often perform well in AI citations because they provide usable knowledge. The material contains context, reasoning, and structured explanations that models can extract and synthesize.

Aggregated summaries frequently lack that depth. Without detailed explanation or original analysis, the content provides limited value for AI systems attempting to construct coherent answers.

The result is a quiet shift in visibility. Domains that consistently publish authoritative explanations may become more influential in AI-generated answers, even if traditional click-based metrics decline.

The real shift you should understand

Websites still matter, but their role is changing. They’re no longer just traffic generators.

In an AI-mediated information ecosystem, websites function as knowledge sources, training signals, and citation anchors — where expertise is documented, and ideas originate.

Platforms distribute those ideas, conversations amplify them, and AI systems synthesize them into answers. The source of the underlying knowledge, however, still matters.

The marketing implication is straightforward. Success can’t be measured solely by clicks. The objective is to ensure that credible expertise exists in durable forms that can be discovered, referenced, and synthesized wherever information surfaces — whether in search results, AI-generated responses, or discussions on other platforms.

Content that is clear, authoritative, and genuinely useful will continue to shape the answers people receive. In a zero-click world, influence simply happens earlier in the information pipeline.

Dig deeper: Content marketing in an AI era: From SEO volume to brand fame

Why ‘search everywhere’ is the new reality for SEO

Search everywhere is the real shift in SEO

Most SEO discussions today center on AI — from AI Overviews to ChatGPT and other LLMs — and the concern that they’re taking traffic from business websites, forcing a shift toward GEO or AEO.

For the most part, that concern is valid. AI is reducing traffic for many sites, especially those that rely on top-of-funnel, informational content. But the data suggests AI may not be the biggest shift.

User behavior has been fragmenting across platforms for years, and I see this play out in agency work every day.

Here’s what the data shows about how search behavior is changing across platforms, and why a “search everywhere” strategy matters more than focusing on LLMs alone.

Third-party platforms are encroaching on traditional search

People search TikTok for restaurants, YouTube for tutorials, Reddit for authentic reviews, and Amazon to buy products. In many cases, these platforms are replacing traditional search engines like Google and Bing as the starting point.

This shift isn’t just about behavior — it shows up in traffic, too. Amazon and YouTube still drive far more desktop traffic than ChatGPT, a trend Rand Fishkin recently highlighted.

Recently, I helped run a comprehensive share of voice analysis for a client. The goal was threefold:

  • See which competitors are winning in traditional search across multiple service lines.
  • Find keyword and content gaps.
  • Create a content roadmap based on priority to fill these gaps.

The analysis revealed a lot of helpful data, but one of the most interesting takeaways was that our core competitors weren’t actually our biggest competitors in traditional search. YouTube and Reddit were.

Share of voice - Client example

These platforms rank well in traditional search, take up valuable SERP real estate, and move users away from Google and Bing to funnel them back to their own platforms.

The analysis highlighted a key point: if you don’t focus any effort on these places, you’re not only missing out on visibility in traditional search, but you’re also missing valuable attention when users navigate off Google and start watching videos or reading threads.

And this website isn’t the only one seeing this trend. Run the same analysis yourself to see who your actual competitors are in traditional search. The answers may surprise you.

Dig deeper: Why social search visibility is the next evolution of discoverability

Third-party platforms can have higher search volumes

As seen above, platforms like YouTube and Reddit are increasingly occupying traditional SERP real estate. But what about searches within the platforms themselves? Depending on the query, there may be far more search volume on these platforms than on Google or Bing.

For example, YouTube dominates in tutorials and “how-to” content. A term like “how to fix a leaky sink faucet” has 15x the search volume on YouTube that it has in traditional search globally.

How to fix a leaky sink faucet - Semrush
Source: Semrush
How to fix a leaky sink faucet - vidIQ
Source: vidIQ

Search volumes are estimates. But if you want to get in front of the right people where they’re searching, any content strategy around a term like this, or a similar topic, must include creating a YouTube video.

Better yet, to be search-everywhere-friendly, create a blog post and embed that video in it.

Dig deeper: YouTube is no longer optional for SEO in the age of AI Overviews

Get the newsletter search marketers rely on.


Third-party platforms are cited more in LLMs

Aside from traditional search and in-platform search, we also know that “search everywhere” influences AI-generated results.

To provide answers, LLMs need content to synthesize. More often than not, that content isn’t coming from business websites, but from third-party sources and social platforms.

AI visibility tools can quickly show businesses the power of search everywhere in relation to citations. Take a look at these examples:

Brand A
Brand B

These are two completely different brands, yet the trends are the same: a very small percentage of citations come from your own website or even direct competitors.

In both examples, almost 90% of citations come from third-party news and online publications, or social and forum platforms like Reddit or Quora.
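Producing a breakdown like this from an AI visibility tool's citation export is a simple categorization exercise. The sketch below is illustrative: the domains, the category mapping, and the sample citation list are all assumed, not drawn from either brand's actual data.

```python
# Hypothetical breakdown of LLM citation sources by category.
# Assumes an exported list of cited domains from an AI visibility tool.
from collections import Counter

# Illustrative domain-to-category mapping; extend it for your niche.
CATEGORIES = {
    "reddit.com": "social/forum",
    "quora.com": "social/forum",
    "nytimes.com": "news/publication",
    "techradar.com": "news/publication",
    "yourbrand.com": "own site",
}

# Sample export: one entry per citation (duplicates are meaningful).
citations = [
    "reddit.com", "quora.com", "nytimes.com", "techradar.com",
    "reddit.com", "nytimes.com", "yourbrand.com", "reddit.com",
]

counts = Counter(CATEGORIES.get(domain, "other") for domain in citations)
total = len(citations)
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.0f}%")
```

Even on a toy sample like this, the "own site" bucket is a sliver of the total, which is exactly the pattern the real exports show.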

The takeaway here is that focusing on your own website, in the context of LLM citations, can only go so far. If you want to improve brand sentiment or ensure that information is accurately reflected by AI, it needs to happen in places outside of your direct control.

Dig deeper: SEO’s new battleground: Winning the consensus layer

Start investing in search everywhere today

The competitive landscape is shifting, and many marketers have tunnel vision when it comes to AI. Discovery now happens across a wide range of platforms.

YouTube, Reddit, Quora, and others dominate significant portions of traditional search results and may have far more search activity within their own platforms. When AI systems generate answers, they often pull information from these platforms rather than brand websites.

To win in modern search, you need to understand where your audience is actually searching. That doesn’t stop at Google. It means showing up everywhere that shapes decisions.

AI is squeezing marketing agencies from both sides

AI promised efficiency, but it’s squeezing agency margins instead

The numbers tell a story that most agency owners already know in their gut: AI anxiety is rising fast.

In 2024, 44% of digital marketing agencies viewed AI as a significant threat to their business model. Just one year later, that number jumped to 53%, according to SparkToro’s annual State of Digital Agencies survey of hundreds of agency owners worldwide.

But here’s what makes this particularly painful: agencies aren’t just watching AI disrupt their industry from the sidelines. They’re actively using it themselves, automating tasks, reducing costs, and hoping to improve margins. All while their clients are doing the exact same thing, using AI to justify slashing budgets or bringing work in-house entirely.

It’s a squeeze play from both directions, and agencies are caught right in the middle.

The promise that became a problem

When AI tools like ChatGPT and Claude first exploded onto the scene, many agency leaders saw opportunity. 

Finally, a way to automate the repetitive, time-consuming work that ate into profitability. Content briefs, initial drafts, performance reports, and basic ad copy could all be accelerated or partially automated. The math seemed simple: use AI to do more work with fewer people, pocket the difference, and stay competitive on pricing.

Except clients did the same math — and they reached a different conclusion. When brands can spin up decent content, analyze campaign performance, or generate ad variations with a few prompts, the question becomes unavoidable: why are we paying an agency for this?

“Several services that agencies once charged a premium for are now performed in-house or by automation software,” notes Al Sefati, CEO of Clarity Digital Agency, who’s been vocal about the pressures facing boutique agencies. 

Earlier this year, Sefati had clients “put marketing on pause” despite strong performance metrics. A manufacturing client backed out of a contract entirely due to tariff uncertainty. When budgets get tight, and AI makes certain marketing tasks feel commoditized, agencies become an easy line item to cut.

The margin trap nobody talks about

Agencies adopt AI hoping to increase profits by doing more with less staff. But clients expect the cost savings to flow to them, not the agency’s bottom line.

The result? Shrinking retainers across the board.

SparkToro’s research shows that sales cycles are lengthening: more agencies now report deals taking 7-8 weeks or even 12+ weeks to close, up significantly from 2024. 

Prospects are taking longer to commit because they’re doing their own internal math: “If AI makes this cheaper and faster, shouldn’t we pay less?”

Meanwhile, client expectations haven’t decreased at all. In fact, they’ve intensified.

Progress is no longer good enough. Brands now demand tangible business outcomes, pipeline impact, revenue attribution, and demonstrable ROI on every dollar spent.

So agencies are stuck: use AI to stay efficient and risk commoditizing their own services, or refuse to adopt it and get outpaced by competitors and in-house teams who will.

Dig deeper: Why AI will break the traditional SEO agency model

The junior talent crisis nobody’s preparing for

Perhaps the most concerning finding from the research: 66% of agency owners worry that junior team members will have fewer career opportunities in the future. This goes beyond entry-level headcount to the entire talent pipeline.

Historically, agencies have relied on junior staff to handle the repetitive, foundational work: keyword research, content optimization, reporting, and campaign setup. These weren’t glamorous tasks, but they were essential training grounds. Junior marketers learned the craft by doing the work, eventually graduating to strategy and client leadership.

AI is rapidly automating precisely those tasks. And while that might seem like a net positive for efficiency, it creates a devastating long-term problem: where do future senior strategists come from if there’s no ladder to climb?

The war for senior talent is brutal. Top strategists, creatives, and media planners know their worth and demand premium compensation. Meanwhile, clients push back on fees.

The math doesn’t work unless agencies can maintain lean teams, which AI theoretically enables.

But five years from now, when those senior people retire or move on, who replaces them? If an entire generation of marketers never got hands-on experience because AI was doing the work, the industry risks hollowing itself out.

What AI can’t replace yet

Despite the disruption, there’s a clear pattern in what’s working for agencies weathering this transition.

The research shows that larger agencies (51+ employees) are reporting healthier sales pipelines than their smaller counterparts. Part of this is resources: larger shops have dedicated sales teams and can absorb economic volatility better.

But there’s something else at play.

Agencies that are surviving, and in some cases thriving, are the ones who’ve stopped trying to compete on execution alone. They’re selling something AI can’t easily replicate: strategic thought, real-world market experience, nuanced storytelling, and intelligent execution tied directly to business outcomes.

“Clients desire teams that really understand their industry,” Sefati observes.

The trend is clear: specialization is no longer optional. Generalist “we do everything” agencies are struggling most. Those with deep vertical expertise in B2B SaaS, financial services, healthcare, or ecommerce are proving that context and strategic insight still command premium fees.

This matters because AI is phenomenal at pattern recognition and execution within known parameters. But it struggles with the messy, ambiguous work of understanding a client’s competitive position, reading market dynamics, or crafting positioning that actually resonates with a specific audience.

The problem? Many agencies haven’t made this transition yet. They’re still selling and delivering services that feel interchangeable with what AI, or a capable in-house team with AI, can produce.

Dig deeper: What successful brand-agency partnerships look like in 2026

The uncomfortable truth about commoditization

A few years ago, simply having the technical skill to launch a Google Ads campaign or set up marketing automation gave agencies an edge. That’s no longer true.

As martech platforms have grown more complex and AI tools more capable, more brands have built competent internal teams. The bar for what counts as “differentiated agency value” has risen dramatically.

This is why the sales pipeline data is so revealing. 

  • Only 14% of agencies describe their current pipeline as “very healthy.” 
  • Over half say it’s just “average.” 
  • 32% admit it’s “not good.” 

These numbers have improved marginally from 2024 (when 36% said “not good”), but we’re talking about incremental gains in a fundamentally challenged environment.

Smaller agencies, those with 1-10 people, are hit hardest. They typically lack dedicated sales staff, so business development competes with client delivery for founders’ time. And when budgets tighten, brands consolidate with larger, more specialized agencies that feel less risky.

How your agency can escape the squeeze

Focus on these priorities as client demands rise and margins tighten.

Be honest about what AI has commoditized

Don’t fight AI or pretend it doesn’t exist. Be brutally honest about what AI has already commoditized, and ruthlessly focus on what it can’t replicate.

This means making some uncomfortable decisions now. Stop competing on services that AI handles well enough. If you’re still selling basic content creation, social media management, or standard reporting as core offerings, you’re volunteering to be price-shopped. 

Instead, double down on the work that requires genuine expertise: deep market understanding, strategic positioning, creative concepts that actually move the needle, and the kind of nuanced judgment that comes from having seen what works (and what fails spectacularly) across dozens of client situations.

Lead with AI, don’t hide from it

Change how you talk about AI with clients. Rather than downplaying it or treating it as a threat to hide, lead with it. 

  • “Yes, AI can generate content, and we use it to do that faster and cheaper than ever. But what AI can’t do is know that your competitors just shifted strategy, or understand why your last three campaigns underperformed despite good metrics, or recognize that your messaging is technically correct but completely misses what your audience actually cares about. That’s what you’re paying us for.”

Rethink pricing models

Hourly billing and retainers based on team size are relics of a world where labor hours correlated to value. They don’t anymore. 

Outcome-based pricing, value-based fees, and performance partnerships align agency incentives with client success, and make the AI efficiency gains work in your favor rather than against you.

Rebuild the talent pipeline

Address the junior talent crisis head-on. The agencies that figure out how to train the next generation of strategists in an AI-enabled world, by pairing them with senior experts on high-level work rather than relegating them to tasks AI now handles, will have a massive competitive advantage in five years when everyone else is scrambling for talent.

Dig deeper: How to work with your SEO agency to drive better results, faster

The old agency model isn’t coming back

The data shows 64% of agencies expect revenue growth over the next 12 months. Whether that optimism is justified depends entirely on whether agencies adapt to the new reality or keep hoping the old model comes back. It won’t.

The squeeze is permanent. But there’s a path through it for agencies willing to fundamentally rethink what they sell and how they deliver it.

Will your agency become indispensable because of how you use AI, or get bypassed entirely because clients realize they can do what you do themselves?

Duplicate website stats appear in Google paid search ads

A strange pattern has emerged in Google’s paid search results: multiple competing ads display the exact same web statistics, raising questions about a bug or an intentional design shift.

What’s happening. Several paid search ads are showing the same website statistics simultaneously, even though these signals are typically unique to each site. The uniformity makes the data look unreliable, and it’s unclear whether this is a display glitch, a test, or something more deliberate.

Why we care. Trust signals in search ads help users make informed decisions and boost click-through rates by building confidence. If those stats appear identical across competing ads, users may dismiss them as unreliable — undermining the credibility boost you rely on.

What we don’t know.

  • Whether Google is actively testing this or it’s an unintended bug.
  • How widespread the issue is across different search queries or markets.
  • Whether it’s affecting user click behavior or advertiser performance.

No official word. Google hasn’t confirmed or commented on the behavior. Paid media expert and founder Anthony Higman first spotted and flagged the anomaly on LinkedIn.

Bottom line. If trust signals can’t be trusted, they stop serving their purpose. You should watch whether this pattern spreads — or quietly disappears.

Google Ads account suspensions: What advertisers need to know

Google Ads account suspensions: What you need to know

Account suspensions are essential to “maintain a healthy and sustainable digital advertising ecosystem, with user protection at its core,” according to Google Ads.

For advertisers, though, navigating the suspension process can be a minefield. Suspensions can happen suddenly, limit what you can do in your account, and, in some cases, affect related accounts as well.

Here’s what triggers account suspensions, the different types you might encounter, and what to do if your account is flagged or suspended.

Why do accounts get suspended?

Accounts get suspended when Google Ads finds a violation of one of its policies. The platform uses a combination of automated systems and manual reviews when detecting violations.

The review covers the account itself along with your customer reviews, business practices, and website content.

In November 2025, Google addressed concerns that a large volume of accounts were being unfairly suspended by announcing that it had improved the accuracy of the system.

Google says that, by using new processes and AI, it’s reduced incorrect suspensions by over 80% and improved resolution times by 70%, with 99% of suspensions now resolved within a 24-hour window.


How Google Ads suspends accounts and what happens next

Depending on the violation, accounts may be suspended immediately upon detection. In other cases, advertisers will be given a prior warning of at least seven days before the suspension takes place.

Advertisers will be notified via email, along with a red banner at the top of their Google Ads account. When an account is suspended:

  • Ads will not run.
  • You won’t be able to create any new content, such as ads, ad groups, or campaigns.
  • You can, however, still access the account to review historical data and reports.

In some instances, accounts related or linked to the suspended account may also be suspended, such as linked Merchant Center accounts or those linked to the same manager account. These will be lifted if or when the original suspension is resolved.

Dig deeper: Google Ads’ three-strikes system: Managing warnings, strikes, and suspension

What are the different types of account suspensions?

Not all suspensions are the same. Google Ads groups them into a few main categories, each with different causes and outcomes.

Policy violations

These suspensions are due to violations of Google Ads policy or its terms and conditions. Common examples include: 

  • Inappropriate or restricted content.
  • Issues related to editorial requirements.
  • Misuse of data. 

Egregious violations

These are suspensions for behavior Google Ads deems unlawful or harmful. They typically reflect the overall practices of a business, not just its campaigns or accounts. As such, the suspension is unlikely to be overturned and will probably be permanent.

Common egregious violations include:

  • Circumventing systems.
  • Unacceptable business practices.
  • Malicious software.
  • Counterfeiting.
  • Illegal activities.

Other suspensions

Other reasons why an account may be suspended include:

  • Suspicious payment activity.
  • Unpaid balance.
  • Promotional code abuse.
  • Unauthorized account activity.
  • Failure to meet age requirements.

What to do if your account is suspended

What you should do next depends on the type of suspension and what caused it.

Policy violations

If your account has been suspended for policy or terms and conditions violations, you must resolve the issue causing the suspension before submitting an appeal.

The Google Ads help guides contain detailed information on these policies, so make sure you read them thoroughly. Don’t submit an appeal until you’re certain that you’ve made the relevant changes.

For example, if you’ve been suspended for violating editorial requirements, review your ad copy to check for potential issues regarding capitalization, spacing, spelling, and symbols.

If you’re uncertain about the violations that caused the suspension and how to fix them, you can use the account troubleshooter beta to determine what steps need to be taken.

Head over to the Google Ads account suspensions overview page and follow the instructions.

Egregious violations

Egregious violations are treated very seriously. In most cases, the suspension is permanent. However, if you genuinely believe that the suspension is baseless, then you can submit an appeal.

Make relevant changes to your account or business practices before you submit your appeal. This is important because you only get one chance to appeal an egregious violation. Take the time to review your business practices honestly and make sure you’ve done all that you can to comply.

Unauthorized account activity

In the case of an “Unauthorized account activity” suspension, Google Ads has detected suspicious activity, and your account has been suspended to protect it.

This may be triggered due to recent changes to account access, an unusual increase in your ad spend, or if your ads are sending traffic to unfamiliar destinations.

You will need to:

  • Change your Google account password immediately.
  • Check for any unfamiliar devices signed in to your account.
  • Submit a compromised account form.

Other suspensions

In many of these cases, billing issues cause suspension, so check the billing section of your account. Ensure that billing information is accurate, your payment method is up to date, and recent payments haven’t been declined.

If your account has been suspended for a billing or payment issue, you must fix this within 30 days. You may also be required to complete the advertiser verification program to confirm your identity or business operations.

Verified advertisers show in the Ads Transparency Center, which plays a part in Google’s efforts to build a safe and positive experience.

Best practices for submitting an appeal

While the specific steps you need to take will depend on the type of suspension your account is under and what caused it, there are some best practices for submitting your appeal:

  • Ensure that you’ve submitted your advertiser verification, as this will help the system verify your identity and business authenticity.
  • If you recognize that you’ve made an error, for example, opening a new account for a business when there was already a dormant account created before you joined, be upfront and honest about this information.
  • If you believe that the suspension has been made in error, then provide as much information, evidence, and context as possible.
  • While you’ll have a minimum of six months to submit an appeal, try to resolve the issue and submit your appeal as soon as possible. It can be very tricky to return to an account that was suspended years ago and accurately recall the steps that led to the suspension in order to address them.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

What happens after you submit an appeal

Unfortunately, many advertisers are reporting long wait times to hear back about their appeal. This means that you’ll need to be patient and wait for a response via email.

In the meantime, don’t submit additional appeals. Doing so will not increase the speed at which your appeal is addressed and may result in the suspension of your appeal process for seven days.

If your appeal is accepted and your account is reinstated

You can resume running your campaigns via Google Ads as usual.

Be careful not to violate the same policy again. Depending on the type of policy infringement, repeat violations may lead to permanent suspension.

If your appeal is denied

You may be eligible to submit another appeal, but you must make the relevant changes before you do so.

There is no hard limit on the number of appeals you can submit, but Google may stop processing them if you file too many.

For egregious violations

If your appeal is denied and you’re permanently suspended, you’ve been banned from using Google Ads. Creating any new accounts will also result in suspensions.

If you still have funds in your account, you’ll need to cancel your account to receive a refund.

Making sense of Google Ads account suspensions

Account suspensions are designed to keep advertisers and users safe. By keeping dangerous and malicious activity off the platform, they improve the Google Ads experience for everyone.

While finding out your account is suspended is frustrating, in most cases, there are steps you can take to resolve the issues behind the violation and have your account reinstated.

The latest jobs in search marketing

Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • The Lead SEO at Rival Digital is a strategic leader responsible for guiding our SEO team, driving organic growth for our home services clients, and evolving our SEO program. This role involves mentoring the team, implementing effective SEO strategies, and integrating SEO best practices across all operations. The Lead SEO will refine our SEO framework, […]
  • Job Description Invivoscribe is an industry pioneer, dedicated to Improving Lives with Precision Diagnostics®. Invivoscribe has been the global leader in driving international standardization of testing and accelerating patient access to the newest and best cancer treatments for over 30 years. Headquartered in sunny San Diego, California with locations across the world, we offer a […]
  • Position Overview As an SEO specialist, you will be responsible for optimizing our home service clients’ portfolios for search engines and driving traffic to their websites. You will work closely with our chief strategist and content team to develop and implement effective SEO strategies that align with clients’ business objectives, increase brand visibility, and improve […]
  • Company Description August Ash, Inc. exists to drive growth and innovation in every partnership by building and supporting complex website and digital marketing strategies. Guided by our core values of Care, Grow Grit, Good Nature, and Clarity, we guarantee honest answers to tough questions. Summary August Ash is seeking a Senior Digital Marketing Strategist to […]
  • Job Description Job Title: Web Designer & Digital Marketing Specialist Location: Phoenix, AZ / Hybrid Job Type: Full-Time Experience Level: Mid-Level (2–5 Years) About FirstLine Road Solutions Founded in 2022, FirstLine Road Solutions has quickly become the partner, employer, and acquirer of choice in the towing and roadside industry. Today, we support 20 independently operated […]
  • Our Snooze Story We are Snooze, the OG brunch leaders who have never stopped flipping the script on breakfast, powered by culinary creativity, unmatched hospitality, and a passion for our communities. Our Snoozers bring their authentic selves to work every day.  This allows us to serve our Guests through genuine care and radical hospitality. Joining Snooze […]
  • About CompoSecure CompoSecure, a GPGI business (NYSE: GPGI), is the leading manufacturer of Premium Metal Payment Cards and also offers best-in-class Authentication and Digital Asset solutions. The Company’s offerings combine elegance, simplicity, and security to deliver exceptional experiences and peace of mind, enabling trust for millions of people around the globe. For more information, please […]
  • Description: This role can sit in our Hayward, CA, Santa Clarita, CA, or Farmington, MI locations. Job Summary We are seeking a strategic and hands-on Digital Marketing Manager to own and run all aspects of our marketing campaigns from planning through execution and optimization. This role will lead our digital presence across paid, owned, and […]
  • Head of Digital Marketing   About the Company Top-tier organization in the consumer services industry Industry Consumer Services Type Privately Held   About the Role The Company is seeking a Head of Digital Marketing to spearhead the development and execution of comprehensive digital marketing strategies. The successful candidate will be tasked with enhancing brand awareness, […]
  • At MERGE, we are Built Different. We are a marketing and technology agency purpose-built for the intersection of health and wellness—where human impact matters most. By weaving storytelling through technology, we move beyond traditional engagement to Whole Human Marketing. This approach recognizes that humans are multidimensional and complex, and uses AI to ensure every brand interaction […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • The Senior Manager, Paid Search will be the primary architect of our Search marketing strategy. This leader will help build a program rooted in incrementality, omnichannel lift, and algorithmic efficiency. They will lead a team responsible for our Search, Shopping, PMAX and App campaigns, integrating MMM insights into tactical execution, and leading data-driven optimizations to […]
  • The Global Paid Media Specialist is responsible for the strategic execution, optimization, and performance scaling of paid digital campaigns across international markets. This role goes beyond campaign management — it owns multi-country activation strategy, localized messaging alignment, budget allocation across regions, and performance optimization across Google, Meta, and additional digital platforms. It works directly with […]
  • Role Overview As our Creative Strategist, you will be the driving force behind the ideation, strategy, and optimization of paid media creatives across platforms (Facebook, Instagram). You’ll collaborate closely with designers, video editors, copywriters, and media buyers to turn insights into creative concepts that drive customer acquisition, loyalty, and brand affinity. This role requires both […]
  • Are you the type of person who immediately checks out a new product after seeing your favorite influencer showcase it on TikTok, Snapchat, or YouTube? Do you love diving into the world of social media advertising, particularly on Instagram? Are you someone who thrives in a role that blends creativity with data‑driven decision‑making? If so, […]
  • At UnitedHealthcare, we’re simplifying the health care experience, creating healthier communities and removing barriers to quality care. The work you do here impacts the lives of millions of people for the better. Come build the health care system of tomorrow, making it more responsive, affordable and optimized. Ready to make a difference? Join us to […]

Other roles you may be interested in

Digital Marketing Manager 10x Health System (Scottsdale, AZ)

  • Salary: $110,000 – $120,000
  • Measure and report on the performance of all digital marketing campaigns against goals (ROI and KPIs).
  • Document and streamline digital marketing processes to scale the team and improve operations.

Paid Ads/Growth Manager, Robert Half (Hybrid, Atlanta Metropolitan Area)

  • Salary: $65,000 – $85,000
  • Manage, optimize, and scale paid campaigns across Google Ads (Search, Display, YouTube) and Meta Ads (Facebook/Instagram).
  • Continuously refine targeting, bidding strategies, and creative to improve CPL, conversion rates, and overall ROAS.

SEO Manager, Clutch (Remote)

  • Salary: $60,000 – $75,000
  • Execute day-to-day SEO tactics across multiple client accounts, ensuring alignment with predefined campaign objectives.
  • Implement optimization strategies, including technical SEO audits and recommendations.

Marketing Manager – SEO & GEO, Care.com (Hybrid, Austin Texas)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Digital Marketplace Manager, Venchi (Hybrid, New York, NY)

  • Salary: $120,000 – $130,000
  • Define and execute channel-specific and cross-marketplace strategies, balancing brand positioning, commercial performance, and operational efficiency.
  • Manage Amazon advertising across Sponsored Products, Brands, and Display campaigns.

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 -$110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyzing advertising performance data with related ROAS & TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Note: We update this post weekly. So make sure to bookmark this page and check back.

Google launches Ads DevCast Vodcast for developers

As AI agents reshape how advertising platforms are used, Google is turning its focus to the developers behind those systems, creating content specifically for them.

What’s happening. Google’s Advertising and Measurement Developer Relations team has launched Ads DevCast, a bi-weekly vodcast and podcast hosted by Cory Liseno. The show focuses on technical deep dives across Google Ads, Google Analytics, Display & Video 360 and related tools.

Zoom out. This is a companion to Ads Decoded, hosted by Google Ads Liaison Ginny Marvin, which focuses on campaign strategy. Ads DevCast is explicitly built for developers and technical practitioners.

Driving the news. Episode 1 — “MCPs, Agents, and Ads. Oh My!” — centers on what Google calls the “agentic shift,” where AI agents are becoming primary users of advertising APIs.

Why we care. Ads DevCast gives developers a direct line to the engineers building Google’s ad tools, which should help you stay ahead of technical changes, discover new capabilities faster, and build more efficient integrations in an increasingly AI-driven ecosystem.

The big picture. AI is expanding who can work with ad tech systems. Google is seeing a shift from a narrow “Ads Developer Community” to a broader “Ads Technical Community,” where marketers can execute technical tasks without full development cycles.

What’s next. Ads DevCast is a pilot, and Google is collecting feedback to shape future episodes.

Bottom line. Google is positioning Ads DevCast as a tool to give developers a front-row seat to Google’s latest ads innovations, with practical insights to build, test, and adapt faster in an AI-first landscape.

Google tightens rules on out-of-stock product pages

Google Shopping Ads - Google Ads

A new Google Merchant Center update changes how e-commerce sites must handle out-of-stock products, with direct implications for product approvals and ad performance.

What’s happening. Google now requires that out-of-stock products still display a buy button, but it can no longer be active or hidden. Instead, the button must be visibly disabled and appear grayed out. In other words, users should be able to see the button, but not click it.

This marks a clear shift from common practices where retailers either left the “Add to Cart” button clickable or removed it entirely. Both approaches are now non-compliant.

How it works. In practical terms, the requirement is simple. The buy button must remain on the page, but its functionality needs to be turned off. Typically, this is done by applying a disabled state so the button becomes unclickable and visually subdued.
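As a rough sketch, a product template could render the compliant state like this. The markup and class names here are illustrative, not a Google specification:

```python
def buy_button_html(in_stock: bool) -> str:
    """Render an 'Add to Cart' button; when out of stock, the button
    stays visible but is disabled and visually subdued (class names
    are illustrative, not a Google requirement)."""
    if in_stock:
        return '<button class="buy">Add to Cart</button>'
    # Still rendered, but unclickable; a CSS rule would gray it out
    return ('<button class="buy buy--disabled" disabled '
            'aria-disabled="true">Add to Cart</button>')

print(buy_button_html(False))
```

The key design point: availability toggles the button's state, never its presence in the page.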

The catch. The button change is only part of the update. Google also expects clear availability messaging on the product page, such as “in stock,” “out of stock,” “pre-order,” or “back order.” This information must match exactly with what is submitted in the product feed.

Any inconsistency between the page and the feed can lead to disapprovals.
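A simple pre-flight check can catch these mismatches before Google does. This sketch assumes a feed export and scraped page statuses with hypothetical field names and status values, not Google's actual feed schema:

```python
# Illustrative availability vocabulary; align with your actual feed values.
VALID_STATUSES = {"in stock", "out of stock", "preorder", "backorder"}

def find_mismatches(feed_items, page_statuses):
    """Return IDs of products whose landing-page availability differs
    from the submitted feed, or whose feed value is unrecognized."""
    mismatches = []
    for item in feed_items:
        feed = item["availability"].strip().lower()
        page = page_statuses.get(item["id"], "").strip().lower()
        if feed not in VALID_STATUSES or page != feed:
            mismatches.append(item["id"])
    return mismatches

feed = [
    {"id": "sku-1", "availability": "in stock"},
    {"id": "sku-2", "availability": "out of stock"},
]
pages = {"sku-1": "in stock", "sku-2": "backorder"}  # page disagrees with feed
print(find_mismatches(feed, pages))  # ['sku-2']
```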

The bigger shift. This update removes a long-standing workaround used by many retailers. Previously, it was possible to keep selling out-of-stock products by leaving the purchase button active. That approach is no longer allowed.

If a retailer still wants to accept orders for unavailable items, the product must now be labeled as “back order.” This status needs to be reflected consistently across both the landing page and the feed.

Bottom line. What looks like a small UI requirement is actually a meaningful policy change. Retailers will need to review how they manage out-of-stock products and ensure their pages and feeds are fully aligned to avoid disruptions.

First seen. This update was spotted by a Google Shopping specialist, who shared a how-to video on LinkedIn.

Dig deeper. About landing page requirements

Google Business Profile tests AI-generated replies to reviews

Google AI reviews

Google is testing AI-generated review replies in Google Business Profile.

Why we care. Responding to reviews can impact conversions and trust. But generic AI replies could be risky and erode trust, especially on negative reviews where authenticity matters most. Response quality matters more than whether a business replies to reviews.

What it looks like. Here’s a screenshot:

The details. Google appears to be rolling out a limited test of Reply to reviews with AI inside Google Business Profile.

  • The feature generates suggested responses to customer reviews.
  • Users can review, edit, and manually submit replies.
  • Availability is inconsistent across accounts and reviews.
  • The feature has been spotted in the U.S., Brazil, and India, but not widely in Europe.

Early behavior. Some users report prompts focused on older, unanswered negative reviews.

  • In at least one test, users could trigger AI responses in bulk.
  • There are conflicting reports on automation — some users say bulk responses still require review; others report fully automated replies can be published without edits.

First seen. The feature was first shared on LinkedIn by Chandan Mishra, a freelance local SEO specialist, and amplified by Darren Shaw, founder of Whitespark.

Google confirms AI headline rewrites test in Search results

Google rewriting titles

Google is testing AI-generated headline rewrites in Search results, describing it as a small, narrow experiment for now.

What’s happening. Google confirmed to The Verge (subscription required) that it’s testing AI-generated titles in traditional Search results, not just Discover.

  • The test is “small” and “narrow,” and not approved for broader rollout.
  • It impacts news sites but isn’t limited to them.
  • The goal is to better match titles to queries and improve engagement, Google said.

One example showed Google replacing original headlines with shorter or reworded versions, sometimes changing tone or intent (e.g., reducing “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to “‘Cheat on everything’ AI tool.”).

Why we care. Google Search is already sending fewer clicks. Now you also have to contend with Google generating entirely new headlines with AI, risking changes to meaning, brand voice, and click-through rates.

Dig deeper. Google changed 76% of title tags in Q1 2025 – Here’s what that means

What they’re saying. Sean Hollister, senior editor at The Verge, wrote:

  • “This is like a bookstore ripping the covers off the books it puts on display and changing their titles. We spend a lot of time trying to write headlines that are true, interesting, fun, and worthy of your attention without resorting to clickbait, but Google seems to believe we don’t have an inherent right to market our own work that way.”

Title links. According to the Google Search Central section on title links, originally published in 2021:

Google’s generation of title links on the Google Search results page is completely automated and takes into account both the content of a page and references to it that appear on the web. The goal of the title link is to best represent and describe each result.

Google said it uses these sources to “automatically determine title links”:

  • Content in <title> elements
  • Main visual title shown on the page
  • Heading elements, such as <h1> elements
  • Content in og:title meta tags
  • Other content that’s large and prominent through the use of style treatments
  • Other text contained in the page
  • Anchor text on the page
  • Text within links that point to the page
  • WebSite structured data

What to watch. Google called this one of many routine experiments, but that’s no guarantee it stays small. The Verge noted a similar “experiment” in Discover later became a full feature.

  • Any future launch may not rely on generative AI, but Google didn’t explain how that would work.

Reaction. After seeing this news, Louisa Frahm, SEO director at ESPN, wrote on LinkedIn:

  • “After 10+ years in news SEO, I’ve come to find that a headline is the most prominent element for attracting readers in timely windows, to provide a targeted synopsis that elevates your brand voice. If that vision gets altered and facts are misrepresented, long-term audience trust will be compromised.”

Could AI eventually make SEO obsolete?

AI won’t make SEO obsolete, but it’ll change how the work gets done. There’s a growing concern that as AI systems improve, they’ll replace the need for human SEO analysis entirely. Early experiments suggest otherwise.

While AI can assist with technical tasks and even generate usable outputs, it still depends heavily on detailed human input, structured data, and technical oversight to produce meaningful results.

The real shift is toward redistribution. AI is accelerating parts of the workflow, raising the bar for execution, and changing where human expertise matters most.

Why AI hasn’t made SEO obsolete

AI aims to reduce the need for semi-technical expertise. Where data is highly structured (e.g., coding a Python script), it has an advantage.

Even then, human expertise is still required. AI can generate scripts, but without detailed instructions and debugging, the output is often unusable.

Generative AI can produce working functions with strong prompts, but it still “thinks” like a machine. That’s why technical practitioners are best positioned to get the most from it.

Technical knowledge is also required for AI-assisted SEO tasks like generating product descriptions or alt text at scale. Even with tools like OpenAI’s API, you still need to transform and structure data into rich, usable prompts — for example, turning Product Information Management data into prompt-ready inputs.

AI depends on human instructions, and output quality reflects input quality. Thinking in structured terms — IDs, classes, and distinct entities — is key to getting reliable results. It’s what makes the output usable.
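To illustrate that structured-input idea, here is a minimal sketch that flattens a PIM-style product record into a prompt-ready block. The field names and task wording are hypothetical:

```python
def product_to_prompt(product: dict) -> str:
    """Turn a PIM-style record into a structured, prompt-ready block
    (field names are illustrative; adapt to your own PIM export)."""
    lines = [
        f"Product: {product['name']}",
        f"Category: {product['category']}",
    ]
    # Sort attributes so prompts stay deterministic across runs
    lines += [f"- {k}: {v}" for k, v in sorted(product["attributes"].items())]
    lines.append("Task: write a 50-word description using only the facts above.")
    return "\n".join(lines)

record = {
    "name": "Trail Runner 2",
    "category": "Running shoes",
    "attributes": {"weight": "240 g", "drop": "6 mm"},
}
print(product_to_prompt(record))
```

The constraint in the final line ("using only the facts above") is the structured-thinking part: it narrows the model to the entities you supplied.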

That makes prompt creation a critical skill. Employers should factor in technical expertise when using AI to drive efficiency.

However, don’t celebrate too soon.

As AI evolves and absorbs more information, this advantage may be temporary. For now, AI still depends on human expertise to function — which is why SEO isn’t obsolete.

Where AI struggles without human input

Data is both AI’s strength and weakness.

Early generative AI models relied on curated data within their LLMs. OpenAI’s models couldn’t perform web searches up to and including GPT-4. After GPT-4, AI systems began relying less on internal data and more on web searches for fresh information.

Because the web isn’t curated and contains a lot of misinformation, this initially represented a step backward for most AI tools, including ChatGPT and Gemini. This shift also mirrors how traditional algorithms rely on raw information.

This raises a key question: Is more information always better for AI?

The open web contains both empirical data and subjective opinion, and AI often can’t distinguish between the two. Giving it access to uncurated data has arguably caused more errors and issues in its outputs.

Finding the right balance of data remains a challenge. How much data helps or harms performance, and how much curation is needed? While developers continue refining LLMs and connected systems, users still need to load up prompts with as much detail as possible to offset how AI sources and evaluates information.

These limitations highlight a core issue: without structured input and human judgment, AI struggles to produce reliable SEO insights.

Dig deeper: 6 guiding principles to leverage AI for SEO content production

Why full SEO automation is harder than it sounds

Basic AI tools can assist with SEO tasks, but full automation is far more complex than it sounds.

That said, AI platforms and technologies are evolving rapidly. The first wave of this evolution came as organizations began producing AI agent platforms like Make, N8N, and MindStudio.

These platforms provide a canvas for automating workflows, combining inputs, outputs, and AI-driven decision-making. Used well, they can turn from-scratch content creation into structured editorial processes, with efficiency gains that can be significant.

However, applying this to real-world SEO work is where complexity sets in. A full technical SEO audit pulls from multiple data sources and environments — crawl data, browser-level diagnostics, and desktop tools. 

While parts can be automated, stitching everything together into a reliable, end-to-end workflow is difficult and often requires custom infrastructure, API work, and ongoing maintenance.

Even with platforms like N8N, full end-to-end automation of complex SEO tasks remains challenging. Simpler, checklist-style audits can be automated, but deeper, more technical work often needs to be simplified to fit automation — which isn’t advisable.

In practice, fully automating SEO at depth requires tradeoffs — which is why human expertise is still critical.

Dig deeper: AI agents in SEO: A practical workflow walkthrough

Get the newsletter search marketers rely on.


AI tools are advancing — but not replacing SEOs

More recently, there’s been a wave of local AI applications that let you create your own “brain” on a laptop or desktop. These tools are often code editors with support for popular AI models, along with local structures for saving reusable skills, similar to Claude Projects or ChatGPT Custom GPTs.

Tools like Cursor and Claude Code allow you to connect models, generate code, and automate parts of workflows through prompts.

It’s possible to use these technologies to vibecode a system that automates a technical SEO audit. I attempted this. While the capability exists, building a system that matches the depth and quality of a manual audit could take months, especially when handling large volumes of data.

Initial issues included memory limitations, where AI struggled to retain both the data and its detailed instructions. In some cases, outputs were also misweighted — for example, flagging missing H1s as critical despite finding no instances.

These issues could be resolved over time, but they highlight that these tools aren’t automatic shortcuts. Making effective use of them still requires technical expertise, time, testing, and troubleshooting.

They lower the barrier to building AI-driven systems, but they don’t eliminate the need for technical expertise. They simply shift the work.

What would need to change for SEO to become obsolete

For SEO to become obsolete, AI would need to operate independently, reliably, and at scale — without human correction. Generative AI can only act with human input, and it struggles to differentiate between fact and fiction.

Some algorithms have reached their limits in terms of commercial viability. This is arguably why Google is trying to convince us that links are redundant before they truly are.

Consider AI as an evolution of algorithmic output. These systems can attempt to make analytical determinations based on input data. However, the idea that feeding AI more and more data is an unrestricted path to success is already running into significant limitations.

This doesn’t mean technical analysts are entirely safe. Humanity’s ambition for faster, more efficient insights will continue. Initially, AI will be seen as the solution to everything. If one AI falls short, another can critique its results.

However, AI requires significant processing power. The real challenge will be finding the balance between AI and simpler algorithms. Algorithms should handle basic tasks, while AI should be used for analysis and insights.

This balance between AI and algorithmic efficiency is still years — perhaps decades — away. Only then will AI truly test SEO professionals and create the potential for redundancies.

AI’s learning is hindered by the web’s misinformation, providing SEO professionals with temporary insulation. This advantage won’t last forever, but it offers a valuable head start.

Dig deeper: How AI will affect the future of search

AI adoption won’t make SEO obsolete overnight

There are also limitations tied to how society adopts AI. Many technological innovations — like the internet and the calculator — were initially considered “cheating.”

Calculators were banned from exam rooms, and the internet was seen as a shortcut compared to traditional research. Yet those perceptions didn’t last.

Most technologies, despite rapid advancement, aren’t adopted quickly due to cost and social factors. We value human perspective and often resist tools that threaten how we think or work.

The main barrier to AI replacing us is how we perceive it. As long as it’s seen as a threat to our ability to provide, it won’t fully replace human roles. That perception, however, will change over time.

As these technologies become normalized, adoption will follow. Governments will adapt, and expectations around human creativity will continue to evolve.

Algorithms and Google didn’t end human interaction on the web, and AI won’t eliminate contributions from people. In the medium to long term, adaptation is inevitable.

SEO and AI: Technical expertise still matters

  • AI integration with SEO: Contrary to fears, AI won’t make SEO obsolete. Instead, it will reshape how SEO is practiced. AI can automate routine tasks like generating product descriptions and alt text, but its effectiveness still depends on precise, technically sound input.
  • Importance of technical expertise: The ability to craft detailed, technically sound prompts is becoming more valuable. This ensures AI tools are used effectively and reinforces the role of experienced SEO professionals.
  • Data sensitivity in AI performance: AI performance varies significantly depending on the data it processes. Systems using curated datasets behave differently from those relying on open web data. This highlights the importance of data strategy and structured oversight.
  • Evolving roles in SEO: As AI advances, SEO roles are shifting. Professionals are more likely to focus on managing AI systems and refining outputs rather than being replaced by them.
  • Societal acceptance and adaptation: Widespread adoption of AI in SEO depends on how quickly society embraces these tools. As normalization and regulation evolve, so will the role of SEO professionals.
  • Future outlook: Despite AI’s capabilities, the creative, strategic, and complex aspects of SEO still require human insight. The future of SEO is a collaboration between human expertise and machine efficiency.

Dig deeper: How to start an SEO program from scratch in the AI age

Cloudflare CEO: Bots could overtake human web usage by 2027

AI vs human internet traffic

AI bots could outnumber humans on the web by 2027, according to Cloudflare CEO Matthew Prince, as agent-driven browsing explodes alongside generative AI adoption.

  • Prince made the prediction at SXSW, warning that bots are already reshaping how the internet is used — and how it’s monetized.

Why we care. Search is shifting from human clicks to AI-generated answers. If bots become the web’s primary “users,” you’ll need to reshape your strategy to ensure AI systems can access, trust, and use your content.

The details. Prince said AI agents generate far more web activity than humans because they gather information differently. A person shopping might visit five sites. An AI agent could hit thousands.

  • “If a human were doing a task… you might go to five websites. Your agent… will often go to a thousand times the number of sites.”
  • “So it might go to 5,000 sites. And that’s real traffic, and that’s real load.”

He also noted the web’s baseline is shifting fast.

  • “For a long time, the internet was about 20% bot traffic.”
  • “We suspect that in 2027 the amount of bot traffic online will exceed the amount of human traffic.”

Prince said this growth isn’t spiking like COVID-era traffic. It’s rising steadily with no end in sight.

Between the lines. Prince compared AI to past shifts like mobile and social. The difference: users may no longer visit websites directly. Instead, they rely on AI interfaces that aggregate and answer.

  • “The business model of the internet was… create content, drive traffic, and then sell things… That was the business model.”
  • “That breaks down because… bots don’t click on ads.”
  • “Customers are trusting the output from the helpful robot. They’re not clicking through the footnotes.”

AI sandboxes. AI agents also change how computing works behind the scenes. Prince described a future where “sandboxes” — temporary environments for AI agents — spin up and shut down instantly, potentially millions of times per second.

  • “You can… as easily as you open a new tab in your browser… spin up new code which can then run and service the agents.”
  • “We think that there will be literally millions of times a second these sort of sandboxes… being created… and then torn back down.”

The result: sustained pressure on internet infrastructure.

  • “We’re seeing internet traffic grow and grow and grow. And we don’t see anything that’s going to slow it down or stop it.”

The business impact. Companies are already split on how to respond to AI agents. Prince pointed to diverging strategies across major retailers.

  • “There are three radically different strategies about how they are going to interact with the bots.”

At the core is a bigger risk: losing the customer relationship.

  • “The nature of bots is going to be that it disintermediates the relationship between you and your customer.”
  • “Agents… don’t care about brand.”

For publishers. Prince argued AI could both hurt and help media. While AI reduces direct traffic and breaks ad-based models, AI companies need unique, original data — especially local and hard-to-replicate information — and may pay for it.

  • “Traffic has always been a really bad proxy for value.”
  • “What they actually want is… unique local interesting information they can’t get elsewhere.”

He pointed to local media as an example.

  • “If you don’t have the Park Record, then you don’t get that information.”
  • “We may make more off licensing our content to AI companies than we do off digital advertising.”

For small businesses. Prince was more blunt. AI agents optimize for price, quality and efficiency — not brand loyalty or proximity.

  • “My bot doesn’t care.”
  • “My bot is going to figure out actually who is the best… and route that traffic.”

That could erode traditional advantages.

  • “The shortcuts of trust that small business had in the past… are going to be much more difficult.”
  • “The natural tendency of AI is towards that level of aggregation.”

What to watch. The next phase of the web will hinge on control and compensation. Prince said:

  • “There has to be some exchange of value.”
  • “We’ve got to figure out… what’s going to pay for it.”

Prince said the core question is still unresolved:

  • “What is the future business model of the internet?… I don’t know what it’s going to be, but it’s going to change.”

The SXSW interview. The Internet After Search

SEO’s new battleground: Winning the consensus layer

You could be ranking in Position 1 and still be completely invisible.

I know that sounds counterintuitive. But here’s what’s actually happening:

A potential customer opens ChatGPT or Perplexity and asks, “What’s the best [tool/agency/platform] for [your category]?” Your competitor gets mentioned. You don’t. Your No. 1 ranking did absolutely nothing to help you.

This is the new SEO reality, and it’s catching many smart marketers off guard.

LLMs synthesize consensus across multiple sources, rather than relying on a single source. This means you need corroborating mentions distributed across the web. The game has shifted from ranking to consensus, and if you don’t understand that difference, you’re already losing ground.

Let me break down what’s actually happening and, more importantly, what you can do about it.

From rankings to consensus: What changed and why

Traditional SEO had a clear logic: rank high, get clicks, drive traffic. In this retrieval-based system, Google found pages and users chose which ones to visit.

AI-driven search doesn’t work that way. Systems like Google’s AI Overviews, ChatGPT, and Perplexity are now constructing answers. They pull from dozens of sources, identify which claims appear consistently across credible publishers, and synthesize a single response. 

The data backs up just how significant this shift is: organic CTRs for queries featuring AI Overviews have dropped 61% since mid-2024. Even on queries without AI Overviews, organic CTRs fell 41%. Users are simply clicking less, everywhere.

The technical engine behind this is retrieval-augmented generation (RAG). The AI retrieves content from across the web, gathers potentially dozens of sources, identifies the claims that repeat most consistently across credible publishers, and generates a response based on that consensus.
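The corroboration step can be pictured as a simple frequency filter. This toy sketch (not any vendor's actual RAG pipeline) keeps only claims repeated across enough independent sources:

```python
from collections import Counter

def consensus_claims(sources, min_sources=3):
    """Keep only claims corroborated by at least `min_sources`
    independent sources -- a toy model of the consensus filter."""
    counts = Counter()
    for src in sources:
        # A set per source: one publisher repeating itself
        # doesn't manufacture consensus.
        counts.update(set(src["claims"]))
    return {claim for claim, n in counts.items() if n >= min_sources}

sources = [
    {"claims": ["Acme is a CRM", "Acme is affordable"]},
    {"claims": ["Acme is a CRM"]},
    {"claims": ["Acme is a CRM", "Acme is affordable"]},
]
print(consensus_claims(sources))  # {'Acme is a CRM'}
```

Note how "Acme is affordable" gets filtered out: two mentions aren't enough at the default threshold, which is exactly why isolated claims struggle to surface in AI answers.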

Your goal isn’t just to publish a great page. It’s to be one of those sources. Repeatedly.

What the consensus layer actually is

Think of the consensus layer as the degree to which multiple AI systems produce consistent, repeatable outputs about your brand. It’s about pattern recognition at scale.

When AI systems encounter your brand described the same way across multiple credible sources, in the same category, with the same expertise, and with the same problems you solve, they build confidence. When they don’t see that pattern? You become a statistical outlier, and outliers get filtered out.

This happens because AI systems are engineered to prevent hallucinations. Their primary defense is corroboration: if multiple independent sources say the same thing, the AI assigns higher confidence to that claim. If only one source says it, the AI can become cautious or ignore it entirely.

This creates a rule most marketers haven’t fully internalized yet: isolated authority isn’t enough. You need distributed credibility.

I’ve seen this firsthand. A client ranking first for a competitive keyword, with solid traffic and strong domain authority, was invisible across ChatGPT. Why? Because that page existed in isolation. No corroboration, no distributed mentions, no external validation. 

As Will Scott wrote: “Brands aren’t losing visibility because they dropped from position three to seven. They’re losing it because they were never cited in the AI answer at all.”

Dig deeper: The infinite tail: When search demand moves beyond keywords

The signals that actually build consensus

So what signals do AI systems actually use? Here’s where to focus your energy.

Traditional authority is table stakes, not a finish line

Backlinks, domain authority, and topical depth remain foundational. But they’re no longer sufficient on their own. They get you in the game; consensus is what wins it.

Unlinked brand mentions matter more than most marketers realize

AI systems scan the web for brand references, even when those mentions aren’t linked. Unlinked mentions are growing in importance as signals for both traditional search and AI visibility. A mention in an industry publication with no link is still a consensus signal.

Nearly 9 out of 10 webpages cited by ChatGPT appear outside the top 20 organic results for the same queries, per a Semrush study. This tells you everything you need to know about how different this game is.

Publisher diversity signals broader credibility

Being mentioned repeatedly on the same domain doesn’t build consensus. Being mentioned across a range of credible, independent publishers does.

Diversity tells AI systems your authority isn’t contained to one corner of the web. It’s recognized broadly across your industry.

Community platforms are consensus gold

Reddit, Quora, and niche forums are becoming major consensus signals. AI systems increasingly pull from community discussions because they represent real user opinions and experiences. 

With Reddit dominating the SERPs, positive brand mentions in relevant subreddits contribute meaningfully to how AI systems perceive you. You can’t fake your way into genuine community trust; you have to earn it.

Entity clarity makes retrieval easier

Search engines use knowledge graphs to understand entities and how they relate to each other. If your brand is inconsistently described across platforms or your category is ambiguous, AI systems struggle to incorporate you into their answers. 

Structured data, schema markup, and JSON-LD are critical here. Google has explicitly stated that “structured data is critical for modern search engines.” The clearer your entity profile, the easier it is for AI to retrieve and cite you.
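A minimal entity declaration could look like this JSON-LD Organization object, built here in Python (the brand details are placeholders; the vocabulary is schema.org's Organization type):

```python
import json

# Placeholder brand details using schema.org Organization vocabulary.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "description": "Example Co makes project-management software.",
    "sameAs": [
        # Consistent profiles elsewhere reinforce the entity
        "https://www.linkedin.com/company/example-co",
        "https://twitter.com/exampleco",
    ],
}
print(json.dumps(org, indent=2))
```

The output belongs inside a `<script type="application/ld+json">` tag on your site; the `sameAs` links are what tie your scattered profiles into one unambiguous entity.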

How to actually build consensus

Alright, let’s get tactical. Before you start building, you need to know where you stand.

Start with an LLM audit

Open ChatGPT, Perplexity, Gemini, and Google AI Overviews, and start asking questions the way your customers would. 

  • “What’s the best [tool/service] for [problem you solve]?” 
  • “Who are the leading [your category] providers?” 
  • “What do people say about [your brand name]?”

Pay attention to three things: 

  • Is your brand mentioned at all? 
  • If it is, is the information accurate and up to date? 
  • How are you being described relative to competitors? 

You may find outdated information, missing context, or, worse, a competitor owning the narrative in your category entirely.

This audit becomes your baseline. It tells you what gaps to close, what misinformation to correct, and where your consensus footprint is weakest. Only once you know that should you start building.

Establish your owned media foundation

Your site needs to be technically sound and semantically clear. Use structured data. Establish explicit entity definitions: who you are, what you do, and what problems you solve. Reinforce those same entities and relationships across multiple pages within your site. 

Topic clusters (pillar pages supported by related subtopic content) create semantic reinforcement that signals depth and expertise. Without a strong foundation, nothing else sticks.

Treat earned media as consensus amplification

Press coverage, guest posts, podcast appearances, and expert citations distribute your authority across the web. More than links, digital PR is now about narrative control. 

One placement won’t move the needle. A sustained, coordinated presence across trusted publications will. Monitor your brand-to-links ratio; pursuing unlinked mentions alongside traditional link building is now the balanced strategy.

Publish original research

This is the highest-leverage consensus tactic most brands are underinvesting in. When you create genuinely novel data (an industry benchmark, a proprietary survey, original research), other publishers reference it naturally, journalists cite it, and AI systems incorporate it into answers. Establish yourself as the source for benchmark data in your niche, and you’ll earn citations for years.

Invest in expert-led content

AI systems are trained on vast amounts of text, including articles, research, and interviews. When your team members are consistently positioned as recognized experts, quoted in articles, cited in reports, and contributing bylined pieces, they become recognized entities that AI systems trust. Optimize author profiles with structured data, consistent bylines, and entity markup to reinforce this.

Participate genuinely in communities

This doesn’t mean dropping links in Reddit threads. It means answering questions, contributing knowledge, and building a reputation where your audience already hangs out. 

When users recommend your brand organically because they find it genuinely valuable, that’s your strongest consensus signal.

Dig deeper: Why surface-level SEO tactics won’t build lasting AI search visibility

Measuring what actually matters now

Traditional rankings tell you where you stand in search results. They don’t tell you whether AI systems are citing you. You need new metrics, and as more SEOs are recognizing, success metrics are shifting from clicks and traffic to visibility and share of voice.

Start by systematically testing high-value queries across Google AI Overviews, ChatGPT, Perplexity, and Gemini. Note when your brand appears, how it’s described, and which sources get cited alongside you. 

Track share of voice in AI responses, how often your brand gets mentioned relative to competitors in AI-generated answers. If competitors are consistently appearing and you’re not, you’re losing the consensus battle regardless of how your rankings look.

Also monitor cross-domain mention density (how many unique domains reference your brand) and entity co-occurrence (how often your brand appears alongside relevant topics, competitors, and concepts). These give you a real picture of your consensus footprint and where the gaps are.
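Both metrics can be computed from a simple log of AI answers and their cited sources. A toy sketch under that assumption, with made-up data (in practice you would log real answers per query):

```python
from urllib.parse import urlparse

# Toy corpus: each record holds an AI answer's text and its cited URLs.
# All brands, text, and URLs here are illustrative.
answers = [
    {"text": "Acme and Globex both offer CRM tools for pipeline tracking.",
     "sources": ["https://reviews.example/acme", "https://blog.example/crm"]},
    {"text": "For pipeline tracking, Acme is frequently recommended.",
     "sources": ["https://news.example/tools"]},
]

brand = "Acme"

# Cross-domain mention density: unique citing domains across answers
# that mention the brand.
domains = {urlparse(u).netloc
           for a in answers if brand in a["text"]
           for u in a["sources"]}

# Entity co-occurrence: how often the brand appears alongside a tracked topic.
cooccur = sum(1 for a in answers
              if brand in a["text"] and "pipeline tracking" in a["text"])

print(len(domains), cooccur)
```

Run the same computation monthly and the trend line, not any single number, tells you whether your consensus footprint is growing.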

The new SEO playbook

The brands winning in AI-driven search aren’t necessarily the ones with the best content or the highest domain authority. They’re the ones building distributed credibility, authority that appears consistently across owned media, earned media, and community platforms.

As Google’s Danny Sullivan said, “Good SEO is good GEO.” The fundamentals haven’t disappeared, but they’re now table stakes, not differentiators. The new formula is: authority + consensus + distribution.

Integrate SEO, digital PR, and community engagement into one cohesive strategy. Build a distributed network of authority, mentions, citations, and community validation. It takes time to construct, and it is nearly impossible for competitors to dismantle overnight.

That’s the visibility moat worth building, and the clock is ticking.

Dig deeper: Content alone isn’t enough: Why SEO now requires distribution

Adobe to shut down Marketo Engage SEO tool


Adobe will shut down the SEO feature in Marketo Engage at the end of March 2026, according to its February 2026 release notes.

The tool will be deprecated on March 31, and you must export any existing SEO data before then. (This page includes links to the export instructions.) The SEO tile will be removed from the platform on April 1.

What happened?

Adobe’s Keith Gluck said deprecating low-use features lets the Marketo Engage team focus on other areas of the platform. For your SEO needs, Adobe announced in 2025 that it was acquiring Semrush, a full-featured SEO and visibility tool. (Reminder: Semrush owns Third Door Media, the publisher of Search Engine Land.)

The deprecation came as no surprise if you follow Marketo news closely. Reports suggest few people fully configured the SEO tool, and its features didn’t seem to be a priority for the Marketo Engage product team in recent years.

With LLMs rapidly changing the search landscape, it was time to say goodbye. The arrival of Semrush into the Adobe family provided the perfect opportunity.

Why your law firm’s best leads don’t convert after research


If your law firm’s referrals aren’t converting, validation may be the problem.

Referred prospects don’t go straight from recommendation to contact. They research, compare, and verify what they were told — on your website, in search results, and through AI tools.

These are your highest-value leads — pre-sold through trusted recommendations and expected to be your easiest conversions. But when that validation falls short, even they lose momentum. 

This is the referral validation gap: the moments during online research when trust is broken rather than built. Here’s where referral validation fails and how to fix it.

While this article focuses on law firms, the same dynamics apply to any referral-based business.

The four types of referral validation failure

Referral loss follows predictable patterns — and once you can spot them, you can fix them.

  • Credibility gaps: When your digital presence doesn’t match the expectations set by the referral.
  • Specificity gaps: When your content doesn’t reflect the specific problem the prospect was referred for.
  • Authority gaps: When third-party or AI validation fails to confirm your expertise.
  • Friction gaps: When prospects are ready to act but encounter unnecessary barriers to conversion.

1. Credibility gaps

In under three seconds, a website visitor forms a first impression. If your site doesn’t immediately validate what the referrer said about you — if it looks outdated, generic, or fails to showcase the specific expertise they praised — that trust becomes conditional.

A referred prospect arrives expecting professionalism, confidence, and authority, only to encounter uncertainty. Thin attorney bios, generic claims (“experienced,” “trusted,” “results-driven”) without proof, or outdated design can all create hesitation.

The referral earned you consideration. Your digital presence determines what happens next.

The prospect’s reaction is simple: This doesn’t look like what I was expecting. That moment of doubt is often enough to end the process.

What you can do about it

Implement practice area-specific landing pages with targeted H1s, schema markup for your specialties, and prominent visual trust signals (credentials, case results, awards) above the fold. Ensure mobile page speed stays under two seconds with Core Web Vitals optimization.

2. Specificity gaps 

Referrals are almost always problem-specific. The website they’re referred to rarely is.

Imagine a prospect referred for a complex custody dispute lands on a homepage about “family law.” A business owner referred for a ground lease negotiation sees “commercial real estate services.”

Nothing is technically wrong. But nothing confirms the recommendation. When a site fails to mirror the exact issue that prompted the referral, the prospect starts to question it: Does this firm actually specialize in my problem, or was the referral overstated?

At the same time, prospects are actively looking for proof — case results, credentials, relevant experience. If that evidence is buried, disconnected, or requires more than two clicks to find, momentum drops quickly.

What you can do about it

Create practice area-specific case study pages with structured data markup. Implement FAQ schema tied to common referral scenarios. Ensure content directly reflects the search intent behind the referral, and use internal linking to guide visitors from homepage → specific expertise → proof points within two clicks.

3. Authority gaps

Referral prospects are asking questions like: “Is this firm actually good at complex custody cases?” or “Do they have experience with ground lease negotiations in New York?” — increasingly through AI search tools.

If AI tools can’t find credible, structured information on your site to validate the referral, they won’t confirm it. And if competitors provide clearer answers, those are the sources AI will surface. This creates an immediate form of negative validation. The prospect starts to question the recommendation: If they’re so good, why aren’t they showing up here?

If a competitor has invested in content that’s structured for citation, the AI will quote them, reference their work, and position them as the authority, even though the prospect came to you through a trusted referral. You can’t claim authority. AI systems will either confirm or contradict it.

What to do about it

Forward-thinking firms are now monitoring a new metric: AI search share of voice — the percentage of relevant AI-generated answers that mention or cite your firm compared to competitors. Start by:

  • Identifying the 10-15 questions prospects most commonly ask about your practice areas.
  • Running those queries regularly through ChatGPT, Perplexity, and Google AI Overviews.
  • Documenting which firms appear, how often, and in what context.
  • Tracking whether you’re cited as a source, mentioned, or absent entirely.

If your firm’s content, credentials, and case results aren’t structured for AI parsing and citation, you’re invisible in these crucial validation moments regardless of how strong the initial referral was. Once you’ve identified where your competitors are outperforming you, create in-depth topic clusters around your specialties, and build authoritative content that answers the questions prospects ask AI tools. 
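The share-of-voice metric itself is simple once the query runs are logged. A minimal sketch, where the firm names and query log are made up for illustration:

```python
# Logged results of running validation queries through AI tools.
# For each query, record which firms the generated answer mentioned.
# All firms and queries below are illustrative.
runs = [
    {"query": "best custody lawyer in Austin",
     "mentioned": ["Smith Law", "Jones LLP"]},
    {"query": "ground lease negotiation attorney",
     "mentioned": ["Jones LLP"]},
    {"query": "top family law firms Austin",
     "mentioned": ["Smith Law"]},
]

def share_of_voice(runs: list[dict], firm: str) -> float:
    """Fraction of AI answers that mention the firm at all."""
    hits = sum(1 for r in runs if firm in r["mentioned"])
    return hits / len(runs)

print(f"{share_of_voice(runs, 'Smith Law'):.0%}")
```

Extending the log with a cited/mentioned/absent field per run gives you the tracking described above with the same structure.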

4. Friction gaps

Friction gaps occur after trust has already been established, but conversion still hasn’t happened. Common examples include:

  • No obvious next step above the fold.
  • Forms that are difficult to complete on mobile.
  • No immediate way to call, text, or book.

At this stage, prospects are ready to act. But any delay introduces doubt and gives them time to reconsider or move on. You’ve earned the referral. Your site validated your expertise. The prospect is ready to hire you — but can’t quickly figure out how to take the next step.

This is the final failure point in the referral validation gap: when a motivated, pre-sold prospect abandons because the conversion path is unclear, inconvenient, or unnecessarily complicated. You need to remove every obstacle between “I want to hire this firm” and “I’ve made contact.”

What to do about it

A referred prospect should be able to answer these questions within three seconds of landing on any page:

  • How do I contact this firm right now?
  • What happens when I do?
  • Is this going to be easy or painful?

Test it yourself: open your site on your phone and start a timer. Can you initiate contact within a few seconds without scrolling? Try it from a homepage, attorney bio, and practice area page. If the answer is no, you’re losing prospects at the finish line.

Your roadmap to close the referral validation gap

Closing the referral validation gap doesn’t require a complete digital overhaul on day one. Strategic, phased implementation will allow you to see quick wins while building toward comprehensive optimization. Let’s look at the steps you can take.

Quick wins: Remove immediate friction

These are some changes that require minimal investment but can immediately reduce referral abandonment:

  • Adding a prominent click-to-call button in the mobile header (and ensuring it’s visible without scrolling).
  • Testing form completion on mobile devices and trimming fields to the essentials.
  • Keeping page load speed under two seconds on mobile (test via PageSpeed Insights).
  • Verifying that “Contact Us” is visible on every page without scrolling.
  • Adding a secondary CTA option (for example, many prospects prefer “Schedule Consultation” over “Contact”).
  • Making sure your firm’s phone number is clickable on mobile across the entire site.
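On the clickable phone number: the standard pattern is a `tel:` link. A tiny sketch that normalizes a display number into that markup (the number and CSS class are placeholders):

```python
# Generate a click-to-call link for the mobile header.
# The phone number and class name below are illustrative placeholders.
def click_to_call(number: str, label: str = "Call Now") -> str:
    # Strip formatting so the tel: URI carries only digits and the leading +.
    digits = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    return f'<a href="tel:{digits}" class="cta-call">{label}</a>'

print(click_to_call("+1 (512) 555-0123"))
```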

Medium-term: Build validation infrastructure

These initiatives can require more investment but, over time, can generate a sustainable competitive advantage:

  • Creating dedicated landing pages for each significant practice area.
  • Structuring each page with: a specific H1 tag, a detailed service description, any relevant credentials, relevant case results, an FAQ section, and a clear CTA.
  • Implementing schema markup (e.g., LegalService, Attorney, and FAQPage) on each landing page.
  • Building out an internal linking strategy that guides visitors from homepage → specific expertise → proof points in two clicks maximum.
  • Developing 3-5 detailed case studies per practice area (these can be anonymized where required).
  • Writing blog posts that address the specific questions prospects ask during the research phase.
  • Ensuring all content includes author attribution with credentials to build E-E-A-T signals.
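As a sketch of the landing-page schema item above, here is LegalService markup with attorney details, generated in Python. The firm name, URL, location, and people are all placeholders, and the attorney is modeled as a schema.org Person with an attorney job title:

```python
import json

# LegalService markup for a practice-area landing page.
# Every name, URL, and location below is an illustrative placeholder.
page_schema = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Example Firm - Family Law",
    "url": "https://www.examplefirm.com/family-law",
    "areaServed": "Austin, TX",
    "employee": [
        {"@type": "Person",
         "name": "Jane Doe",
         "jobTitle": "Partner, Family Law Attorney",
         "knowsAbout": ["Child custody", "Divorce"]},
    ],
}

# Embed the output in the landing page inside <script type="application/ld+json">.
print(json.dumps(page_schema, indent=2))
```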

Long-term: Dominate AI search validation

These strategic initiatives can position your firm for sustained advantage in an AI-driven search environment:

  • Creating entity-based content that AI models can parse and cite (e.g., detailed attorney bios, practice area guides, or legal topic explanations).
  • Developing topic clusters: pillar pages for major practice areas with supporting cluster content that addresses related queries.
  • Optimizing content for the natural language queries that prospects ask AI tools.
  • Building citation-worthy resources such as comprehensive guides, state-specific legal explanations, and process walkthroughs.
  • Identifying 15-20 high-value queries prospects use to validate referrals.
  • Monitoring how your firm appears in ChatGPT, Perplexity, and Google AI Overview responses monthly.
  • Tracking competitor mentions and citation patterns.
  • Adjusting content strategy based on AI search visibility gaps.

But, most importantly, don’t let this roadmap overwhelm you. The firms that successfully close the referral validation gap don’t do it by accomplishing everything all at once. Instead, they start with a single, crucial decision: acknowledging that the gap exists. And then they take the first step to fix it.

Once you accept that your best leads are researching you — on your website and through AI tools — and making judgments based on what they find (or don’t find), your path forward for fixing that gap will become clear.

2026 is your firm’s inflection point

Prospects are getting their answers without ever visiting your website. The gap between digital presence and digital authority is widening — and for firms that wait, it becomes unbridgeable.

Closing the referral validation gap isn’t just about improving conversion rates. It means:

  • Capitalizing on your highest-value leads.
  • Reducing customer acquisition costs.
  • Building a compounding advantage.
  • Creating momentum in an AI-driven search environment.

Firms that master this will pull ahead. Those that don’t will watch their best leads slip away — one validation failure at a time.

A referral gets you consideration. Your digital presence determines what happens next. Closing the referral validation gap turns trust into conversion.
