The latest jobs in search marketing

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO jobs

(Provided to Search Engine Land by SEOjobs.com)

  • About Us Would you like to be part of a fast-growing team that believes no one should have to succumb to viral-mediated cancers? Naveris, a commercial stage, precision oncology diagnostics company with facilities in Boston, MA and Durham, NC, is looking for a Digital Marketing Associate team member to help us advance our mission of […]
  • Position Summary: The Digital Marketing Specialist leads the evolution of digital marketing, early adoption and integration of AI-enabled marketing at Scot Forge, shaping how the company attracts, engages and converts customers and candidates in a rapidly evolving digital landscape. This person plays a key role in driving demand generation and is responsible for managing our […]
  • About AllTrails  AllTrails is the world’s most popular and trusted platform for outdoor exploration. We connect people to the outdoors, help them discover new places, and elevate their experiences on the trail. With the most comprehensive collection of trails in the world, AllTrails supports inclusive access to nature for a global community of millions of […]
  • Please note that only applicants from the Philippines will be considered for this role. Job Title: Senior SEO Specialist / SEO Specialist Employment Type: Full Time Position Type: Additional Headcount Primary Location:  Remote Main Purpose of the Role This is a full-time and permanent role for a data-driven SEO professional who will support the Head […]
  • Why Join CIP? Vacation Time – 15 days full time only Paid Holidays – 13 days full time only;Holiday premium pay for part time only Paid Sick Days and Personal Days accrued Medical, Dental and Vision Insurance Voluntary Benefits: Short and Long-term Disability, Additional Life, Child Life and Spousal Life Dependent Care Flexible Spending Account […]
  • Company Description Warren Resort Hotels is a collection of properties across some of the West’s most popular destinations, offering a balance of comfort, quality, and value. Based in Santa Maria, CA, we focus on creating welcoming stays with strong service, well-maintained properties, and thoughtful guest experiences. Each of our hotels has its own character, but […]
  • Job Description Attention: Kapitus is aware that individuals posing as recruiters may be communicating with job seekers about supposed positions with Kapitus. Kapitus has received reports that the content and method of communication can vary, but messages may contain requests for payment (e.g., fees for equipment or training) and/or for sensitive financial information. Kapitus will […]
  • SEO & AI Search Consultant At Botify, we’re redefining SEO consulting by combining cutting-edge technology, a culture of innovation, and the brightest minds in the industry. As an SEO Consultant, you will play a pivotal role in solving our customers’ most complex SEO challenges while delivering measurable business outcomes. You’ll work hand-in-hand with industry-leading brands […]
  • As an SEO Specialist at Corkboard Concepts, you will be responsible for developing and implementing effective search engine optimization (SEO) strategies to increase website traffic and improve organic search rankings. You will identify key SEO growth opportunities and develop plans to capitalize on them. Your role will involve conducting thorough keyword research, optimizing website content […]
  • Take us to the top! If you’re the kind of marketer who loves the thrill of ranking #1 on Google, thrives on building pipelines that overflow with leads, and gets just as excited about data dashboards as you do about killer creative ideas, keep reading. Docutrend is the leading modern office and workforce technology company […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Description Salary: From 40k basic per year + annual bonus About Visca Web Visca Web is a digital marketing company operating in competitive international markets, with a strong focus on performance-driven acquisition across SEO and paid channels. We work in challenging, high-competition verticals (igaming) where creativity, adaptability, and data-driven decision-making are key to success. […]
  • Job Description Local or 100% Remote About Point ✨ Real Impact, Real People: Our mission at Point is to make homeownership more valuable and accessible. Your work directly helps homeowners access their wealth, achieve financial flexibility, and realize life changing goals. ✨ Funding: With over $175M raised from top investors like Andreessen Horowitz, WestCap, Greylock, […]
  • Description: Balance Health, a national leader in podiatry, is seeking a dynamic and analytical Paid Media Manager to orchestrate paid digital marketing. The Paid Media Manager will be responsible for driving new patient volume across paid channels from Google Ads (Search, Pmax, Demand Gen, etc.) to Meta Ads (Facebook, Instagram) to paid marketplaces (ex. ZocDoc) […]
  • Job Description Sircle Media is a social media agency based in New York City that works with some of the best brands in the CPG & Beverage verticals. We focus on strategy, execution, content development and paid social media buying to help our clients win online and in-store. Founded in 2012, Sircle has spent the […]
  • The Paid Search Strategist is a mid-level role within 829’s Paid Advertising department. In this role, you will drive the strategy and development of digital advertising campaigns for various medium-to-large clients. This position is ideal for someone who is proactive, a great communicator, analytical, and willing to go the extra mile to get your work […]

Other roles you may be interested in

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Senior Brand Insights Manager, Derflan Inc (Remote)

  • Salary: $181,400
  • Own and evolve global brand tracking programs across 11+ international markets
  • Lead quarterly brand pulse initiatives across 13+ locales, ensuring rigor, consistency, and actionable insights

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Digital Marketing Manager, 10x Health System (Scottsdale, AZ)

  • Salary: $110,000 – $120,000
  • Measure and report on the performance of all digital marketing campaigns against goals (ROI and KPIs).
  • Document and streamline digital marketing processes to scale the team and improve operations.

Marketing Manager – SEO & GEO, Care.com (Hybrid, Austin, Texas)

  • Salary: $85,000 – $95,000
  • Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
  • AI-Optimized Search (AIO): Define and drive Care.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.

Digital Marketplace Manager, Venchi (Hybrid, New York, NY)

  • Salary: $120,000 – $130,000
  • Define and execute channel-specific and cross-marketplace strategies, balancing brand positioning, commercial performance, and operational efficiency.
  • Manage Amazon advertising across Sponsored Products, Brands, and Display campaigns.

Advertising Media Manager, Vetoquinol USA (Remote)

  • Salary: $100,000 – $110,000
  • Develop and implement strategic advertising plans for Etail (Ecomm/Retail) accounts.
  • Analyzing advertising performance data with related ROAS & TACoS evaluations.

Programmatic Advertising Manager, We Are Stellar (Remote)

  • Salary: $75,000
  • Manage the day-to-day programmatic campaign approach, execution, trafficking optimization, and reporting across the relevant DSPs for your clients.
  • Build and present directly to client stakeholders programmatic campaign performance, analysis, and insights.

Marketing Manager, Backstage (Remote)

  • Salary: $100,000 – $140,000
  • Manage and optimize campaigns daily across Meta Ads, Google Ads, and other key partners.
  • Own forecasting, pacing, budget allocation, and optimization for high-scale monthly budgets.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Google is fixing a Search Console bug that inflated impression counts

Google is fixing a long-running Search Console bug that inflated impression counts. As the fix rolls out, reported impressions will decrease.

What happened. A logging error caused Google Search Console to over-report impressions starting May 13, 2025. Google today updated its Data anomalies in Search Console page:

  • “A logging error is preventing Search Console from accurately reporting impressions from May 13, 2025 onward. This issue will be resolved over the next few weeks; as a result, you may notice a decrease in impressions in the Search Console Performance report. Clicks and other metrics were not affected by the error, and this issue affected data logging only.”

A Google spokesperson told Search Engine Land:

  • “We identified a reporting error in Search Console that temporarily led to an over-reporting of impressions from May 13, 2025 onward. Bug fixes are being implemented to ensure accurate reporting.”

What’s changing. Google is deploying fixes that will change how impressions are recorded and reported. As the rollout continues, you’ll likely see a drop in impressions in the Performance report. Clicks and other metrics aren’t affected.

The timeline. The issue began May 13, 2025, and persisted until now. Google said the correction will take several weeks to fully roll out across reporting.

Why we care. If your Google Search Console impressions change in the coming weeks, it will likely be due to this bug fix.

If you can’t say what problem your brand solves, AI won’t either

The compressed customer journey is exposing your search strategy problem

Customer journeys are collapsing into a single moment of evaluation. David Edelman recently described this shift as the convergence of behaviors that used to happen separately.

As decisions compress, brands need to be clearer about what they are trying to solve for the customer. Many organizations are increasing activity instead, without sharpening the underlying strategy.

The shift behind the compressed journey

Edelman’s argument, outlined in his March 2026 Think with Google essay, is built around a shorthand developed by Boston Consulting Group and Google: streaming, scrolling, searching, and shopping.

His central insight is that generative AI has snapped these four behaviors together so tightly that the old model — awareness, then consideration, then purchase, each in its own tidy lane — no longer describes reality. Consumers bounce between platforms, multitask, and shift fluidly between entertainment and intent.

The data point that stopped me cold: people are now asking AI-enabled search engines much longer, richer, more emotionally descriptive queries. Not keywords. Paragraphs. They share context, constraints, preferences, and urgency. 

The AI then breaks those queries into multiple search streams and synthesizes results in real time. What once required dozens of browser tabs — hours of work — now takes seconds.

Edelman draws two implications from this. 

  • The fundamental unit of competition has changed. Brands are now evaluated as solutions to specific situations, not as products within a category.
  • The familiar demand framework — create demand, capture demand, and convert demand — must be treated as simultaneous, not sequential. You can’t do them in order anymore because the journey doesn’t proceed in order.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

Enter Pogo — and Kelly’s uncomfortable truth

Walt Kelly gave us Pogo, the philosophical possum of Okefenokee Swamp, whose most celebrated utterance was the 1970 Earth Day poster declaration: “We have met the enemy, and he is us.”

Kelly’s most persistent target was not any external villain, but the human tendency to mistake activity for progress. His characters were always busy — scheming, planning, campaigning, reorganizing — and almost never clear on why.

Another line often attributed to him captures it just as well: “Having lost sight of our objectives, we redoubled our efforts.”

Read Edelman’s argument through that lens, and the pattern becomes harder to ignore. He describes brands racing to keep up with compressed customer journeys — more content, more specificity, more “answer audits,” more presence across platforms and formats. The advice is sound. 

But without clarity about what a brand is actually trying to solve for the customer, more content and more channels are just Pogo’s swamp creatures running faster through the same mud.

Dig deeper: Why clarity now decides who survives

The compression trap: When speed substitutes for clarity

Edelman is right that the journey is compressing. But compression can serve two different masters. 

For brands with crystal-clear positioning — brands that genuinely know what problem they solve and for whom — compression is a gift. It helps a consumer build confidence faster. 

Warby Parker, which Edelman cites approvingly, is a clean example: its home try-on program, transparent pricing, and frictionless returns all express a single, coherent answer to a specific question: “Can I trust buying glasses without trying them in a store?” Every element of that brand experience is aimed at one objective.

For brands that lack that clarity — brands that have accumulated messaging layers over years of campaign-by-campaign marketing — compression is a disaster. The consumer’s AI-enabled query now synthesizes everything a brand has ever said across every channel, every format, every platform. 

If those signals are inconsistent, contradictory, or simply incoherent, the synthesized answer will be a muddle. The consumer will move on. In Pogo’s swamp, the creature that runs fastest without knowing where it’s going simply reaches the wrong destination sooner.

Edelman gestures at this when he writes that brand should be understood as “the sum of signals that make a company recognizable as a solution.” 

He’s right. But I’d push harder: the compression of the customer journey isn’t primarily a technological problem. It’s an objectives problem. 

Most brands can’t clearly articulate, in a single sentence, what specific situation they are the best answer to. If you can’t say it plainly, AI certainly can’t infer it.

Dig deeper: Why AI availability is the new battleground for brands

Pogo would recognize the funnel debate immediately

One of Edelman’s shrewder observations is that some of his clients have constructed a “false trade-off between brand and performance.”

Marketing departments argue over budget allocations between brand-building and demand generation as though they are fundamentally separate activities. This is, as Kelly’s characters would say, a very impressive argument that completely misses the point.

Kelly spent years satirizing exactly this kind of internal organizational warfare — committees forming to study committees, campaigns launched to counteract the confusion caused by previous campaigns. 

Organizations are often earnest and busy, and just as often distracted by their own processes. The brand-versus-performance debate is the marketing equivalent of explaining why two teams can’t collaborate because their mandates are structured differently.

In a compressed journey, brand is performance.

  • The clarity of a brand’s positioning determines whether it surfaces as the right answer to a specific query.
  • The quality of its content determines whether it captures demand at the moment of confidence.

These are the same thing viewed from two angles. 

The brands winning in Edelman’s compressed journey world — Nike, Glossier, IKEA, Warby Parker — don’t appear to be having this argument internally. They have simply decided what problem they solve and built everything around that answer.

Dig deeper: Brand perception: How to measure and shape it

The ‘answer audit’ is only half of the solution

Edelman recommends something he calls a “recurring answer audit”: examine what a consumer would actually encounter across social discovery, video search, retail listings, and AI assistants for their most common customer scenarios. Gaps and inconsistencies, he says, quickly become visible.

This is excellent advice. It’s also, if I’m being blunt in the spirit of Kelly, only half the medicine. An audit shows you where your signals are inconsistent. It doesn’t tell you what they should be consistent about. 

You can audit your way to a perfectly coherent set of messages that still fail to answer any real consumer question, because the messages were never designed around actual consumer situations in the first place.

You need to audit your objectives. What, precisely, is your brand the solution to? Not the product category. Not the feature set. The actual situation.

The specific tension in a person’s life that this brand, and not a competitor, is best positioned to resolve. Until that question is answered with unambiguous clarity, the answer audit is tidying the swamp without draining it.

Dig deeper: How to apply ‘They Ask, You Answer’ to SEO and AI visibility

What Edelman gets completely right

None of this is meant to diminish what Edelman has written. On the contrary, his framework for thinking about the compressed journey is the most coherent I’ve seen in years. 

Three of his observations deserve to be tattooed somewhere visible on the forearms, wrists, hands, necks, and behind the ears of every marketing professional.

‘Streaming and scrolling create possibility. Searching structures choice. Shopping happens wherever confidence peaks.’ 

That’s not just a description of a media landscape. It’s a theory of consumer psychology. Confidence is the triggering condition for a purchase. If you’re optimizing for impressions without asking whether those impressions build confidence, then you’re very busy going nowhere.

Brands must shift from ‘product language’ to ‘solution language.’ 

This sounds simple and is, in practice, revolutionary. The default mode of most brand organizations is to lead with what they make. 

Edelman says lead with the situation you resolve. That is a fundamental reorientation of how marketing is conceived and executed.

‘Are you the customer’s solution? Will they know it?’ 

Two questions. The first is a strategy question. The second is an execution question. Most marketing fails by answering the second question without having honestly answered the first.

Dig deeper: The authority era: How AI is reshaping what ranks in search

We have met the enemy

Kelly’s Pogo ran for 25 years, and the swamp never did drain. The characters were charming, the satire was sharp, and the folly continued because the creatures were incapable of distinguishing between effort and progress. Kelly found that funny.

Marketing history, filled with elaborate, energetic, and expensive campaigns from brands that no longer exist, is less amusing.

Edelman has given us a useful map of the compressed customer journey. It’s fast, complex, AI-mediated, and it rewards clarity above all else. What he understates — though it runs beneath the surface of his argument — is that compression is also a reckoning.

Brands built on accumulated momentum, legacy awareness, and category inertia will find that a faster journey exposes their vagueness more brutally than a slower one ever did.

The compressed customer journey demands better thinking. And better thinking, as Pogo understood, begins with recognizing that the problem isn’t out there in the swamp. It’s in here — in the planning meeting, the brand brief, the objectives slide that everyone in the room suspects isn’t quite right, but no one challenges.

With apologies to Pogo, “We have met the enemy of the compressed customer journey. And it’s our inability to clearly say what we are actually for.”

Strategy is the new keyword: What drives paid search performance now

For most of my three-decade career, the keyword drove paid search. Today, it’s one of many signals. Strategy is what determines performance.

Keywords were what you researched for weeks, then built your strategy around based on what you uncovered or hypothesized. You managed everything from bids to matched search terms to negatives and the audiences you targeted. Your career was built and measured by how well you structured around a keyword.

Paid media has always been deeply tactical, with Google driving the majority of search. You were methodical about placements, audiences, bids, headlines, extensions, and keyword-stuffed URLs.

This model worked. It gave practitioners the control they needed to get results.

You could see which search queries triggered ads and what they cost. If there was value, you expanded or doubled down. You might over-segment ad groups by theme or build campaigns around keyword audiences, then layer in modifiers and match types to drive 1200% ROAS.

What changed across platforms

Advertising has converged on a single structural shift: AI, or more precisely, automation built into the platforms. These systems now handle targeting, bids, and creative assembly that practitioners used to manage manually.

The keyword hasn’t disappeared. It’s moved from the primary optimization lever to one signal among many that platforms use to deliver ads based on user behavior and the auction.

On Google, AI Max for Search is the clearest example. It’s not a new campaign type. It’s an optimization layer, similar to Smart Bidding, that changes how keywords function inside a search campaign. Google’s AI uses your existing keywords, copy, and landing pages, including H1s and H2s, as signals rather than instructions to find and serve ads.

Google reports that advertisers using AI Max see 14% more conversions at a similar CPA or ROAS, with campaigns using exact and phrase match seeing lifts of up to 27%. Pair it with Performance Max across Search, Shopping, YouTube, Display, Discover, Gmail, and Maps, or Demand Gen for upper-funnel awareness, and the system expands further.

Dig deeper: Google Ads no longer runs on keywords. It runs on intent.

The new primary levers

When I say strategy is the new keyword, I’m not speaking in abstractions. I’m saying there are specific inputs that now determine where your ads show up, who sees them, and whether they convert. These inputs have largely replaced the keyword list in paid media as the highest-leverage control.

The distinction matters. Strategy dictates the activity needed to achieve your goal and vision. Tactics are the execution. What’s shifted is that platforms now handle the tactics, and our job is to define the strategy that guides them.

Conversion data quality, including server-side tracking, has become the most important input in any account. Google’s Smart Bidding and other platform optimization systems depend on conversion or event signals to learn and improve.

You can prioritize which conversions matter more, whether it’s a lead from a high-value market versus a newsletter sign-up, or a new customer versus a returning one. These distinctions used to be handled through keyword segmentation and bid modifiers. Now they’re handled through conversion value settings, where value is assigned or weighted at that point.
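One way to picture that value-weighting idea is as a small sketch. The conversion names and dollar values below are illustrative assumptions, not platform defaults; in practice these priorities are configured as conversion values inside the ad platform itself.

```python
# Sketch: expressing conversion priorities as values rather than
# keyword segmentation. Names and values are illustrative only.

CONVERSION_VALUES = {
    "newsletter_signup": 1.0,
    "lead_high_value_market": 50.0,
    "new_customer_purchase": 120.0,
    "returning_customer_purchase": 40.0,
}

def total_value(events):
    """Total value a platform's bidder would optimize toward."""
    return sum(CONVERSION_VALUES.get(event, 0.0) for event in events)

# A session with a signup plus a new-customer purchase is worth 121.0,
# so value-based bidding learns to chase it far harder than a signup alone.
```

The point of the sketch is the shift in where the lever sits: instead of splitting keywords into "high-value" and "low-value" campaigns, you tell the system what each outcome is worth and let it do the segmentation.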

First-party data, customer lists, CRM data, website behavior, and offline imports have become the equivalent of keyword research. The richer and cleaner the data you feed these systems, the better they perform. It’s less about search volume and more about understanding your own customer data, making sure it’s structured properly, and connected to the platforms you advertise in.

Creative is a beast. It’s moving from a production deliverable to a strategic signal.

For Demand Gen, Display, and Meta, your creative, functionally speaking, is your targeting. Platforms read your images, video, and copy to determine who sees your ads. Google AI Max generates headline and description variations based on your landing page content, your H1s, H2s, and so on.

The strategic questions now carry the weight the keyword used to: what themes resonate with which segments, what visual approaches drive action at different funnel stages, and what messaging frameworks allow AI to generate variations.

Landing page and website quality have become paid media inputs, not just a thing for UX or CRO. AI Max reads your page to determine what queries to match and which headlines to generate. Final URL expansion in AI Max and Performance Max sends users to the page AI deems most relevant. Poor post-click experiences, thin content, and slow load times can tie back to lower conversion rates.

All of this limits AI’s ability to serve your ads.

Dig deeper: In Google Ads automation, everything is a signal in 2026

What it means for practitioners

Our roles have shifted.

The most valuable work is no longer managing keyword lists or adjusting manual bids. I have strong opinions on that, but I’ll ask: what else could you be doing with your time instead of manually adjusting bids for thousands of keywords?

It’s the strategic framework that AI systems operate within: ensuring data quality, defining creative strategy, building measurement into your teams, and knowing when the LLM is wrong and you, as an SME, need to adjust course.

The job of subject-matter experts is to guide the machines. That guidance takes the form of conversion architecture, audience signal quality, creative frameworks, and brand guardrails, rather than keyword lists and bid sheets.

This means investing time in understanding how:

  • These systems work.
  • Platforms learn.
  • LLMs prioritize.

It’s about choosing which signals to prioritize. It means building robust first-party data, developing frameworks across audiences, creative, and UX, and feeding all of that into the AI systems. It means accepting that the keyword era is giving way to something fundamentally different.

The practitioners who treat strategy as their primary lever, who invest their energy in architecture and design rather than lever-pulling, will be best positioned as this shift continues.

The keyword list isn’t gone. It’s no longer the center of the work. Strategy is.

Dig deeper: 4 times PPC automation still needs a human touch

Building high-ROAS ecommerce search campaigns in Google Shopping and Amazon Ads

Paid search is often the highest-leverage ecommerce growth channel, delivering strong conversion rates and efficient spend when structured effectively.

Google Shopping and Amazon Ads capture high-intent demand while generating the data needed to scale it. These platforms connect search queries directly to revenue, enabling you to identify which terms drive sales and allocate budget accordingly.

The real challenge is organizing campaigns to act on that signal.

Why paid search works so well for ecommerce

Paid search performs differently from other channels because it combines two advantages: intent and data.

  • Intent: Google and Amazon are search-driven environments. When someone searches for a product, they’re signaling exactly what they want. There’s no inference required, no audience modeling, and no interrupting someone mid-scroll. You’re providing the answer to a question the customer is already asking.
  • Data: Both Google Shopping and Amazon Ads provide keyword-level revenue data that most other advertising platforms can’t. You can see which search terms generated sales, at what conversion rate, and at what cost. Amazon goes further, offering clearer and more direct revenue visibility at the product and category level.

Together, these create a powerful feedback loop. Search terms tied to revenue let you shift spend toward higher-converting queries, improving ROAS over time. On Amazon, this loop extends further—stronger conversion rates can improve organic rankings, lowering future acquisition costs.
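That feedback loop boils down to a few lines of arithmetic: compute per-term ROAS (revenue divided by spend) from the report, then rank terms as candidates for a budget shift. The terms and figures below are invented for illustration; the field names are not a specific export format.

```python
# Illustrative ROAS ranking over a search-term report. Terms and
# figures are made up for the example.

def roas(revenue, spend):
    """Return on ad spend: revenue per unit of spend."""
    return revenue / spend if spend else 0.0

terms = {
    "running shoes": {"revenue": 480.0, "spend": 40.0},   # ROAS 12.0
    "shoes":         {"revenue": 120.0, "spend": 60.0},   # ROAS 2.0
}

# Rank terms by ROAS, highest first: the top of this list is the first
# candidate for additional budget; the bottom is a negation candidate.
ranked = sorted(terms, key=lambda t: roas(**terms[t]), reverse=True)
```

Keyword-level revenue data is what makes this ranking possible at all; most other channels only let you rank by clicks or impressions, which says nothing about which queries actually pay for themselves.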

Success in search campaigns depends on building multi-funnel structures. The concept is consistent across platforms, but implementation varies by campaign types, settings, and bidding strategies.

The architectures outlined below use wide-net, low-cost discovery campaigns to map the full search landscape, then funnel high-intent, proven converters into dedicated performance campaigns with appropriate bids. The result: stronger ROAS, improved rankings, and more scalable growth.

Dig deeper: Ecommerce PPC: 4 takeaways that shape how campaigns perform

Google Shopping: The priority sculpting method

The priority sculpting method is based on Martin Roettgerding’s approach, with adaptations over the years. It uses a three-layer campaign structure to route keywords into different campaigns based on performance.

This lets you control spend on discovery keywords and maximize investment in high-performing, high-intent terms. The key is Google Shopping priority settings — “high-priority” campaigns serve first at lower bids.

Layer 1: Brand

  • The goal is to capture branded search traffic.
  • This layer uses a Performance Max campaign and can also use standard Shopping.
  • It remains assetless to keep it focused on Shopping inventory and prevent bleed into Display and YouTube.
  • It’s set with a high ROAS target, since PMax naturally gravitates toward brand traffic when the target is high.
  • Alpha terms are negated in this campaign, as they may also have high ROAS.

Layer 2: Catch-all

  • The goal is to cast a wide net, test search terms cheaply, and generate conversion data.
  • This layer uses standard Shopping with a high-priority setting to catch non-branded traffic.
  • Bids are kept low to control costs.
  • Brand terms and alpha terms are negated using a negative list.
  • Over time, low-performing terms are also negated once they’ve been tested and failed.

Layer 3: Alpha

  • The goal is to dedicate budget to best-performing terms and generate strong ROAS.
  • This layer uses standard Shopping with a low-priority setting and high-ROAS bidding settings.
  • By negating converted terms, or alpha terms, in the catch-all campaign, those queries fall through to this campaign, where you bid aggressively on what’s already working.
  • Brand terms can also be negated if needed.

Dig deeper: 6 Google Ads mistakes that hurt ecommerce campaigns

The key considerations in this structure include the following:

Routing logic using negatives

The system relies on routing logic: Google’s priority settings determine which campaign serves a query first. Negative keywords in the catch-all push proven converters into the alpha, where bids are higher and budget is protected. At the same time, non-alpha terms run through high-priority campaigns at the lowest possible bids.
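The routing logic above can be captured in a toy model: brand queries are picked up by the PMax layer, alpha terms are negated out of the other layers so they fall through to the alpha campaign, and everything else is served cheaply by the high-priority catch-all. The term lists are illustrative; real routing is driven by priority settings and negative keyword lists inside Google Ads.

```python
# Toy routing model for the three-layer priority sculpting structure.
# Term sets are illustrative examples, not real data.

BRAND_TERMS = {"acme", "acme shoes"}     # captured by the PMax brand layer
ALPHA_TERMS = {"red running shoes"}      # proven converters, negated elsewhere

def serving_campaign(query):
    """Which layer would serve this query."""
    if query in ALPHA_TERMS:
        # Negated in brand and catch-all, so it falls through to the
        # low-priority alpha campaign, where bids are aggressive.
        return "alpha"
    if query in BRAND_TERMS:
        # PMax with a high ROAS target naturally captures brand traffic.
        return "brand"
    # The high-priority catch-all serves everything else at low bids.
    return "catch-all"
```

The model makes the sculpting visible: a query only reaches the expensive alpha campaign once it has earned its way onto the alpha list, and everything unproven stays in the cheap discovery layer.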

The method lives or dies on weekly search term negation. Two actions are done regularly:

  • Negate non-converting terms in the catch-all. A good rule of thumb: once a term has more than 20 clicks and zero conversions, negate it. It’s been tested, and removing it frees up budget for other search terms. Apply some judgment first, though; if a keyword is highly relevant, you might let it run longer.
  • Negate converted terms (alphas) from the catch-all so they fall through to the alpha campaign. Over time, the alpha accumulates a curated list of proven terms bid on aggressively, while the catch-all keeps finding new ones cheaply. It’s a compounding system.
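
The two weekly actions amount to a simple classification rule. A rough sketch, with an illustrative click threshold and a hypothetical report format of (term, clicks, conversions):

```python
CLICK_THRESHOLD = 20  # negate after this many clicks with zero conversions

def review_search_terms(rows):
    """Split catch-all search terms into negation and graduation lists."""
    negatives, alphas = [], []
    for term, clicks, conversions in rows:
        if conversions > 0:
            # Proven converter: negate in the catch-all so the query
            # falls through to the alpha campaign.
            alphas.append(term)
        elif clicks > CLICK_THRESHOLD:
            # Tested and failed: negate to free up budget.
            negatives.append(term)
        # Otherwise: not enough data yet; let it keep running.
    return negatives, alphas

report = [
    ("red running shoes", 35, 2),    # converter -> graduate to alpha
    ("cheap shoes bulk", 42, 0),     # tested, failed -> negate
    ("trail shoes wide fit", 8, 0),  # still gathering data
]
print(review_search_terms(report))  # (['cheap shoes bulk'], ['red running shoes'])
```

In practice, the relevance check mentioned above still applies: a human review before negating highly relevant terms is part of the process.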

Shared budgets

Shared budgets are critical. Layers 2 and 3 should work on a shared budget.

The system works only if they run together, because each query needs to be sculpted through the system. With separate budgets, it breaks down: if the high-priority catch-all runs out of budget, the alpha becomes the first point of contact, and queries would likely serve from the alpha (at a higher bid) even though they aren’t proven alpha terms.

SKU separation

The system is designed to run across a unique set of SKUs, and all three layers should target the same set. Start with all SKUs and build out from there.

Products that get buried in the main campaigns or operate at a different margin tier can be peeled off into their own mirrored catch-all/alpha pair, ring-fencing their budget. Only do this when there’s a clear reason. More campaigns mean more overhead and more fragmented data.

Feed quality

It’s important to optimize the feed, as Google relies heavily on product titles to understand a product’s context and which keywords to serve it for.

Get the newsletter search marketers rely on.


Amazon Ads: The multi-tier campaign architecture

Amazon’s campaign structure is more advanced than Google Ads’ and offers several advantages.

Amazon typically delivers higher conversion rates and more conversion data. Ad spend also drives both conversion rates and rankings, with a clear, measurable link between ad spend and organic ranking.

Ads drive traffic, traffic drives conversions, and conversion rate drives organic rank. That makes Amazon Ads an investment in organic search.

Google Ads campaigns run across the whole catalog. On Amazon, you build campaigns at the SKU level, typically one SKU per campaign.

The structure uses three campaign tiers: research, ranking, and performance. Each has a distinct goal and is managed by adjusting advertising cost of sale (ACOS) targets to reflect different profitability goals.

Tier 1: Research 

  • Campaigns use broad and phrase match keywords, along with automatic targeting.
  • The goal is to cast a wide net and generate keyword ideas and variations.
  • ACOS tolerance is relatively high, since the goal is data, not profit.

Tier 2: Performance

  • Campaigns use exact match keyword targeting.
  • The goal is profit, with a competitive ACOS target below break-even.
  • Move proven converters from the research tier into exact match campaigns. Run your best keywords at efficient bids to maximize returns on what’s already working. This mirrors the alpha campaign in Google Ads.

Tier 3: Ranking or exposure

  • Use single-keyword campaigns (SKCs) with exact match—one keyword per campaign.
  • The goal is usually ranking, though it can shift over time.
  • For ranking, set aggressive bids with high ACOS tolerance (often 50%+). Push volume through high-value keywords to drive top organic positions. Once you reach positions 1–3 organically, pause those keywords.
  • Ranking campaigns are debated. If you’re already ranking, there’s no need to pay for visibility you get for free.
  • This layer doesn’t exist in Google Ads, where ad spend doesn’t influence rankings.
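
The pause-on-rank rule for SKCs can be sketched as a simple check against organic rank data (the rank source and keyword names here are hypothetical):

```python
TOP_POSITIONS = 3  # pause the SKC once organic rank reaches 1-3

def keywords_to_pause(organic_ranks):
    """Return SKC keywords whose organic rank has reached the target."""
    return [kw for kw, rank in organic_ranks.items()
            if rank is not None and rank <= TOP_POSITIONS]

ranks = {"yoga mat non slip": 2, "travel yoga mat": 7, "cork yoga mat": None}
print(keywords_to_pause(ranks))  # ['yoga mat non slip']
```

Whether to actually pause is the debate described above; the check only tells you when the decision point has arrived.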

Dig deeper: Why your Amazon Ads aren’t delivering: 6 critical issues to fix

The key considerations in this structure include:

Bidding to an ACOS lever

With Amazon Ads, we bid toward an ACOS target. ACOS is the advertising spend as a percentage of revenue. Because Amazon data is so clean and conversion rates are high, we can calculate our bids to drive a certain ACOS.

The ACOS-based bidding formula: 

  • Target bid = (Revenue per click) x Target ACOS

Implementing ACOS bidding can be automated using software like Scale Insights. Different campaign tiers can be assigned different ACOS targets, and CPCs can be adjusted daily by the software.
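
The formula above translates directly into code. A minimal sketch, with illustrative tier targets (these are not Amazon defaults, and tools like Scale Insights apply their own logic):

```python
def target_bid(revenue_per_click, target_acos):
    """CPC that would hit the ACOS target at the current revenue per click.

    ACOS = spend / revenue, so per click: CPC / RPC = ACOS -> CPC = RPC * ACOS.
    """
    return revenue_per_click * target_acos

# Example: a keyword earning $4.00 of revenue per click on average
# (roughly price x conversion rate).
rpc = 4.00
for tier, acos in [("research", 0.40), ("performance", 0.25), ("ranking", 0.50)]:
    print(f"{tier}: bid up to ${target_bid(rpc, acos):.2f}")
# research: bid up to $1.60
# performance: bid up to $1.00
# ranking: bid up to $2.00
```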

Keyword routing

As in Google Ads, keywords are funneled from research campaigns into performance, or alpha, campaigns. This can be done manually or automatically with Scale Insights using an import rule. 

The concept is very similar in that keywords that shine get imported down the funnel, while non-performing keywords are phased out through testing.

The conversion rate signal

If a product’s conversion rate is below the market average on a given keyword, more spend will not likely improve its rank. Amazon usually surfaces the better-converting product. 

The correct response is to fix the underlying issue: price, listing quality, imagery, or the product itself. Most advertisers skip this step and keep spending into a hole.

The ranking cannibalization rule

There are two strong views on ranking and cannibalization. Some argue that once your product ranks highly for a keyword on Amazon, you should reduce or stop ad spend. If you’re ranking organically, you can save on ads.

On the other hand, if a keyword performs well with strong ROAS, having two listings can outperform one. It increases your chances of a click. Ads also typically appear above organic listings, giving you higher placement.

Whichever view you take, the three-tier method lets you drive rankings through SKCs, then reduce or stop ad spend once you rank, if you choose.

How Google Shopping and Amazon Ads compare for ecommerce

The underlying logic for advanced campaign setup is the same across Google Shopping and Amazon Ads, with key differences beyond the core structure.

Similarities

Google Shopping (priority sculpting):
  • Route queries to campaigns via priority settings and negatives.
  • Discover converting terms in a catch-all at a low cost.
  • Graduate proven terms to the alpha with a high tROAS.
  • Regular search term reviews, negatives, and alphas.

Amazon Ads (multi-tier architecture):
  • Route keywords across research → ranking → performance.
  • Discover new keywords in broad, phrase, and auto campaigns.
  • Graduate proven terms to exact match for profitability.
  • Regular search term reviews, negatives, and imports to the lower funnel.

Differences

Google Shopping:
  • Campaigns run across the whole feed; separate high-margin products for ring-fenced budgets.
  • ROAS-based bidding.
  • The product feed determines search term targeting; the advertiser can’t select terms.

Amazon Ads:
  • Campaigns built at the SKU level rather than across the whole catalog.
  • ACOS-based bidding.
  • Search terms selected by the advertiser.
  • Ads drive rankings, so you can save budget by monitoring organic rankings.

Dig deeper: 5 reasons Amazon Ads is better than Google Ads for ecommerce



Which platform is right for your ecommerce strategy

Like all good answers, it depends heavily on your business and your goals. Both have advantages and disadvantages. We can say that:

  • Amazon Ads often perform better, delivering higher conversion rates and faster ranking and sales when intent is strong.
  • Google Ads is better for long-term brand building. It offers broader reach, potentially lower costs, and drives traffic to your own website, where you retain customer data.

The ideal is to run both together. Many brands launch on Amazon, then expand to their own site and add Google Ads.

Paid search for ecommerce is probably the most effective advertising avenue you can explore, and both platforms offer significant opportunities when implemented properly. Each has pros and cons; I’d recommend digging into the details of these campaign structures and deciding on the right implementation for your business.

Why AI search is your new reputation risk and what to do about it


It used to be that Google searches opened up a world of questions. You searched, sifted through links, and came to your own conclusion.

Today, AI Overviews, ChatGPT, Perplexity, and other AI platforms compress multiple sources into a single, synthesized response. In the process, nuance is flattened, and certain viewpoints can be overrepresented.

This marks a fundamental shift in online reputation management. Search engines now shape the information they surface. The result is a rise in zero-click behavior, where users accept AI-generated answers without visiting underlying sources.

For brands, that changes the stakes. Visibility no longer guarantees influence. Even a No. 1 ranking can be bypassed if the narrative tells a different story.

AI narrative formation: How AI systems deliver users their answers

AI search engines now follow a new pattern for delivering answers. For the sake of this article, we’ll call it AI narrative formation. Here’s how it works.

Source pooling

AI systems pull from a wide range of sources. While you might expect trusted, peer-reviewed content, they often draw from Reddit, YouTube, review platforms, complaint forums, and social media sites like Instagram and TikTok.

Signal weighting 

Not all sources carry equal weight. A single trusted source can be outweighed by a large volume of lower-quality content. For example, a highly active Reddit thread filled with negative reviews may outperform a fact-checked source like Wikipedia.

Narrative compression

AI condenses dozens of inputs into a short, digestible summary. In the process, nuance is lost, and fringe cases can become dominant themes. A complex reputation may be reduced to: “Users say this company is not trustworthy.”

Continued reinforcement

These summaries don’t stay contained. They’re screenshotted, shared, and repeated across platforms. Those repetitions become new inputs, reinforcing the same narrative in future AI outputs.

Dig deeper: The authority era: How AI is reshaping what ranks in search

How a finance company’s solid reputation unraveled in AI search

To see how AI narrative formation works in action, let’s look at a use case.

My company recently worked with a finance organization to repair its online reputation. For this example, we’ll call it Company X.

Problems emerged for Company X with the rise of Google AI Overview. Previously, under traditional SERPs, Company X had a solid reputation. Users searching Google for reviews would find a 4.2 rating on Trustpilot, a strong company website with employee bios, and numerous positive blog reviews from trusted sources.

Google AI Overview changed that. How? By resurfacing an old Reddit forum centered on negative complaints about Company X.

When users asked Google, “What are opinions like about Company X?” AI Overview delivered a clear answer: “Company X has mixed reviews, with specific complaints regarding customer service.” But those customer service issues were resolved nearly a decade ago.

AI Overview pulled multiple reviews from that Reddit thread, combined them with strong negative phrasing, and factored in the lack of structured positive content to form a semi-negative impression. A new perception of Company X was created.



Why AI search amplifies reputational risk

We can dig deeper into how AI impacts reputational risk. Consider the following:

  • How negative AI narratives spread: In traditional search, users had to dig for negative results. With LLMs, those results can surface instantly, even when they’re defamatory or incorrect.
  • Hallucinations and misinformation: Most users are now aware of AI hallucinations, but they aren’t always easy to spot. Making matters worse, LLMs can present incorrect claims or factual inconsistencies with confidence.
  • The snowball effect: As discussed in narrative reinforcement, AI-generated answers get screenshotted, shared, and repeated across platforms. That repetition builds momentum, creating challenges ORM firms now have to manage.

A hard truth has emerged in ORM: The most accurate claim doesn’t rise to the top. The most repeated claim does.

Dig deeper: Generative AI and defamation: What the new reputation threats look like

A step-by-step guide to auditing AI-generated narrative formation

Let’s walk through another case to see how an AI-generated narrative can be audited.

CEO X is the founder of a SaaS company. He has an ongoing thought leadership presence and a strong reputation in his industry.

On a recent podcast appearance, one quote was taken out of context and aggregated across several platforms. The quote was framed as an opinion rather than a fact. Blog posts were written, and Instagram Live reactions spread online.

In no time, ChatGPT and Google AI Overview turned CEO X into a controversial figure.

Here’s a step-by-step guide to approaching that reputation management crisis.

Step 1: Mapping queries

We begin by identifying what search engines are saying about CEO X. We ask ChatGPT and Google AI Overview questions such as “What did CEO X say?” and “What is CEO X’s current reputation?” This helps us analyze the issues.

Step 2: Capturing outputs

We identify the claims associated with CEO X. Google AI Overview and ChatGPT describe CEO X as a controversial figure who recently made comments in poor taste. The narrative formed across both platforms is trending negative.

Step 3: Delving through sources

Next, we analyze the sources AI Overviews and ChatGPT rely on. We look for whether they’re outdated, repetitive, or low quality. (In the case of CEO X, the latter two apply.)

Step 4: Analyzing the narrative gap

We identify the gap between AI’s narrative and reality. 

  • What are CEO X’s actual views? 
  • What was the context of the quote? 
  • And what has his reputation been up to this point?

Step 5: Correcting and replacing sources

The final step is to replace or respond to those negative sources. Claims can be addressed directly on Reddit, Instagram, or other platforms spreading the narrative. Publish structured explanations through FAQs and policies, and strengthen third-party validation.

Dig deeper: How AI changes how we respond to negative reviews and comments

A new mindset: Reputation is now an output

Focusing solely on SEO rankings is no longer enough. We need to think in terms of narrative shifts and framing. That also means thinking in terms of inputs and outputs. 

Users aren’t evaluating individual pages. They’re engaging with AI-generated answers. Rather than managing what users find, we need to manage the answers AI systems deliver. That means strengthening what those systems rely on:

  • Publishing high-quality first-party content.
  • Earning credible third-party mentions.
  • Reinforcing positive customer reviews.
  • Addressing misinformation directly.
  • Improving structured data.
  • Maintaining accurate Wikipedia or Wikidata entries where applicable.

ChatGPT ads favor clarity over creativity, new data shows


The new ChatGPT ad format is standardizing, according to a new Adthena analysis of 40,000+ daily placements. What once felt experimental is becoming a disciplined, high-intent system for users already deep in decision mode.

The big picture: ChatGPT ads are converging on a short, structured, highly contextual style that favors precision over persuasion and utility over storytelling, marking a shift from creative-led advertising to real-time, intent-driven assistance.

By the numbers. Every word must carry weight and contribute directly to clarity or conversion:

  • The average headline clocks in at just 30 characters and around 5 words.
  • Body copy averages 116 characters and roughly 19 words.

What’s working. The dominant pattern is a “Brand: Benefit” headline, separating the name from a specific value. It works because users in conversational environments expect immediate clarity, not intrigue or ambiguity.

  • Almost every ad leads with the brand name. You need easy recall in a setting where users are already evaluating options, not discovering them.

Headlines are compressed, often reading like functional labels rather than slogans. That brevity carries into the body copy, which typically runs two tight sentences: a proof point followed by an offer or nudge. You’re not trying to win an argument; you’re giving one compelling reason to act.

Context mirroring is a defining feature. The strongest ads directly reflect the user’s query or situation, signaling real-time tailoring. This marks a new level of AI-native targeting that goes beyond keyword matching into conversational relevance.

Concrete value signals carry outsized weight. Dollar signs and specific numbers — prices, savings, performance — consistently outperform vague claims. Numbers dominate body copy because they feel credible and native in a setting where you’re actively researching and comparing options.

Offers. Low-friction offers — especially “free” trials or demos — are the most common conversion lever, reducing commitment barriers while users are exploring.

Calls to action. These are explicit and action-oriented, favoring direct phrases like “Shop now,” “Compare,” or “Book” while abandoning generic prompts like “Learn more.”

The overall tone. Calm, confident, and measured, with minimal exclamation points or question marks. It aligns more with helpful guidance than ad hype, helping ads blend into the conversational flow rather than disrupt it.

Why we care. ChatGPT ads reach users at high intent, where clarity and relevance matter more than creativity or storytelling. In a conversational environment, ads compete with useful answers, so vague or overly branded messages get ignored while precise, value-driven copy performs better. This shift rewards short, structured messaging and gives early adopters an advantage as the format standardizes.

Between the lines. While ChatGPT ads share DNA with paid search — especially in their focus on intent and relevance — they differ by integrating into dialogue, responding to high-intent users, and delivering messaging that feels assistive rather than interruptive.

The takeaway. Success in ChatGPT advertising depends on precision, relevance, and credibility over creativity, emotional appeal, or brand-led storytelling. The winning strategy: fit in perfectly when a user needs a clear, trustworthy answer.

The analysis. Adthena CMO Ashley Fletcher shared the data on LinkedIn.

Build your marketing ark: A framework for AI, empathy, and design

How to design AI-powered marketing systems that reduce friction and burnout

There’s a flood coming. A downpour of noise — more content, more channels, more AI-generated everything, moving faster than most teams can keep up with. Somewhere in that volume, your customers are quietly drowning — overwhelmed, underserved, and one bad experience away from choosing someone else.

You’ve probably felt it on your team, too. Another tool. Another sprint. Another quarter of doing more with less. The productivity metrics look fine from the outside. But inside, people are running on empty.

There’s an old story about a man named Noah who, facing catastrophic disruption, didn’t freeze or panic. He didn’t look for shortcuts or try to outswim the storm. He built — with intention, with a clear design, and with people he trusted. When the waters rose, the ark held.

The brands that lead don’t adopt the most technology the fastest. They build with intention — designing systems and experiences that protect people.

What follows is the case for building your ark — and a practical framework to do it.

The hidden emotional tax nobody is measuring

Customer-obsessed organizations achieved 49% faster profit growth and 51% better customer retention rates than their peers, according to Forrester. The gap between what customers need emotionally and what brands deliver comes down to design.

The strain isn’t only on the customer side.

  • AI power users report that it makes their overwhelming workload more manageable (92%), boosts creativity (92%), and helps them focus on their most important work (93%), per Microsoft and LinkedIn’s Work Trend Index.
  • Yet, 60% of leaders say their company lacks a concrete AI vision or plan — meaning the very tool that could relieve team burnout is sitting underutilized. 

That gap shows up in real ways.

For customers, it creates friction — too many choices, unclear navigation, and messaging that misses where they are. They arrive with a question and leave with more confusion. They don’t feel seen or helped.

For marketing teams, the impact is quieter but just as serious:

  • Decision fatigue disguised as strategy.
  • Tool overload framed as innovation.
  • Burnout that looks like productivity — until it doesn’t.
  • Fragmented workflows that drain energy faster than they produce results.

Brands that recognize these human issues move faster, retain stronger talent, build deeper customer loyalty, and drive better business outcomes. Enter what I call the wellness sweet spot.

Where AI, empathy, and design come together

The wellness sweet spot is the moment where AI, empathy, and human-first design converge — creating conditions where both your customers and your team can think clearly, act confidently, and trust the experience they’re in.

It’s an architectural decision about how your entire marketing ecosystem is designed to make people feel. When its three pillars are genuinely working together, four things become true simultaneously:

  • AI reduces waste and cognitive load in the experience — making things simpler.
  • Emotional friction is intentionally minimized at every touchpoint.
  • Marketing teams operate from a foundation of wellness (and well-being).
  • Systems and workflows support human thriving, not just throughput.

The convergence of AI capability, empathy-led design, and human-first systems

When these conditions are in place, something shifts. AI stops feeling like a disruption and starts working as a stabilizing layer — supporting, protecting, and quietly holding the system together. It manages the overwhelm. The ark keeps floating.

Dig deeper: How to avoid decision fatigue in SEO

AI as an invisible wellness layer

Most marketing leaders still think about AI in terms of what it does — automate, generate, optimize, analyze. Those outcomes matter, but they don’t tell the full story. The more consequential question is how AI makes people feel while it’s doing those things.

For customers, AI used well is a guide that:

  • Summarizes complexity without dumbing it down.
  • Narrows choices in ways that feel helpful rather than manipulative. 
  • Anticipates what someone needs next and removes ambiguity from decision paths. 
  • Saves time — which is, in a very real sense, saving emotional energy.

For teams, thoughtfully deployed AI absorbs the work that depletes people most: the repetitive, the reactive, and the administrative. It creates space for what human brains do best: strategy, creativity, relationship-building, and nuanced judgment.

When you build your marketing systems around it, the output quality goes up because the people producing it aren’t running on fumes.

This is empathy at scale. Not the kind that lives in a tagline, but the kind that’s baked into how your systems are structured and how your content is designed to reach people.



The new emotional metrics: What to measure when you start caring about feelings

This is where things get practical and start to move ahead of the curve. Most marketing dashboards show what happened — click-through rates, conversion rates, and time on page. Those metrics matter, but they don’t explain why someone left or how they felt along the way.

Emotional metrics help fill that gap by focusing on the conditions under which decisions are made. Research in psychology and neuroscience shows that people make better decisions, build stronger brand relationships, and become more loyal when they feel clear, confident, and calm.

Here’s how traditional metrics map to emotional KPIs:

  • Time on page → Clarity index: how quickly someone finds what they need, without confusion.
  • Conversion rate → Decision effort score: the cognitive load required to complete an action.
  • Engagement rate → Customer calm markers: behavioral signals of confidence, not stress (qualified attention).
  • Team output volume → Wellness throughput: strategic output produced with reduced burnout.

These are upstream indicators that help explain downstream performance. A low clarity index often shows up as stalled conversion rates. A high decision effort score can lead to rising cart abandonment. Declining wellness throughput tends to result in average output from top strategists.

Brands that start tracking these now gain an advantage over those that wait to react.

5 steps to design toward your wellness sweet spot

A caution before the roadmap: more speed and scale applied to a broken system will not fix it. It will amplify everything that’s wrong with it. These five steps are meant to be done before you push harder on AI adoption.

Step 1: Run an empathy audit

Where are customers confused? Hesitating? Leaving? Map these moments using behavioral data combined with qualitative insight — customer interviews, session recordings, support tickets, search data. Focus less on what people clicked and more on where they felt lost.

Step 2: Simplify for cognitive ease

Fewer choices. Plain language. Cleaner navigation. Every step you remove from a decision path is a small act of respect for your customer’s mental energy. This is generous. It’s designing with intelligence.

Step 3: Use AI as a shepherd

Deploy AI to enhance orientation, clarity, and confidence. Don’t push aggressive automation or manufacture a sense of urgency. AI should make customers feel helped, not herded. There’s a difference, and your audience feels it.

Step 4: Rebuild team workflows around energy

Audit where your team’s cognitive energy actually goes each week. Identify the work that is routine, reactive, or repetitive — and build AI into those gaps first. Protect the hours that require human judgment, creativity, and relationship-building. Those are the hours that drive real growth.

Step 5: Measure the feels

Begin tracking emotional outcomes alongside performance metrics. Start simple: add a one-question post-interaction survey. 

Review search data for confusion signals. For example, growing volume for “how do I” or “why can’t I” phrases on your own site may indicate your content isn’t answering questions before they’re asked. 

Monitor support ticket themes for friction patterns. A perfect measurement system isn’t required to start. The intention to look is.
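
The confusion-signal check above can start very simply. A sketch, where the query-log format and the phrase list are assumptions:

```python
CONFUSION_PREFIXES = ("how do i", "why can't i", "where is", "what happened to")

def confusion_rate(queries):
    """Share of site-search queries that read like confusion signals."""
    if not queries:
        return 0.0
    hits = sum(q.lower().startswith(CONFUSION_PREFIXES) for q in queries)
    return hits / len(queries)

log = ["how do I cancel my plan", "pricing", "why can't I log in", "blue widget"]
print(confusion_rate(log))  # 0.5
```

Track the rate over time; the rising trend is the signal, not the absolute number.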

Dig deeper: The secret to work-life harmony in SEO: Setting boundaries

The future belongs to emotionally intelligent brands

In a market where nearly every brand claims to be customer-centric and frictionless, the real differentiator comes down to how people feel and whether systems consistently deliver on that promise.

Leading organizations don’t rely on bigger AI budgets. They align technology with clear intent, prioritize well-timed, empathy-led content over volume, treat customer well-being as part of the brand promise, and protect their teams’ energy as rigorously as performance.

Creating value starts with protecting the people who create it. Noah didn’t survive the flood by ignoring it or fearing it. He paid attention, took action, and built with intention — something designed to carry what mattered most: his people, his purpose, his peace, and his future. That’s the kind of leadership this moment calls for.

You don’t have to figure this out alone. The tools are here. The framework is yours. The decision is whether to build before the pressure hits or react once it’s already underway.

Why your content doesn’t appear in AI Overviews (even if it ranks in the top 10)


You’ve done everything right. You have a fast website with comprehensive content, pages ranking in the top 10, and a strong backlink profile. Yet when you search the query you rank for, your site doesn’t appear in Google’s corresponding AI Overview.

This is a retrieval problem, not a ranking issue. And the difference between the two is the most important shift SEOs need to understand right now.

AI Overviews don’t work like traditional organic rankings. Instead of considering which page has the most signals, AI Overviews look for the page that gives the cleanest, most usable answer.

If your content doesn’t meet that standard, your traditional search ranking is irrelevant. Here’s what’s going wrong, and how to fix it so your content appears in more AI Overviews.

The ranking-citation gap is real — and growing

The overlap between AI Overview citations and organic rankings grew from 32.3% to 54.5% between May 2024 and September 2025, according to a BrightEdge study.

This trend sounds encouraging. But it also means that even at peak convergence, nearly half of all AI Overview citations come from pages that don’t rank at the top of organic results. Google actively bypasses higher-ranking pages when it finds content that better serves the AI Overview format.

The pattern varies sharply by sector, though. BrightEdge data shows that in ecommerce, the overlap barely changed, remaining essentially flat over the entire 16-month period. And in your money or your life (YMYL) categories like healthcare, insurance, and education, the overlap between AI Overview citations and organic rankings ranges from 68% to 75%.

Ranking and visibility are no longer the same thing. You can rank second and be invisible. Or, you can rank on the second page and be the first thing a searcher reads.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

5 reasons AI Overviews skip your content

1. Your content answers the wrong version of the question

Informational queries — specifically long-tail and conversational searches — typically trigger AI Overviews. Informational queries drive 57% of AI Overviews, while commercial queries trigger this AI feature far less frequently, according to Semrush research.

Google’s AI engine looks for content that matches what the user asks, not just the keyword you’ve targeted. So, an AI Overview answering the query “what’s the best way to manage a remote team’s workload?” probably won’t cite a page that ranks for the keyword “project management software” and leads with features and pricing.

2. You’ve buried the answer

If your introduction spends three paragraphs establishing context, warming up the reader, or restating the question before answering it, the retrieval system moves on. It seeks information it can extract cleanly. If that answer isn’t near the top of the page, the system skips that page.

3. Your structure is opaque to AI systems

Traditional SEO content is built around comprehensive long-form content: 3,000-word guides covering every angle of a topic, written for readers who scroll and skim.

AI retrieval systems don’t work the same way. They need to identify discrete, self-contained answers within your content.

That requires clear heading hierarchies, short paragraphs, and content that AI systems can extract. A section under a specific heading should completely answer the question posed in that heading, without requiring the surrounding context to make sense.

Content written as one long, unbroken narrative is harder for AI systems to parse. Even if every word is accurate and authoritative, it may not earn a citation if the structure doesn’t help the retrieval system identify individual answer units.

Dig deeper: AI Overview citations: Why they don’t drive clicks and what to do

4. Your E-E-A-T signals aren’t visible at the content level

Google has been clear that experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) signals are important for content quality in traditional search. They likely matter for AI Overviews, too. But these signals need to appear in the content itself, not just in your domain profile or link graph.

Strong domain authority counts for less than you’d think if the content itself carries no credibility signals:

  • Who wrote it?
  • Where did the data come from?
  • Is there anything here that couldn’t have been written by someone who’d never worked in this field?

A retrieval system evaluating an individual page doesn’t know your domain’s track record. The page must make the case for itself. 

Content-level E-E-A-T signals are particularly important in YMYL categories, where AI Overviews are selective about sources because the risk of misinformation is higher.

5. You’re targeting queries that don’t trigger AI Overviews

Before optimizing your content for AI engines, it’s worth checking whether your target queries trigger AI Overviews at all. As of late 2025, AI Overviews appear in 16% of search results, though that figure isn’t evenly distributed across query types.

Transactional queries, navigational searches, branded queries, and highly local searches are far less likely to trigger an AI Overview. If most of your traffic comes from commercial or transactional keywords, the lack of AI Overview citation may not be a content problem. It may simply be that those query types are less likely to generate overviews in the first place.

What the data tells us about the impact of this shift

The stakes are significant. Research by Seer Interactive shows that organic click-through rates (CTRs) for informational queries that displayed AI Overviews dropped 61%, from 1.76% to 0.61%, between June 2024 and September 2025. Paid CTR fell even further, from 19.7% to 6.34%.

But the same research reveals a critical asymmetry: Brands cited in AI Overviews saw 35% higher organic CTR and 91% higher paid CTR than when they weren’t cited. A citation in an AI Overview doesn’t just protect you from a CTR decline. It actively amplifies your visibility.

The Pew Research Center’s study of searches by U.S. adults in March 2025 found that only 8% of users who encountered an AI Overview clicked a traditional search result, compared to 15% who clicked when no overview appeared. And 26% of searches with AI Overviews resulted in no clicks at all.

If AI Overviews appear for your most valuable queries and you aren’t cited, you aren’t just missing out on the overview. You’re losing clicks you previously received from the organic listing underneath it.

How to optimize for retrieval, not just rankings

These trends require you to adjust how you think about content structure and intent. Here’s where to focus:

  • Rewrite your introductions: Your first paragraph should directly and completely answer the primary question of the page. Save context and elaboration for later sections. Write as if the first 100 words of your page represent a standalone answer.
  • Restructure your headings: Each heading should be a question or a complete, specific claim. The following section should fully answer or support that heading without requiring the reader to review previous sections. Think of each section as a self-contained answer unit.
  • Add explicit expertise signals: Include author attribution with credentials, first-person experience language, original data, and links to primary sources and original research. These signals matter at the content level, not just at the domain level.
  • Audit your query triggers: Manually test your target queries in Google to see which ones actually generate AI Overviews. For those that do, study how the cited sources are structured, the length of the cited sections, and the format of the answer. Use that as your editorial brief.
  • Expand your topical coverage: AI Overviews favor sources that demonstrate breadth of knowledge across a topic, not just single-page depth. Focus on answering several related questions well instead of building one exceptional page surrounded by thin content.

Dig deeper: Want to beat AI Overviews? Produce unmistakably human content

How to shift your SEO approach

AI Overviews represent something that has been discussed for years but that few have truly prepared for: the separation of content quality from ranking signals.

For two decades, we used rankings as a proxy for quality. High-ranking content was, by definition, good enough.

But that assumption no longer holds. Ranking in traditional search indicates that your brand has authority and that your page is relevant to the search query. It says nothing about whether your content is structured in a way that AI retrieval systems can use.

Visibility now goes to whoever understands how AI systems identify, extract, and surface answers. A strong backlink profile won’t help you if the answer is buried on page three of a 4,000-word guide.

Ranking in the top 10 is still worth pursuing. But it’s no longer the whole game.

6 Google Ads mistakes that hurt ecommerce campaigns

Your paid social operation is on fire. You know how your audience thinks, the creative process is dialed in, and the results get better every year. Leadership greenlights an expansion to Google Ads — a new channel and, critically, a new source of revenue.

As it turns out, applying that same strategy really just buys you an express ticket to a very difficult conversation.

Google rewards a different kind of thinking. Intent signals and campaign logic are different, and the mistakes that eat at your budget don’t always make themselves clear. Brands that apply their existing Meta playbook often find themselves looking at shiny dashboards and dull balance sheets.

These six common mistakes tend to do the most damage before anyone realizes what’s happening. They’re what we see most often when ecommerce brands come to us after making the move to Google — and they can all be reversed.

Mistake 1: Treating Google like a retention channel

You can definitely use Google Ads to support retention and brand defense. The problem is when that becomes your whole strategy.

We see this regularly with brands new to the platform who launch directly into Performance Max. Early ROAS looks strong, and everyone’s happy. But a few months in, someone asks the right question: Are we actually growing, or paying to capture purchases that were going to happen anyway?

One client we worked with came to us with branded search and retargeting doing the heavy lifting inside PMax – essentially a tax on demand that had already been created elsewhere. Revenue flatlined because, while the ad spend was real, growth was not.

Net-new customer acquisition requires a different setup.

  • Shopping campaigns structured to surface products to people who have never heard of the brand.
  • Search campaigns built around non-branded, high-intent keywords.
  • Layered PMax configurations that keep the system from defaulting to the easiest conversions.

When Google has enormous reach into new audiences, treating it purely as a closing channel leaves most of that opportunity untouched.

Dig deeper: Ecommerce PPC: 4 takeaways that shape how campaigns perform

Mistake 2: Not knowing how to get the most out of Google’s core levers

Paid social experience transfers to Google in some ways, but there are four areas where we see the biggest knowledge gaps.

Search intent

Ads on social media are an interrupting moment. Ads in search engines meet people as they’re looking for something you offer. This changes so much about campaign structure, ad copy, and keyword targeting. 

Upper-funnel terms and lower-funnel terms require different approaches, bids, and landing pages. Collapsing them into a single campaign structure is one of the fastest ways to dilute intent and waste budget on traffic that was never going to convert.

Data feed optimization

For ecommerce brands running Shopping and retail Performance Max, the product feed is the foundation everything else is built on. Weak titles, missing attributes, and poor categorization limit how often your products show up and who sees them. 

Most brands (including Google-native ones) underinvest here because the work is unglamorous. But a well-optimized feed consistently outperforms one that’s neglected after setup.

Keyword research

Paid search is a keyword-driven channel, which makes keyword strategy its own discipline. Understand match types, search volume, commercial intent, and the relationship between what people type and what they actually want. This takes time to develop, but brands that skip this step usually over-restrict their reach or bleed spend on irrelevant traffic.

Landing pages

Sending high-intent but unfamiliar visitors straight to a product page on Google often underperforms. A more engaging landing page format, like an advertorial, puts that traffic in front of context and trust before asking for the sale. 

Brands coming from paid social often overlook this because the funnel architecture they’re used to doesn’t require it.

Dig deeper: 7 Google Ads search term filters to cut wasted spend

Mistake 3: Letting operational issues interrupt campaign momentum

Google’s algorithms need consistent data to make the best decisions for your account. But every time a campaign goes dark — for a day or a week — there’s a risk that the learning resets. What feels like a minor admin issue can mean weeks of degraded performance and wasted ad spend.

Two types of disruption come up more than any other.

  • Payments: Brands switching to invoice billing or changing card details mid-flight will sometimes see campaigns pause without realizing it until the damage is done. A lapsed payment that takes three days to resolve can cost far more than the bill itself once you factor in recovery time.
  • Tracking and feed integrity: A broken pixel means no conversion data and forces Smart Bidding to optimize blind. A feed error in Merchant Center means products disappear from Shopping and Performance Max. Neither failure is loud; both tend to surface slowly as declining performance that gets misattributed.

Both are preventable with automated alerts, weekly feed audits, and a person or AI agent responsible for monitoring account health between reporting cycles. The cost of that monitoring is low compared to what happens if you only discover issues after the fact.
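
The feed-integrity monitoring described above can start as something as simple as a daily product count. A minimal sketch, assuming a standard Google Shopping RSS feed (the drop threshold is a placeholder you would tune, and the alert would be wired to your own notifier):

```python
import xml.etree.ElementTree as ET

def count_feed_items(feed_xml):
    """Count <item> entries in a Google Shopping RSS feed."""
    root = ET.fromstring(feed_xml)
    return len(root.findall(".//item"))

def check_feed_health(feed_xml, previous_count, max_drop_pct=10):
    """Compare today's item count to yesterday's; return (ok, message).

    A sudden drop beyond max_drop_pct usually signals a broken export,
    not a real catalog change, and should trigger an alert.
    """
    current = count_feed_items(feed_xml)
    if previous_count and current < previous_count * (1 - max_drop_pct / 100):
        return False, f"Feed dropped from {previous_count} to {current} items"
    return True, f"Feed OK: {current} items"
```

Run it on a schedule against yesterday’s stored count, and a three-day payment or feed outage becomes a same-day alert instead.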

Mistake 4: Building a campaign structure that’s too granular

The instinct among detail-oriented advertisers is to segment everything because, on the surface, it feels like control.

  • One campaign per product category.
  • One ad group per keyword.
  • Separate budgets for every audience.

But Google’s automation needs data to make good decisions. When you spread your budget across too many campaigns, each one operates on thin resources and even thinner information. Smart Bidding can’t optimize effectively without sufficient conversion volume, so campaigns stuck below that threshold tend to underperform and stay there.

By over-segmenting, you’ve created the appearance of precision while actually limiting the system’s ability to learn.

The same logic applies to budget. Ten campaigns with a modest shared budget will almost always produce worse results than three well-funded ones. Google needs room to test, adjust, and find the traffic worth paying for. Fragmented budgets don’t allow it to do that.

Build a tighter structure with fewer campaigns, clearly defined goals, and enough budget to compete. This gives the algorithm what it needs while keeping the account manageable enough to oversee effectively.

Dig deeper: How to find and fix the root cause of low conversions

Mistake 5: Leaving campaigns on Max Conversion Value with no ROAS targets

Max Conversion Value is a Smart Bidding strategy that tells Google to spend your budget in whatever way generates the highest total conversion amount – no ceiling, no floor, no efficiency guardrail. Left unsupervised, it will find conversions, but won’t care what it costs to get them.

For brands new to Google Ads, this setting can trick you into thinking you’re crushing it. Conversion value climbs, making the account appear healthy. The problem surfaces when you look at what you actually spent to generate that value.

Without a target ROAS, Google has no efficiency constraint and optimizes for volume, not profitability. But the fix is straightforward.

  • Once you have enough conversion data, set a realistic target.
  • A ROAS goal gives the algorithm a constraint, and shifts the objective from spending budget to spending it well.
  • Targets set too aggressively too early can starve campaigns of traffic before they’ve had a chance to learn.
  • Exercise patience, and a willingness to adjust gradually rather than chasing the ideal number from day one.
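
One common rule of thumb for picking that first target (this is a heuristic, not an official Google formula) is to start 10-20% below your recent realized ROAS so the algorithm keeps its traffic volume while you tighten gradually:

```python
def starting_troas(conversion_value, ad_spend, cushion=0.85):
    """Suggest an initial target ROAS from recent performance.

    Starting below realized ROAS (cushion of roughly 0.8-0.9) gives
    Smart Bidding room to keep volume while you tighten over time.
    """
    realized = conversion_value / ad_spend
    return round(realized * cushion, 2)

# Example: $42,000 conversion value on $10,000 spend is a realized
# ROAS of 4.2, so a cushioned starting target lands around 3.57.
```

From there, raise the target in small steps every few weeks rather than jumping straight to your ideal number.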

Dig deeper: How each Google Ads bid strategy influences campaign success

Mistake 6: Underfunding campaigns and keeping them stuck in learning

When you launch a Google campaign or make a significant change (like doubling the budget), it enters a new learning period. This is the window for gathering data, testing different auctions, and calibrating toward the conversion patterns you’ve defined.

It’s a normal part of how the platform works, and every campaign goes through it.

But the learning period requires a minimum volume of conversions to complete. Google typically needs around 30-50 conversion events in a short window before bidding stabilizes. A campaign that’s underfunded for this milestone will stay in learning indefinitely.

It’s a common trap for brands being cautious when testing Google.

  • You run your first campaign on a small budget.
  • CPAs are inflated, and data is inconclusive, so you don’t invest more or cut it entirely.
  • In reality, the campaign never had what it needed to graduate out of the learning phase.
  • You walk away from net new revenue before you’ve even scratched the surface.

Funding a new campaign adequately from the start — even if it means consolidating into fewer campaigns and chasing fewer goals — gives it the best chance of learning fast and delivering accurate results sooner.
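
You can estimate the minimum budget implied by that conversion threshold with simple arithmetic. The 30-50 figure comes from the text above; your expected CPA is an input you supply:

```python
def min_test_budget(expected_cpa, conversions_needed=40, days=30):
    """Estimate the monthly budget needed to exit the learning phase.

    Uses the midpoint of the ~30-50 conversions Google typically needs,
    spread over a month.
    """
    total = expected_cpa * conversions_needed
    return {"monthly_budget": total, "daily_budget": round(total / days, 2)}

# Example: at a $40 expected CPA, ~40 conversions implies ~$1,600/month
# (about $53/day) just to give the campaign enough signal to stabilize.
```

If that number is more than you’re willing to commit, consolidating into fewer campaigns is usually a better answer than launching underfunded ones.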

Adding Google to the mix is the right call: Here’s what to do next

Diversifying away from a single ad platform is one of the smartest moves an ecommerce brand can make once it’s mature enough to fight on two fronts. It frees growth from dependence on one platform’s algorithm changes, auction dynamics, seasonality, and terms of service.

Adding Google to Meta also gives you access to a different kind of demand that is actively expressed rather than passively targeted, which is a meaningful advantage worth building on.

These six mistakes are not reasons you should avoid Google, but a preventative guide to help you approach it with realistic expectations and enough patience to let the system learn. Treating it like a direct analog of what you’re already doing on Meta will make you leave before seeing what’s truly possible.

If you’re still in the early stages of making this move, my guide on how to expand from Meta Ads into Google Ads is a practical place to start. If you’ve seen early success and are now looking for the next layer of optimization, find out how to avoid getting sucked into Google’s many automation traps.

Google adds channel performance timeline view to PMax campaigns

Google launched a channel performance timeline view in Performance Max. It gives you a clearer breakdown of how Search, YouTube, Display, and other channels contribute to campaign results over time.

What’s new. A timeline graph shows channel-level contributions over a selected period, paired with investment and performance filters. You can quickly see which channels are pulling their weight — and which aren’t.

In the accompanying screenshot, a yellow box highlights the channel performance evolution over time, and a pink box (right) highlights the All Ads, Ads Using Product Lists, and Ads Using Video filters.

Why we care. Performance Max campaigns run across multiple channels at once, making it difficult to see where your budget is most effective. This gives you a timeline view of channel-level contributions — so if YouTube is underperforming while Search drives most conversions, you can see it without digging through exports or relying on guesswork. You can spot channel-level trends earlier and adjust your asset strategy or budget accordingly.

The big picture. This view gives you a more actionable way to evaluate PMax performance without relying solely on Google’s automated decisions.

Bottom line. It’s not full transparency, but it’s a meaningful step in the right direction. You get a cleaner way to spot PMax trend anomalies early and adjust accordingly.

First spotted. This update was first spotted by Axel Falck, Head of Search at Le Mage du SEA, who shared it on LinkedIn.

Build your own AI search visibility tracker for under $100/month

Tracking your brand’s visibility in AI-powered search is the new frontier of SEO. The tools built to do this are expensive, often starting at $300 to $500 per month and quickly rising from there. For many, that price is a nonstarter, especially when custom testing needs go beyond what off-the-shelf software can handle.

I faced this exact problem. I needed a specific tool, and it didn’t exist at a price I could afford, so I decided to build it myself. I’m not a developer. I spent a weekend talking to an AI agent in plain English, and the result was a working AI search visibility tracker that does exactly what I need.

Below is the guide I wish I’d had when I started: a step-by-step playbook for building your own custom tool, covering the technology, the process, what broke, and how to get it right faster.

The problem: A custom tool for a complex landscape

My goal was to automate an AI engine optimization (AEO) testing protocol. This wasn’t just about checking one or two models. To get a full picture of AI-driven brand visibility, I knew from the start that we had to track five distinct, critical surfaces:

  • ChatGPT (via API): The most well-known conversational AI.
  • Claude (via API): A major competitor with a different response style.
  • Gemini (via API): Google’s direct, developer-facing model.
  • Google AI Mode: Google’s AI search experience, which uses Gemini 3 for advanced reasoning and multimodal understanding.
  • Google AI Overviews: The summary boxes that appear at the very top of the SERP for many queries, which by late 2025 were appearing in nearly 16% of all Google searches.

On top of that, I needed to score the results using a custom 5-point rubric: brand name inclusion, accuracy, correctness of pricing, actionability, and quality of citations. No existing SaaS tool offered this exact combination of surfaces and custom scoring. The only path forward was to build.
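
For context, here is how that 5-point rubric might look in code. This is an illustrative sketch, not the author’s actual implementation; the boolean checks are crude placeholders for whatever logic (or LLM-graded evaluation) you use for each criterion:

```python
def score_response(response_text, brand, expected_price=None,
                   min_citations=1, citations=None):
    """Score one AI response against a 5-point visibility rubric.

    Each criterion contributes one point: brand inclusion, accuracy,
    pricing correctness, actionability, and citation quality.
    """
    citations = citations or []
    text = response_text.lower()
    rubric = {
        "brand_mentioned": brand.lower() in text,
        "accurate": "incorrect" not in text,  # placeholder accuracy check
        "pricing_correct": expected_price is None
                           or expected_price in response_text,
        "actionable": any(w in text for w in ("step", "how to", "you can")),
        "citations_ok": len(citations) >= min_citations,
    }
    return {"score": sum(rubric.values()), "max": 5, "detail": rubric}
```

Keeping each criterion as a named key makes it easy to report not just a score but which dimension a surface failed on.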

Here are a few screenshots of the internal tool as it stands. You can see some of my frustration in the agent chat window.

Screenshots of the vibe-coded AI visibility tracking tool: the Dashboard, Test runs, and Analytics views.

The method: Using vibe coding to build the tool

This project was built using vibe coding, a way of turning natural language instructions into a working application with an AI agent. You focus on the goal, the “vibe,” and the AI handles the complex code.

This isn’t a fringe concept. With 84% of developers now using AI coding tools and a quarter of Y Combinator’s Winter 2025 startups being built with 95% AI-generated code, this method has become a viable way for non-developers to create powerful internal tools.

Dig deeper: How vibe coding is changing search marketing workflows

Your tech stack: The three tools you’ll need

You can replicate this entire project with just three things, keeping your monthly cost under $100.

Replit Agent

This is a development environment that lives entirely in your web browser. Its AI agent lets you build and deploy applications just by describing what you want. You don’t need to install anything on your computer. The plan I used costs $20/month.

DataForSEO APIs

This was the backbone of the project. Their APIs let you pull data from all the different AI surfaces through a single, unified system. 

You can get responses from models like ChatGPT and Claude, and pull the specific results from Google’s AI Mode and AI Overviews. It has pay-as-you-go pricing, so you only pay for what you use.
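
To give a flavor of what these calls look like, here is a minimal sketch of querying one endpoint. The endpoint path and payload shape follow DataForSEO’s Live SERP API as I understand it; verify field names and location codes against their current documentation before relying on this:

```python
import base64
import json
import urllib.request

def build_serp_task(keyword, location_code=2840, language_code="en"):
    """Build one task object for a DataForSEO Live SERP request.

    location_code 2840 is the United States in DataForSEO's location
    list (verify in their docs). The API accepts an array of tasks.
    """
    return {"keyword": keyword,
            "location_code": location_code,
            "language_code": language_code}

def fetch_serp(login, password, tasks):
    """POST tasks to the Live SERP endpoint using HTTP Basic auth."""
    url = "https://api.dataforseo.com/v3/serp/google/organic/live/advanced"
    creds = base64.b64encode(f"{login}:{password}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps(tasks).encode(),
        headers={"Authorization": f"Basic {creds}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In the response, look for result items whose type indicates an AI Overview to see whether, and where, your brand is cited.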

Direct LLM APIs (optional but recommended)

I also set up direct connections to the APIs for OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini). This was useful for double-checking results and debugging when something seemed off.

The playbook: A step-by-step guide to building your tool

Building with an AI agent is a partnership. The AI will only do what you ask, so your job is to be a clear and effective guide.

Here’s a repeatable framework that will help you avoid the biggest mistakes.

Step 1: Write a requirements document first

Before you even open Replit, create a simple text document that outlines exactly what you need. This is your blueprint. Include:

  • The core problem you’re solving.
  • Every feature you want (e.g., CSV upload, custom scoring, data export).
  • The data you’ll put in, and the reports you want out.
  • Any APIs you know you’ll need to connect to.

Start your conversation with the AI agent by uploading this document. It will serve as the foundation for the entire build.

Step 2: Ask the AI, ‘What am I missing?’

This is the most important step. After you provide your requirements, the AI has context. Now, ask it to find the blind spots. Use these exact questions:

  • “What am I not accounting for in this plan?”
  • “What technical issues should I know about?”
  • “How should data be stored so my results don’t disappear?”

That last question is critical. I didn’t ask it, and I lost a whole batch of test results because the agent hadn’t built a database to save them.

Step 3: Build one feature at a time and test it

Don’t ask the AI to build everything at once. Give it one small task, like “build a screen where I can upload a CSV file of prompts.” 

Once the agent says it’s done, test that single feature. Does it work? Great. Now move to the next one. 

This incremental approach makes it much easier to find and fix problems.

Dig deeper: How to vibe-code an SEO tool without losing control of your LLM

Step 4: Point the agent to the documentation

When it’s time to connect to an API like DataForSEO, don’t assume the AI knows how it works. Find the API documentation page for what you’re trying to do, and give the URL directly to the agent. 

A simple instruction like, “Read the documentation at this URL to implement the authentication,” will save you hours of frustration. My first attempt at connecting failed because the agent guessed the wrong method.

Step 5: Save working versions

Before you ask for a major new feature, save a copy of your project. In Replit, this is called “forking.” New features can sometimes break old ones. 

I learned this when the agent was working on my results table, and it accidentally broke the CSV upload feature that had been working perfectly. Having a saved version makes it easy to go back and see what changed.

Dig deeper: Inspiring examples of responsible and realistic vibe coding for SEO

What will break: A field guide to common problems

Nearly everything will break at some point. That’s part of the process. Here are the most common issues I ran into, and the lessons I learned, so you can be prepared.

  • 1. API authentication fails: The agent will often try a generic method. Fix: Give the agent the exact URL to the API’s authentication documentation.
  • 2. Results disappear: The agent may not build a database by default, storing data in temporary memory instead. Fix: In your first step, ask the agent to include a database for persistent storage.
  • 3. API responses don’t show up: You might see data in your API provider’s dashboard, but it’s missing in your app. This is usually a parsing error. Fix: Copy the raw JSON response from your API provider, and paste it into the chat. Say, “The app isn’t displaying this data. Find the error in the parsing logic.”
  • 4. Model responses are cut short: An LLM like Claude might suddenly start giving one-word answers. This often means the token limit was accidentally changed. Fix: After any update, run a quick test on all your connected AI surfaces to ensure the basic parameters haven’t changed.
  • 5. API results don’t match the public version: ChatGPT’s public website provides web citations, but the API might not. Fix: Realize that APIs often have different default settings. You may need to explicitly tell the agent to enable features like web search for the API call.
  • 6. Citation URLs are unusable: Gemini’s API returned long, encoded redirect links instead of the final source URLs. Fix: Inspect the raw data. You may need to ask the agent to build a post-processing step, like a redirect resolver, to clean up the data.
  • 7. Your app isn’t updated: You build a great new feature, but it doesn’t seem to be working in the live app. Fix: Understand the difference between your development environment and your production app. You need to explicitly “publish” or “deploy” your changes to make them live.
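
The redirect resolver mentioned above can be a few lines. This sketch uses Python’s standard library; the host check is a heuristic based on the redirect domains I encountered, so adjust it to whatever your API actually returns:

```python
import urllib.request

REDIRECT_HOSTS = ("vertexaisearch.cloud.google.com",)  # hosts seen in my data

def is_redirect_url(url):
    """Heuristic: does this citation URL point at a known redirect host?"""
    return any(host in url for host in REDIRECT_HOSTS)

def resolve_citation(url, timeout=10):
    """Follow redirects and return the final destination URL.

    urllib follows HTTP redirects automatically; geturl() reports where
    the chain ended. Falls back to the original URL on any error.
    """
    if not is_redirect_url(url):
        return url
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.geturl()
    except Exception:
        return url
```

Running citation lists through a post-processing step like this keeps your stored data analyzable instead of a wall of opaque redirect links.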

The real costs: Is it worth it?

Building this tool saved me a significant amount of money. Here’s a simple cost comparison against a mid-tier SaaS tool.

Item | DIY tool (my project) | SaaS alternative
Software subscription | ~$20/month (Replit) | $500/month
API usage | ~$60/month (variable) | Included
Total monthly cost | ~$80/month | $500/month

The biggest cost is your time. I spent a weekend and several evenings building the first version. However, I now have an asset that I can modify and reuse for any client without my costs increasing. 

The hidden costs are real: there’s no customer support, and you are responsible for maintenance. But for many, the savings and customization are worth it.

Dig deeper: AI agents in SEO: A practical workflow walkthrough

Should you build your own tool?

This approach isn’t for everyone. Here’s a simple guide to help you decide.

Build your own if:

  • You need a custom testing method that no SaaS tool offers.
  • You want a white-labeled tool for your agency.
  • Your budget is tight, but you have the time to invest in the process.

Stick with a SaaS tool if:

  • Your time is more valuable than the monthly subscription fee.
  • You need enterprise-level security and dedicated support.
  • Standard, off-the-shelf features are good enough for your needs.

For many SEOs, the answer is clear. The ability to build a tool that works exactly the way you do, for less than $100 a month, is a game-changer. 

The process will be frustrating at times, but you will end up with something that gives you a unique advantage. The era of the practitioner-developer is here. It’s time to start building.

Google Ads experiments now auto-apply results by default

Google Ads added an auto-apply setting to experiments. It’s on by default, so winning variants can go live without review.

How it works. You choose directional results (default) or statistical significance at 80%, 85%, or 95% confidence. One safeguard: if your chosen success metric performs significantly worse in the test arm, the change won’t auto-apply.

Why we care. Experiments are one of the most powerful tools in your account. Automating apply can speed testing, but removes a checkpoint where you catch unintended consequences before they hit live campaigns.

The catch. Experiments allow only two success metrics. A third metric you care about — one you didn’t or couldn’t select — can decline unnoticed. Guardrails protect what you told Google to watch, not everything that matters.

Bottom line. Auto-apply is a reasonable shortcut for simple tests. For anything consequential, keep manual review. Run the experiment, reach significance, then review full data before you apply changes.

First seen. Google Ads specialist Bob Meijer shared this update on LinkedIn.

Bing tests larger sponsored product carousel in shopping results

Bing appears to be testing an expanded sponsored products section in its shopping results, featuring a double-row carousel that takes up significantly more space than the current format.

The test. The format pairs a large, double-row sponsored carousel with organic cards from individual sites below.

Why we care. If this rolls out broadly, it means more screen space for sponsored products — typically leading to higher visibility and more clicks if you run Microsoft Shopping campaigns. The double-row carousel is also more visually competitive, bringing Bing’s shopping ads closer to Google Shopping’s prominence.

The catch. The test appears limited — not all users see it. Search industry veteran Mordy Oberstein reported a more compact layout, suggesting Bing is still in early testing.

Bottom line. Bing runs many SERP experiments that never fully launch, so watch this one for now. If you run Microsoft Shopping campaigns, monitor impressions for any lift if the format expands.

First spotted. Sachin Patel shared a screenshot of the test on X.

SEO leads martech replacements, but not for the reason you think

SEO tools were the most replaced martech application in 2025 — but not for the reason you might expect.

According to the 2025 MarTech Replacement Survey, SEO platforms topped the list of replaced tools for the first time, overtaking categories like marketing automation platforms (MAPs), which had led for the past five years.

At first glance, that might suggest instability in SEO. After all, the discipline is being reshaped by LLMs, AI-generated answers, and the rise of zero-click search experiences — all of which challenge traditional keyword tracking and ranking-based workflows.

But the data tells a more nuanced story.

SEO tools: most replaced, but stabilizing

Even though SEO tools were the most replaced category in 2025, they were replaced at a slower rate than in prior years.

In other words, they’re now the most commonly replaced — but also more stable than before.

That shift suggests a maturing category. Rather than widespread churn, marketers appear to be consolidating, upgrading, or refining their SEO stacks as search evolves.

Meanwhile, several other major martech categories saw sharper year-over-year declines in replacements:

  • CRM replacements fell more than 12% from 2024 to 2025, reaching their lowest level in the survey’s history.
  • MAPs, email platforms, and CMS tools also declined compared to 2024.

Why SEO tools are being replaced

So if SEO tools aren’t being swapped out due to instability, what’s driving the changes?

The survey points to three primary factors:

1. AI capabilities

For the first time, the survey asked about AI’s role in replacement decisions — and the impact was significant.

  • 37.1% cited AI capabilities as an important factor.
  • 33.9% said they wanted AI capabilities when replacing a tool.

This reflects a broader shift in SEO tooling, with platforms rapidly integrating AI for:

  • Content generation and optimization.
  • SERP analysis and intent modeling.
  • Workflow automation.

In many cases, replacing an SEO tool isn’t about abandoning SEO — it’s about upgrading to AI-native capabilities.

2. Cost pressures

Cost has become a major driver of martech replacement decisions, including SEO tools:

  • 43.8% of marketers cited cost reduction as a reason for replacing an application in 2025.
  • That’s up sharply from 23% in 2024 and 22% in 2023.

This suggests growing pressure on marketers to optimize and rationalize their SEO tech stacks, especially as they evaluate overlapping functionality across tools.

3. Changing needs in a shifting search landscape

As search behavior changes, so do expectations for SEO platforms.

Traditional rank tracking and keyword monitoring are no longer sufficient on their own. Teams are increasingly looking for tools that can:

  • Surface insights across AI-driven SERPs.
  • Track visibility beyond clicks.
  • Integrate with broader marketing and data systems.

That evolution is likely contributing to replacement activity — even as overall stability increases.

AI is reviving custom-built SEO tools

One of the more notable trends in the 2025 survey is the resurgence of homegrown solutions, including for SEO workflows.

Replacing commercial martech tools with homegrown applications accounted for:

  • 8.1% of replacements in 2025
  • Up from 3.4% in 2024 and 5% in 2023

This marks a meaningful shift after years of near-total reliance on commercial platforms.

“AI-assisted coding is changing the calculus of build vs. buy,” said martech analyst Scott Brinker. “It’s easier and faster to build than ever before. Companies should still buy applications where they have no comparative advantage. But in cases where they can tailor capabilities to differentiate their operations or customer experience, custom-built software is an increasingly attractive option.”

For SEO teams, this could mean more organizations building:

  • Custom data pipelines.
  • Proprietary SERP tracking systems.
  • AI-driven analysis tools tailored to their specific needs.

Other martech categories show even greater stability

While SEO tools led in total replacements, the broader martech landscape is becoming more stable.

Several major categories saw declining replacement rates in 2025, including:

  • CRM platforms (down more than 12% year over year)
  • Marketing automation platforms
  • Email distribution tools
  • Content management systems

This suggests that many organizations are settling into core systems while selectively updating areas — like SEO — that are changing faster.

Methodology

Invitations to take the 2025 MarTech Replacement Survey were distributed via email, website, and social media in Q4 2025.

A total of 207 marketers responded. Findings are based on the 154 respondents (74%) who said they had replaced a martech application in the previous 12 months.

Download the 2025 MarTech Replacement Survey, no registration required.
