
Yesterday — 3 March 2026 | Search Engine Land

Google Ads API enforces daily minimum budget for Demand Gen campaigns

3 March 2026 at 21:47
In Google Ads automation, everything is a signal in 2026

Google will begin enforcing a minimum daily budget for Demand Gen campaigns starting April 1, 2026.

What’s happening: The Google Ads API will require a minimum daily budget of $5 USD (or local equivalent) for all Demand Gen campaigns. The change is designed to help campaigns move through the “cold start” phase with enough spend for Google’s models to learn and optimize effectively. The update will roll out as an unversioned API change, applying across all buying paths.

Technical details:

  • In API v21 and above, campaigns set below the threshold will trigger a BUDGET_BELOW_DAILY_MINIMUM error, with additional details available in the error metadata.
  • In API v20, advertisers will receive a generic UNKNOWN error, with the specific validation failure referenced in the unpublished error code field.

The rule applies when modifying budgets, start dates, or end dates in ways that push daily spend below the $5 floor — covering both daily and flighted budgets.
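Teams that automate budget edits can guard against the new floor before the API rejects a mutation. Here is a minimal pre-flight sketch in Python, assuming the API's usual micros convention (1,000,000 micros = one currency unit); the function name and the flighted-budget handling are illustrative, not part of the API:

```python
# Pre-flight check (sketch): validate a Demand Gen budget change locally
# before sending it to the Google Ads API. The $5 floor and the micros
# convention are from the announcement; everything else is illustrative.

DAILY_MINIMUM_MICROS = 5_000_000  # $5.00 USD expressed in micros


def validate_demand_gen_budget(amount_micros: int, flight_days: int = 1) -> None:
    """Raise if effective daily spend falls below the $5 floor.

    For flighted budgets, the total is spread across the flight window,
    so the per-day figure is what matters (an assumption in this sketch).
    """
    daily_micros = amount_micros / flight_days
    if daily_micros < DAILY_MINIMUM_MICROS:
        raise ValueError(
            f"BUDGET_BELOW_DAILY_MINIMUM: ${daily_micros / 1_000_000:.2f}/day "
            f"is below the ${DAILY_MINIMUM_MICROS / 1_000_000:.2f} minimum"
        )


validate_demand_gen_budget(7_000_000)             # $7/day: passes
validate_demand_gen_budget(70_000_000, flight_days=7)  # $10/day: passes
# validate_demand_gen_budget(3_000_000)           # $3/day: would raise
```

Running a check like this before every mutation means the automation fails fast with a clear message instead of surfacing a generic UNKNOWN error from older API versions.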

Impact on existing campaigns. Current Demand Gen campaigns running below the minimum will continue serving. However, any future edits to budgets or scheduling will require compliance with the new floor.

Why we care. For advertisers and developers, this adds a new compliance layer to campaign management workflows. Systems will need updating to catch and handle the new validation errors before deployment.

The bottom line. Google is standardizing a minimum investment threshold for Demand Gen — prioritizing performance stability, while requiring advertisers to adjust budgets and automation accordingly.

The AI engine pipeline: 10 gates that decide whether you win the recommendation

3 March 2026 at 20:00

AI recommendations are inconsistent for some brands and reliable for others because of cascading confidence: entity trust that accumulates or decays at every stage of an algorithmic pipeline.

Addressing that reality requires a discipline that spans the full algorithmic trinity through assistive agent optimization (AAO). It also demands three structural shifts: the funnel moves inside the agent, the push layer returns, and the web index loses its monopoly.

The mechanics behind that shift sit inside the AI engine pipeline. Here’s how it works.

The AI engine pipeline: 10 gates and a feedback loop

Every piece of digital content passes through 10 gates before it becomes an AI recommendation. I call this the AI engine pipeline, DSCRI-ARGDW, which stands for:

  • Discovered: The bot finds you exist.
  • Selected: The bot decides you’re worth fetching.
  • Crawled: The bot retrieves your content.
  • Rendered: The bot translates what it fetched into what it can read.
  • Indexed: The algorithm commits your content to memory.
  • Annotated: The algorithm classifies what your content means across dozens of dimensions.
  • Recruited: The algorithm pulls your content to use.
  • Grounded: The engine verifies your content against other sources.
  • Displayed: The engine presents you to the user.
  • Won: The engine gives you the perfect click at the zero-sum moment in AI.

After “won” comes an 11th gate that belongs to the brand, not the engine: served. What happens after the decision feeds back into the AI engine pipeline as entity confidence, making the next cycle stronger or weaker.

DSCRI is absolute. Are you creating a friction-free path for the bots?

ARGDW is relative. How do you compare to your competition? Are you creating a situation in which you’re relatively more “tasty” to the algorithms?

Cascading confidence is multiplicative

Both sides of the AI engine pipeline are sequential. Each gate feeds the next.

Content entering DSCRI through the traditional pull path passes through every gate. Content entering through structured feeds or direct data push can skip some or all of the infrastructure gates entirely, arriving at the competitive phase with minimal attenuation.

Skipped gates are a huge win, so take that option wherever and whenever you can. You “jump the queue” and start at a later stage without the degraded confidence of the previous ones. That changes the economics of the entire pipeline, and I’ll come back to why.

Why the four-step model falls short

The four-step model the SEO industry inherited from 1998 — crawl, index, rank, display — collapses five distinct infrastructure processes into “crawl and index” and five distinct competitive processes into “rank and display.”

It might feel like I’m overcomplicating this, but I’m not. Each gate has nuance that merits its standalone position. If you have empathy for the bots, algorithms, and engines, remove friction, and make the content digestible, they’ll move you through each gate cleanly and without losing speed.

Each gate is an opportunity to fail, and each point of potential failure needs a different diagnosis. The industry has been optimizing a four-room house when it lives in a 10-room building, and the rooms it never enters are the ones where the pipes leak the worst.

Most SEO advice operates at the selection, crawling, and rendering gates. Most GEO advice operates at “displayed” and “won,” which is why I’m not a fan of the term. 

Most teams aren’t yet working on annotation and recruitment, which are actually where the biggest structural advantages are created.

Three audiences you need to cater to and three acts you need to master

The AI engine pipeline has an entry condition — discovery — and nine processing gates organized in three acts of three, each with a different primary audience.

Act I: Retrieval (selection, crawling, rendering)

  • The primary audience is the bot, and the optimization objective is frictionless accessibility.

Act II: Storage (indexing, annotation, recruitment)

  • The primary audience is the algorithm, and the optimization objective is being worth remembering: verifiably relevant, confidently annotated, and worth recruiting over the competition.

Act III: Execution (grounding, display, won)

  • The primary audience is the engine and, by extension, the person using the engine, where the optimization objective is being convincing enough that the engine chooses and the person acts.

Frictionless for bots, worth remembering for algorithms, and convincing for people. Content must pass every machine gate and still persuade a human at the end.

The audiences are nested, not parallel. Content can only reach the algorithm through the bot and can only reach the person through the algorithm. You can have the most impeccable expertise and authority credentials in the world. If the bot can’t process your page cleanly, the algorithm will never see it.

This is the nested audience model: bot, then algorithm, then person. Every optimization strategy should start by identifying which audience it serves and whether the upstream audiences are already satisfied.

Discovery: The system learns you exist

Discovery is binary. Either the system has encountered your URL or it hasn’t. Fabrice Canel, principal program manager at Microsoft responsible for Bing’s crawling infrastructure, confirmed:

  • “You want to be in control of your SEO. You want to be in control of a crawler. And IndexNow, with sitemaps, enable this control.”

The entity home website, the canonical web property you control, is the primary discovery anchor. The system doesn’t just ask, “Does this URL exist?” It asks, “Does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.

The push layer — IndexNow, MCP, structured feeds — changes the economics of this gate entirely. A later piece in this series is dedicated to what changes when you stop waiting to be found.
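For reference, an IndexNow submission is nothing more than a small JSON POST. Here is a minimal sketch following the published IndexNow protocol; the domain, key, and URLs are placeholders, and the key file must actually be hosted at the keyLocation URL for the push to be accepted:

```python
import json

# Build the JSON body for a batched IndexNow push (sketch). Endpoint and
# field names follow the IndexNow protocol; values are placeholders.

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"


def build_indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Serialize the submission body for a batch of changed URLs."""
    return json.dumps({
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    })


payload = build_indexnow_payload(
    "example.com",
    "0123456789abcdef",
    ["https://example.com/new-page", "https://example.com/updated-page"],
)
# POST `payload` to INDEXNOW_ENDPOINT with Content-Type: application/json
```

The point is the control Canel describes: instead of waiting for a crawler to stumble across a URL, you notify participating engines the moment content changes.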

Act I: The bot decides whether to fetch your content

Selection: The system decides whether your content is worth crawling

Not everything that’s discovered gets crawled. The system makes a triage decision based on countless signals, including entity authority, freshness, crawl budget, perceived value, and predicted cost.

Selection is where entity confidence first translates into a concrete pipeline advantage. The system already has an opinion about you before it crawls a single page. That opinion determines how many of your pages it bothers to look at.

Crawling: The bot arrives and fetches your content

Every technical SEO understands this gate. Server response time, robots.txt, redirect chains. Foundational, but not differentiating.

What most practitioners miss is that the bot doesn’t arrive in a vacuum. Canel confirmed that context from the referring page can be carried forward during crawling. With highly relevant links, the bot carries more context than it would from a link on an unrelated directory.

Rendering: The bot builds the page the algorithm will see

This is where everything changes and where most teams aren’t yet paying attention. The bot executes JavaScript if it chooses to, builds the Document Object Model (DOM), and produces the full rendered page. 

But here’s a question you probably haven’t considered: how much of your published content does the bot actually see after this step? If bots don’t execute your code, your content is invisible. More subtly, if they can’t parse your DOM cleanly, that content loses significant value.

Google and Bing have extended a favor for years: they render JavaScript. Most AI agent bots don’t. If your content sits behind client-side rendering, a growing proportion of the systems that matter simply never see it.

Representatives from both Google and Bing have also discussed the efforts they make to interpret messy HTML. Here’s one way to look at it: search was built on favors, and those favors aren’t being offered by the new players in AI.

Importantly, content lost at rendering can’t be recovered at any downstream gate. Every annotation, grounding decision, and display outcome depends on what survives rendering. If rendering is your weakest gate, it’s your F on the report card. Everything downstream inherits that grade.

Act II: The algorithm decides whether your content is worth remembering

This is where most brands are losing out because most optimization advice doesn’t address the next two gates. And remember, if your content fails to pass any single gate, it’s no longer in the race.

Indexing: Where HTML stops being HTML

Rendering produces the full page as the bot sees it. Indexing then transforms that DOM into something the system can store. Two things happen here that the industry often misses:

  • The system strips the navigation, header, footer, and sidebar — elements that repeat across multiple pages on your site. These aren’t stored per page. The system’s primary goal is to identify the core content. This is why I’ve talked about the importance of semantic HTML5 for years. It matters at a mechanical level: <nav>, <header>, <footer>, <aside>, <main>, and <article> tell the system where to cut. Without semantic markup, it has to guess. Gary Illyes confirmed at BrightonSEO in 2017, possibly 2018, that this was one of the hardest problems they had at the time.
  • The system chunks and converts. The core content is broken into blocks or passages of text, images with associated text, video, and audio. Each chunk is transformed into a proprietary internal format. Illyes described the result as something like a folder with subfolders, each containing a typed chunk. The page becomes a hierarchical structure of typed content blocks.
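The cutting step can be illustrated with a toy extractor: semantic tags tell a parser exactly where core content begins and ends, so no guessing is required. This is a Python sketch of the principle, not how any engine actually implements the strip:

```python
from html.parser import HTMLParser

# Toy boilerplate stripper: keep text inside <main>/<article>, drop text
# inside <nav>, <header>, <footer>, and <aside>. Real systems are far
# more sophisticated; this only shows why semantic markup removes guesswork.

BOILERPLATE = {"nav", "header", "footer", "aside", "script", "style"}
CORE = {"main", "article"}


class CoreContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.core_depth = 0    # nesting inside <main>/<article>
        self.skip_depth = 0    # nesting inside boilerplate elements
        self.chunks: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in CORE:
            self.core_depth += 1
        elif tag in BOILERPLATE:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in CORE and self.core_depth:
            self.core_depth -= 1
        elif tag in BOILERPLATE and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.core_depth and not self.skip_depth and data.strip():
            self.chunks.append(data.strip())


page_html = """
<body>
  <nav>Home | Products | Blog</nav>
  <main><article><h1>Ceremonial swords</h1>
  <p>Our blades are non-sharpened.</p></article></main>
  <footer>© 2026</footer>
</body>
"""
extractor = CoreContentExtractor()
extractor.feed(page_html)
# extractor.chunks now holds only the core content, in document order
```

Without `<main>` and `<nav>` in the source, the same extractor would have to infer boundaries from layout heuristics, which is exactly the hard problem Illyes described.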

I call this conversion fidelity: how much semantic information survives the strip, chunk, convert, and store sequence. Rendering fidelity (Gate 3) measures whether the bot could consume your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away.

Both fidelity losses are irreversible, but they fail differently. Rendering fidelity fails when JavaScript doesn’t execute or content is too difficult for the bot to parse. Conversion fidelity fails when the system can’t identify which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships between elements don’t survive the format conversion.

Something we often overlook is that even after a successful crawl, indexing isn’t guaranteed. Content that passes through crawl and render may still not be indexed.

That might sound bad enough, but here’s a distinction that should concern you: indexing and annotation are separate processes. Content may be indexed but poorly annotated — stored in the system but semantically misclassified. Non-indexed content is invisible. Misannotated content actively confuses the system about who you are, which can be worse.

Annotation: Where entity confidence is built or broken

This is the gate most of the industry has yet to address.

Think of annotations as sticky notes on the indexed “folders” created at the indexing gate. Indexing algorithms add multiple annotations to every piece of content in the index.

I identified 24 annotation dimensions I felt confident sharing with Canel. When I asked him, his response was, “Oh, there is definitely more.” 

Those 24 dimensions were organized across five annotation layers: 

  • Gatekeepers (scope classification).
  • Core identity (semantic extraction).
  • Selection filters (content categorization).
  • Confidence multipliers (reliability assessment).
  • Extraction quality (usability evaluation).

There are certainly more layers, and each layer likely includes more dimensions than I’ve mapped. Hundreds, probably thousands. This is an open model. The community is invited to map the dimensions I’ve missed.

Annotation is where the system decides the facts: 

  • What your content is about.
  • Where it fits into the wider world.
  • How useful it is.
  • Which entity it belongs to.
  • What claims it makes.
  • How those claims relate to claims from other sources. 

Credibility signals — notability, experience, expertise, authority, trust, transparency — are evaluated here. Topical authority is assessed here, too, along with much more.

Annotation operates on what survives rendering and conversion. If critical information was lost at either gate, the annotation system is working with degraded raw material. It annotates what the annotation engine received, not what you originally published.

Canel confirmed a principle I suggested that should reshape how we think about this gate: “The bot tags without judging. Filtering happens at query time.” Annotation quality determines your eligibility for every downstream triage.

I have a full piece coming on annotation alone. For now, annotation is the gate where most brands silently lose and the one most worth working on.

Recruitment: Where the algorithmic trinity decides whether to absorb you

This is the first explicitly competitive gate. After annotation, the pipeline feeds into three systems simultaneously. 

  • Search engines recruit content for results pages (the document graph). 
  • Knowledge graphs recruit structured facts for entity representation (the entity graph). 
  • Large language models recruit patterns for training data and grounding retrieval (the concept graph).

Before recruitment, the system found, crawled, stored, and classified your content. At recruitment, it decides whether your content is worth keeping over alternatives that serve the same purpose.

Being recruited by all three elements of the algorithmic trinity gives you a disproportionate advantage at grounding because the grounding system can find you through multiple retrieval paths, and at display because there are multiple opportunities for visibility.

Recruitment is the structural advantage that separates brands with consistent AI visibility from brands that appear inconsistently.

Act III: The engine presents and the decision-maker commits

Grounding: Where AI checks its confidence in the content against real-time evidence

This is the gate that separates traditional search from AI recommendations.

Ihab Rizk, who works on Microsoft’s Clarity platform, described the grounding lifecycle this way:

  • The user asks a question. 
  • The LLM checks its internal confidence. If it’s insufficient, it sends cascading queries, multiple angles of intent designed to triangulate the answer, which many people call fan-out queries. 
  • Bots are dispatched to scrape selected pages in real time. 
  • The answer is generated from a combination of training data and fresh retrieval.

But grounding isn’t just search results, as many people believe. The other two technologies in the algorithmic trinity play a role.

The knowledge graph is used to ground facts. AI Overviews explicitly showed information grounded in the knowledge graph. It’s reasonable to assume specialized small language models are used to ground user-facing large language models.

The takeaway is that your content’s performance from discovery through recruitment determines whether your pages are in the candidate pool when grounding begins. If your content isn’t indexed, isn’t well annotated, or isn’t associated with a high-confidence entity, it won’t be in the retrieval set for any part of the trinity. The engine will ground its answer on someone else’s content instead.

You can’t optimize for grounding if your content never reaches the grounding stage.

Display: The output of the pipeline

Display is where most AI tracking tools operate. They measure what AI says about you. But by the time you’re measuring display, the decisions were already made upstream, from discovery through grounding.

Brands with high cascading confidence appear consistently. Brands with low cascading confidence appear intermittently, the same phenomenon Rand Fishkin demonstrated.

Display is where AI meets the user. It also covers the acquisition funnel, which is easy to understand and meaningful for marketers. This is where most businesses focus because it’s visible and sits just before the click. I’ll write a full article on that later in this series.

Won: The moment the decision-maker commits

Won is the terminal processing gate in the AI engine pipeline. Ten gates of processing, three acts of audience satisfaction, and it comes down to this: Did the system trust you enough to commit?

The accumulated confidence at this gate is called “won probability,” the system’s calculated likelihood that committing to you is the right decision. Three resolutions are possible, and they form a spectrum. To understand why that spectrum matters, you need to understand the 95/5 rule.

Professor John Dawes at the Ehrenberg-Bass Institute demonstrated that at any given moment, only about 5% of potential buyers are actively in-market. The other 95% aren’t ready to purchase. You sell to the 5%, but the real job of marketing is staying top of mind for the other 95% so that when they decide to move to purchase, on their schedule, not yours, you’re the brand they think of.

The three scenarios that follow show how AI takes over the job of being top of mind at the critical moment for the 95%. I call this top of algorithmic mind.

  • The imperfect click: The person browses a list of options, pogo-sticks between results, and decides. Traditional search and what Google called the zero moment of truth. The system doesn’t know who is ready. It shows everyone the same list and hopes. The 95/5 efficiency is low. You’re hitting and hoping, and so is the engine.
  • The perfect click: The AI recommends one solution and the person takes it. I call this the zero-sum moment in AI. This is where we are right now with assistive engines like ChatGPT, Perplexity, and AI Mode. The system has filtered for intent, context, and readiness. It presents one answer to a person moving from the 95% into the 5% with much higher precision.
  • The agential click: The agent commits, either after pausing for human approval, “Shall I book this?” or autonomously. The agent caught the moment of readiness, did the work, and closed it. Maximum precision. This is the ultimate solution to the 95/5 problem: AI catches the exact moment and acts.
The Won Spectrum

Search won’t disappear. Most people will always want to browse some of the time. Window shopping is fun, and emotionally charged decisions aren’t something people will always delegate.

The trajectory, however, moves from imperfect to perfect to agential. Brands need to optimize for all three outcomes on that spectrum, starting now. Optimizing for agents should already be part of your strategy, as should optimizing for assistive engines and search engines. AAO covers them all.

Search engines, AI assistive engines, and assistive agents are your untrained salesforce. Your job is to train them well enough that you’re top of algorithmic mind at the moment the 95% become the 5%, and the AI either:

  • Offers you as an option.
  • Recommends you as the best solution.
  • Actively makes the conversion for you.

Dig deeper: SEO in the age of AI: Becoming the trusted answer

Served: The pipeline remembers

After conversion, the brand takes over. You should optimize the post-won feedback gate. The processing pipeline, the DSCRI-ARGDW spine, gets you to the decision. Served sits outside that spine as the gate that closes the loop, turning the line into a circle.

Every “won” that produces a positive outcome strengthens the next cycle’s cascading confidence. Every “won” that produces a negative outcome weakens it. Ten gates get you to the decision. The 11th, served, determines whether the decision repeats and your advantage compounds.

This is where the business lives. Acquisition without retention is a leak, both directly and indirectly through the AI engine pipeline feedback loop.

Brands that engineer their post-won experience to generate positive evidence, reviews, repeat engagement, low return rates, and completion signals, build a flywheel. Brands that neglect post-won burn confidence with every cycle.

Diagnosing failure in the pipeline

The three acts — bot, algorithm, engine, or person — describe who you’re speaking to. The two phases describe what kind of test you’re taking.

  • Phase 1: Infrastructure, discovery through indexing
    • Absolute tests. You either pass or fail. A page that can’t be rendered doesn’t get partially indexed. Infrastructure gates are binary: pass or stall.
  • Phase 2: Competitive, annotation through won
    • Relative tests. Winning depends not just on how good your content is but on how good the competition is at the same gate.

The practical implication is infrastructure first, competitive second. If your content isn’t being found, rendered, or indexed correctly, fixing annotation quality is wasted effort. You’re decorating a room the building inspector hasn’t cleared.

In practice, brands tend to fail in three predictable ways.

  • Opportunity cost (Act I: Bot failures)
    • Your content isn’t in the system, so you have zero opportunity. Cheapest to fix, most expensive to ignore.
  • Competitive loss (Act II: Algorithm failures) 
    • Your content is in the system, but competitors’ content is preferred. The brand believes it’s doing everything right while AI systems consistently choose a competitor at recruitment, grounding, and display.
  • Conversion leak (Act III: Engine failures)
    • Your content is presented, but the system hedges or fumbles the recommendation. In short, you lose the sale.
The AI engine pipeline - DSCRI-ARGDW-Sv

Every gate you pass still costs you signal

In 2019, I published How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Illyes about how Google calculates ranking bids by multiplying individual factor scores. A zero on any factor kills the entire bid.

Darwin’s natural selection works the same way: fitness is the product across all dimensions, and a single zero kills the organism. Brent D. Payne made this analogy: “Better to be a straight C student than three As and an F.” 

As with Google’s bidding system, cascading confidence is multiplicative, not additive. Here’s what that means:

Per-gate confidence and the surviving signal at the won gate:

  • 90% per gate: 34.9% survives.
  • 80% per gate: 10.7% survives.
  • 70% per gate: 2.8% survives.
  • 60% per gate: 0.6% survives.
  • 50% per gate: 0.1% survives.

Illustrative math, not a measurement. The principle is what matters: strengths don’t compensate for weaknesses in a multiplicative chain.

A single weak gate destroys everything. Nine gates at 90% plus one at 50% drops you from 34.9% to 19.4%. Drop that one gate to 10% and only 3.9% survives. A near-zero anywhere in a multiplicative chain makes the whole chain near-zero.
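The arithmetic is easy to verify. A short sketch of the multiplicative chain, using illustrative numbers rather than measurements:

```python
from math import prod

# Cascading confidence is a product across gates, so one weak gate
# dominates the outcome no matter how strong the other nine are.

def surviving_signal(gate_confidences: list[float]) -> float:
    """Multiply per-gate confidence across the pipeline."""
    return prod(gate_confidences)

uniform_90 = surviving_signal([0.90] * 10)          # 0.349 -> 34.9%
one_weak   = surviving_signal([0.90] * 9 + [0.50])  # 0.194 -> 19.4%
near_zero  = surviving_signal([0.90] * 9 + [0.10])  # 0.039 ->  3.9%

print(f"{uniform_90:.1%}  {one_weak:.1%}  {near_zero:.1%}")
```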

This is competitive math. If your competitors are all at 50% per gate and you’re at 60%, you win: 0.6% surviving signal against their 0.1%. Not because you’re excellent, but because you’re less bad. 

Most brands aren’t at 90%. The worse your gates are, the bigger the gap a small improvement opens. Here’s an example.

Gate-by-gate confidence (Discovered, Selected, Crawled, Rendered, Indexed, Annotated, Recruited, Grounded, Displayed, Won) and the surviving signal:

  • Your brand: 75%, 80%, 70%, 85%, 75%, 5%, 80%, 70%, 75%, 80%. Surviving signal: 0.4%.
  • Competitor: 65%, 60%, 65%, 70%, 60%, 60%, 65%, 60%, 65%, 60%. Surviving signal: 1.0%.

I chose annotated as the “F” grade in this example for demonstrative purposes.

Annotation is the phase-boundary gate. It’s the hinge of the whole pipeline. If the system doesn’t understand what your content is, nothing downstream matters.

Applying this Darwinian principle across a 10-gate pipeline, where confidence is measurable at every transition, is my diagnostic model. I recently filed a patent for the mechanical implementation.

Improving gates versus skipping them

There are two ways to increase your surviving signal through the pipeline, and they aren’t equal.

Improving your gates

Better rendering, cleaner markup, faster servers, and schema help the system classify your content more accurately. These are real gains, single-digit to low double-digit percentage improvements in surviving signal.

For many brands and SEOs, this is maintenance rather than transformation. It matters, and most brands aren’t doing it well, but it’s incremental.

Skipping gates entirely

Structured feeds, such as Google Merchant Center and the OpenAI Product Feed Specification, bypass discovery, selection, crawling, and rendering altogether, delivering your content to the competitive phase with minimal attenuation.

MCP connections skip even further, making data available from recruitment onward with triple-digit percentage advantages over the pull path.

If you’re only improving gates, you’re leaving an order of magnitude on the table.

The highest-value target is always the weakest gate

Improving your best gate from 95% to 98% is nearly invisible in the pipeline math. Improving your worst gate from 50% to 80% transforms your entire surviving signal. That’s the Darwinian principle at work: fitness is multiplicative, the weakest dimension determines the outcome, and strengths elsewhere can’t compensate.
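The same arithmetic shows why the weakest gate is the highest-value target. A sketch with illustrative gate scores:

```python
from math import prod

# With the same effort, raising the worst gate moves surviving signal far
# more than polishing the best one. Numbers below are illustrative.

gates = [0.95, 0.90, 0.85, 0.90, 0.80, 0.50, 0.85, 0.90, 0.80, 0.85]

baseline = prod(gates)

polish_best = gates.copy()
polish_best[gates.index(max(gates))] = 0.98   # best gate: 95% -> 98%

fix_worst = gates.copy()
fix_worst[gates.index(min(gates))] = 0.80     # worst gate: 50% -> 80%

print(f"baseline:      {baseline:.2%}")
print(f"best 95->98:   {prod(polish_best):.2%}")  # barely moves
print(f"worst 50->80:  {prod(fix_worst):.2%}")    # 1.6x the baseline
```

Every audit cycle reduces to the same loop: score each gate, find the minimum, and spend there first.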

Most teams are optimizing the wrong gate. Technical SEO, content marketing, and GEO each address different gates. Each is necessary, but none is sufficient because the pipeline requires all 10 to perform. Teams pouring budget into the two or three gates they understand are ignoring the ones that are actually killing their signal.

Then there’s the single-system mistake. At recruitment, the pipeline feeds into three graphs, the algorithmic trinity. Missing one graph means one entire retrieval path doesn’t include you.

You can be perfectly optimized for search engine recruitment and completely absent from the knowledge graph and the LLM training corpus. In a multiplicative system, that gap compounds with every cycle.

Most of the AI tracking industry is measuring outputs without diagnosing inputs, tracking what AI says about you at display when the decisions were already made upstream. That’s like checking your blood pressure without diagnosing the underlying condition.

The tools to do this properly are emerging. Authoritas, for example, can inspect the network requests behind ChatGPT to understand which content is actually formulating answers. But the real work is at the gates upstream of display, where your content either passed or stalled before the engine ever opened its mouth.

Audit your pipeline: Earliest failure first

The correct audit order is pipeline order. Start at discovery and work forward.

If content isn’t being discovered, nothing downstream matters. If it’s discovered but not selected for crawling, rendering fixes are wasted effort. If it’s crawled but renders poorly, every annotation and grounding decision downstream inherits that degradation.

This is your new plan: Find the weakest gate. Fix it. Repeat.

The inconsistency Fishkin documented is a training deficit. The AI engine pipeline is trainable. The training compounds. The walled gardens increase their lock-in with every cycle.

The brand that trains its AI salesforce better than the competition doesn’t just win the next recommendation. It makes the next one easier to win, and the one after that, until the gap widens to the point where competitors can’t close it without starting from scratch.

Without entity understanding, nothing else in this pipeline works. The system needs to know who you are before it can evaluate what you publish. Get that right, build from the brand up through the funnel, and the compounding does the rest.

Next: The five infrastructure gates the industry compressed into ‘crawl and index’

The next piece opens the infrastructure gates in full: rendering fidelity; conversion fidelity; JavaScript as a favor, not a standard; structured data as the native language of the infrastructure phase; and the investment comparison that puts numbers on improving gates versus skipping them entirely.

The sequential audit shows where your content is dying before the algorithm ever sees it, and once you see the leaks, you can start plugging them in the order that moves your surviving signal the most.

This is the third piece in my AI authority series. The first, “Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it,” introduced cascading confidence. The second, “AAO: Why assistive agent optimization is the next evolution of SEO” named the discipline. 

Google Ads’ three-strikes system: Managing warnings, strikes, and suspension

3 March 2026 at 19:00

Every year, Google suspends tens of millions of Google Ads accounts for advertising policy violations. One specific policy area that confuses many legitimate advertisers is Google’s “three-strikes” system.

Essentially, if Google decides your account has repeatedly violated any of 15 specific Google advertising policies, you’re at risk for temporary (and potentially permanent) suspension of your Google Ads account.

To help you prevent a single policy issue from snowballing into a full account suspension, here’s how Google’s three-strike system works and what you should do at every stage to keep your ads running.

Case study: Appealing a Google Ads strike

Over the past 10+ years, I’ve helped thousands of advertisers identify and resolve Google’s policy concerns so that their businesses can resume running ads. One such situation involved helping a business that sells ceremonial swords for military dress uniforms.

Google’s Other Weapons policy prohibits advertising swords intended for combat. However, that same policy permits the advertising of non-sharpened, ceremonial swords, which is what this business sells. Even though this business was properly advertising its products within Google’s ad policy parameters, Google issued them a warning for violating the Other Weapons policy.

After the warning, we documented for Google that the business wasn’t violating Google’s policy. We also added specific disclaimers to the business’s sword product pages, noting that the swords were only ceremonial. Frustratingly, Google decided to issue a first strike to the business anyway. 

We appealed the strike because the business wasn’t violating Google’s policy. But Google quickly denied that appeal. We tried appealing again, and Google denied the second appeal. The ad account remained on hold with no ads serving, and the business was losing revenue.

Ultimately, we had to “acknowledge” the strike to Google (I’ll explain what that means later) so that the ads would resume serving. We then worked with Google to craft more precise disclaimer language, stating that the swords for sale were ceremonial blades and not sharpened for use as weapons. This disclaimer was added to the business’s website footer so that both Google’s robots and human reviewers could see it on every single page (regardless of whether swords were for sale on a particular page).

Because of all these changes, Google’s concerns were satisfied and the business has never received any subsequent warnings or strikes. The end result was a success, even though technically there should never have been a warning or strike issued because an actual policy violation never occurred.

Key takeaway: Google will sometimes incorrectly issue warnings and strikes, and even reject appeals, and will often require excessive website disclaimers to convince them that all is well.

Navigating Google’s three-strikes system

Understanding Google’s strikes system can save your ads account from suspension. The search giant adheres to a system that begins with an initial warning and is followed by a “three strikes and you’re out” protocol.

The warning: Your ‘mulligan’ opportunity

Before issuing your ad account an initial strike, Google will first send you a warning notification.

This warning informs you that there’s a problem and allows you to address and resolve Google’s concern before your account is penalized with an official strike.

  • The penalty: None (yet). Your ads can continue to run.
  • What to do: Appeal any ad/asset disapprovals if you’re confident Google made a mistake, or identify the issue and replace the disapproved ads/assets with fully compliant versions.

Treat warnings seriously — ignoring them likely ensures your account will begin receiving strikes.

Strike 1: At least three days without ads

If Google decides that the same policy violation still exists after a warning was issued, your ad account will receive its first official strike.

  • The penalty: All ads will stop serving for three full days.
  • What to do: Acknowledge or appeal the strike.

Acknowledge the strike

This is your fastest path back to serving ads. But Google counts strikes as cumulative over a 90-day period.

If you acknowledge the strike rather than successfully appeal it, you’ve started the clock on the possibility of three strikes and a permanent suspension. Deciding which approach is best is a case-by-case determination.
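To make the escalation logic concrete, here’s a minimal sketch of the 90-day window check. This is a simplification of the rules described in this article (and, as the exceptions covered later show, Google doesn’t always apply the clock consistently); the function name and dates are illustrative, not anything Google publishes.

```python
from datetime import date, timedelta

STRIKE_WINDOW = timedelta(days=90)

def escalates(prior_strike_resolved: date, new_violation: date) -> bool:
    """True when a new violation lands within 90 days of resolving a
    prior strike, which bumps the account to the next strike level
    instead of starting over with a warning."""
    return new_violation - prior_strike_resolved <= STRIKE_WINDOW

# A violation 60 days after resolving strike 1 escalates to strike 2;
# one 100 days later falls outside the window.
print(escalates(date(2026, 1, 1), date(2026, 3, 2)))   # True
print(escalates(date(2026, 1, 1), date(2026, 4, 11)))  # False
```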

To acknowledge the strike, you must:

  • Remove all ads/assets that violate Google’s cited policy
  • Submit Google’s acknowledgment form confirming that:
    • You understand the policy Google says you violated.
    • You have removed all violations.
    • You will comply with Google’s policies from now on.

After you acknowledge the strike and the three-day hold ends, your ads will resume serving.

Appeal the strike

Submit this appeal form and explain why your ads aren’t violating Google’s policy. Keep in mind:

  • Your account remains on hold during Google’s review.
  • Reviews typically take 5+ business days, so be patient.
  • If Google accepts your appeal, they will remove the hold and your ads will resume serving.
  • If Google rejects your appeal, your account will stay on hold and no ads will serve.
  • After a rejected appeal, you can attempt appealing again or acknowledge the strike.

Appealing is often justified, but it costs time and success isn’t guaranteed (even if you’re in the right, as the earlier case study shows).

Strike 2: At least seven days without ads

If Google decides there’s been another policy violation within 90 days of resolving your first strike, or if your original violation was unresolved during those 90 days, your account will receive a second strike.

  • The penalty: All ads will stop serving for seven full days.
  • What to do: Your options are the same as for Strike 1: acknowledge or appeal the strike.

Strike 3: Your account is suspended

If Google decides there’s been another policy violation within 90 days of resolving your second strike, or if your previous violation was unresolved during those 90 days, your account will receive a third strike.

  • The penalty: Your account is suspended, and you may not run any ads or create a new ad account.
  • What to do: Your only recourse now is to appeal the suspension.

Successfully appealing a suspension is definitely possible. But the process is often a nightmare, and the results are never guaranteed.

Important: Once suspended, you’re unable to make any changes to your ad account.

Dig deeper: Dealing with Google Ads frustrations: Poor support, suspensions, rising costs

Exceptions to the rules

Google is sometimes inconsistent in following its own rules. Here are two examples I’ve seen first-hand.

Successfully appealing a strike doesn’t always reset the 90-day clock

I have a client who acknowledged a first strike on June 25. They received a second strike on July 26, which they successfully appealed. You would think that should reset the 90-day counter back to June 25.

However, Google gave them another second strike on October 16, far beyond 90 days from the date of the first strike, but within 90 days from the date of the “first” second strike, which they successfully appealed.

Google sometimes automatically returns your account to ‘warning’ status after a first strike expires

I have a client who received a warning on August 7, followed by a first strike on September 7. They acknowledged the first strike, and that strike expired on December 6, 90 days after it was issued.

However, the account immediately reentered “warning” status, with a new 90-day clock starting from when the first strike expired. There was no new email notification about this warning, and the warning didn’t appear on the Strike history tab.

Common questions about Google Ads strikes

How do I know if I received a strike?

  • Look for an email notification from Google.
  • Look for a notification at the top of your Google Ads account.
  • Check the Policy manager page in your Google Ads account.

How do I see my history of strikes?

  • Go to the Strike history tab on the Policy manager page in your Google Ads account.

Can you get a strike without having ad disapprovals?

  • Yes. Google can issue strikes even if no ads are formally disapproved.

How are Google’s three- and seven-day ad holds calculated?

  • Google counts full days. For example, if you receive and acknowledge a first strike (a three-day hold) on January 1, your ads won’t be eligible to resume serving until January 4.
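The full-day counting amounts to simple date arithmetic. Here’s a small sketch of how the hold plays out (the function name is mine, for illustration only):

```python
from datetime import date, timedelta

def resume_date(strike_date: date, hold_days: int) -> date:
    """Earliest date ads are eligible to serve again. Google counts
    full days: a 3-day hold issued January 1 runs through January 3,
    so ads can resume on January 4."""
    return strike_date + timedelta(days=hold_days)

print(resume_date(date(2026, 1, 1), 3))  # 2026-01-04 (strike 1)
print(resume_date(date(2026, 1, 1), 7))  # 2026-01-08 (strike 2)
```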

Are account strikes worse than ad disapprovals?

  • Yes, account strikes are significantly worse than individual ad disapprovals. A strike prevents all your account’s ads from serving and can easily escalate to a full account suspension.

Which Google policies have the three-strikes rule?

  • Enabling dishonest behavior.
  • Unapproved substances.
  • Guns, gun parts, and related products.
  • Explosives.
  • Other weapons.
  • Tobacco.
  • Compensated sexual acts.
  • Mail-order brides.
  • Clickbait.
  • Misleading ad design.
  • Bail bond services.
  • Call directories, forwarding services, and recording services.
  • Credit repair services.
  • Binary options.
  • Personal loans.

Important: If you violate one of Google’s many other policies not listed above, you could find your ad account suspended immediately, with no warning or three-strikes system.

Dig deeper: Google Ads boosts accuracy in advertiser account suspensions

What you can do to prevent and navigate Google Ads strikes

Follow these best practices and tips to minimize the chances of receiving a Google Ads strike:

  • Read the Google Ads policies that apply to your industry so that you know what to do and what not to do.
  • Delete old ads and assets you no longer need, so they can’t trigger strikes unexpectedly.
  • Add clear, comprehensive disclaimers to your website that help Google see you’re complying with any ad policies it might otherwise decide you’re violating.
  • Save copies of any appeals you submit because Google won’t show them to you after they’re submitted.
  • If you receive an account strike, closely monitor the 90-day clock so you know when you’re safely out of the previous “strike” window.

Google understandably cares deeply about its reputation and the safety of its users. That’s why Google’s policy team often strictly enforces its advertising policies, and why they’re sometimes over-aggressive when interpreting and applying their own policy language.

To keep our Google Ads accounts in good health and our ads running, the best thing we can do as advertisers is to deeply understand Google’s advertising policies and requirements.

Always be ready to jump through hoops to explain your unique situations, and over-comply with Google’s edicts whenever feasible. 

Here’s hoping you never see a third strike!

Meta introduces click and engage-through attribution updates

3 March 2026 at 19:00

Meta is updating its ad measurement framework, aiming to simplify attribution in what it calls a “social-first” advertising world.

What’s happening. Meta is narrowing its definition of click-through attribution for website and in-store conversions. Going forward, only link clicks — not likes, shares, saves or other interactions — will count toward click-through attribution. The change is designed to reduce discrepancies between Meta Ads Manager and third-party tools like Google Analytics.

Between the lines. Social media has overtaken search as the world’s largest ad channel, according to WARC, but many attribution systems were built for search-era behaviors. On social platforms, engagement extends beyond link clicks. Historically, Meta counted all click types toward click-through conversions, while many third-party tools only counted link clicks — creating reporting misalignment.

What’s changing. Conversions previously attributed to non-link interactions will now fall under a renamed “engage-through attribution” (formerly engaged-view attribution). Meta is also shortening the video engaged-view window from 10 seconds to 5 seconds, reflecting faster conversion behavior — particularly on Reels. The company says 46% of Reels purchase conversions happen within the first two seconds of attention.
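The new split can be sketched as a simple classification rule. This is my own simplification of the buckets described above, not Meta’s implementation; the function and interaction names are illustrative.

```python
# Illustrative sketch of the updated attribution buckets: only link
# clicks count as click-through; likes, shares, saves, and video
# views meeting the shortened engaged-view threshold fall under
# engage-through.
ENGAGED_VIEW_SECONDS = 5  # shortened from 10 seconds

def attribution_bucket(interaction, video_seconds=0.0):
    if interaction == "link_click":
        return "click-through"
    if interaction in {"like", "share", "save"}:
        return "engage-through"
    if interaction == "video_view" and video_seconds >= ENGAGED_VIEW_SECONDS:
        return "engage-through"
    return None  # not attributed

print(attribution_bucket("link_click"))     # click-through
print(attribution_bucket("share"))          # engage-through
print(attribution_bucket("video_view", 6))  # engage-through
print(attribution_bucket("video_view", 3))  # None
```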

Why we care. This update makes it easier to see which actions actually drive conversions, reducing confusion between Meta reporting and third-party analytics like Google Analytics. By separating link clicks from other social interactions, marketers get a clearer view of campaign performance, while the new engage-through attribution captures the value of likes, shares, and saves.

This gives advertisers more confidence in their data and helps them make smarter, more impactful decisions.

Third-party tie-ins. Meta is partnering with analytics providers like Northbeam and Triple Whale to incorporate both clicks and views into attribution models, aiming to give advertisers a more complete performance picture.

The rollout. Changes will begin later this month for campaigns optimizing toward website or in-store conversions. Billing will not change, but reporting inside Ads Manager may shift as attribution definitions update.

The bottom line. Meta is attempting to balance clearer, search-aligned click reporting with better visibility into uniquely social interactions — giving advertisers cleaner comparisons across platforms while still capturing the incremental impact of engagement-driven conversions.

Dig deeper: Simplifying Ad Measurement for a Social-First World

Content marketing in an AI era: From SEO volume to brand fame

3 March 2026 at 18:00

For more than a decade, the dominant model was simple — identify a keyword, write an article, publish, promote, rank, capture traffic, convert a fraction of visitors, and repeat. But that model is breaking. 

Content marketing is collapsing and rebuilding simultaneously. AI systems now answer informational queries directly inside search results. Large language models (LLMs) synthesize known information instantly. Information production is accelerating faster than distribution capacity. Public feeds are already saturated.

The cost of producing content has fallen to nearly zero, while the cost of being seen has never been higher. That changes everything.

Here’s a system for content marketing in a world where being found is increasingly unlikely.

The decline of informational SEO

Informational SEO used to be treated as a growth opportunity. Publish enough articles targeting informational queries, and traffic would compound. 

But traffic was always a proxy metric. It felt productive because dashboards moved. In reality, most content was never read deeply, rarely linked to, and often indistinguishable from competitors. Page 1 often contained 10 variations of the same article, each rewritten with minor differences.

Now, AI answers absorb demand directly. Users receive summaries without clicking. The known information layer of the web is becoming commoditized.

If your strategy relies on answering known informational questions, you’re competing with a machine trained on the entire web. Informational SEO is over as a strategy.

Search content will still matter, but its role shifts. It becomes closer to customer service and sales enablement. It exists to support conversion once intent is clear. It doesn’t build fame.

Content marketing, properly understood, must do something else entirely.

Dig deeper: The dark SEO funnel: Why traffic no longer proves SEO success

All content marketing is advertising

Growth hackers came in and took over SEO. Driven by the desire to show impressive charts to the board, they turned SEO from a practical channel into a landfill of skyscrapered, informational content that did little for real growth.

So, we need a reset. There are only two reasons to create content:

  • You’re in the publishing business.
  • You’re marketing a business.

If you’re in the second category, your content is advertising. That doesn’t mean banner ads. It means its job is to build mental availability. As advertising science has repeatedly shown, brands grow by increasing the likelihood of being thought of in buying situations and making themselves easy to purchase from.

The advertising analytics company System1 describes the three drivers of profit growth from advertising as fame, feeling, and fluency.

  • Fame means broad awareness.
  • Feeling means positive emotional association.
  • Fluency means easy recognition and processing.

If your content doesn’t contribute to those outcomes, it’s activity and not helping your growth.

SEO teams optimized for clicks, but clicks aren’t the objective. Being remembered is. In an AI era, this distinction becomes decisive.

Dig deeper: Fame engineering: The key to generative engine optimization

From pull to push content

Historically, content marketing relied heavily on pull: Someone searched, you ranked, and you pulled them from Google to your website. That channel is narrowing.

As AI summaries answer queries directly, the ability to pull strangers through informational search decreases. Pull remains critical for transactional queries and high-intent keywords, but the gravitational pull of informational content is weakening.

Push becomes more important. You have to push your content to people, distributing it intentionally through media, partnerships, events, advertising, communities, and networks rather than waiting to be discovered. It must be placed directly in front of people.

The paradox is this: We once believed gatekeeping had disappeared. Social media and Google created the illusion of fair and direct access. Now, gatekeepers are back — algorithms, publishers, influencers, media outlets, and even AI systems themselves.

When channels are flooded, selection mechanisms tighten.

Dig deeper: Why your content strategy needs to move beyond SEO to drive demand

The scarcity of being found

Kevin Kelly wrote in his book “The Inevitable” that work has no value unless it’s seen. An unfound masterpiece, after all, is worthless.

As tools improve and creation becomes frictionless, the number of works competing for attention expands exponentially, with each new work adding value while increasing noise.

Kelly’s point was that in a world of infinite choice, filtering becomes the dominant force. Recommendation systems, algorithms, media editors, and social networks become the arbiters of visibility. When there are millions of books, songs, apps, videos, and articles, abundance concentrates attention, creating a structural shift.

When production is scarce, quality alone can surface work. When production is abundant, discoverability depends on networks, signals, and amplification. The value is migrating from creation to curation and distribution. In practical terms, every additional AI-generated article makes it harder for any single article to be noticed.

The supply curve has shifted outward dramatically. Demand hasn’t. Human attention remains finite. As supply approaches infinity and attention remains fixed, the probability of being found declines.

Being found is now an economic problem of scarcity rather than a technical exercise in optimization. When production is abundant, attention is scarce. When attention is scarce, distinctiveness and distribution become currency.

Powerful messaging in an age of abundance

This is where Rory Sutherland’s concept of powerful messaging becomes essential for us. In his book, “Alchemy,” he argues that rational behavior conveys limited meaning.

When everything is optimized, efficient, and frictionless, nothing signals importance. Powerful messages must contain elements of absurdity, illogicality, costliness, inefficiency, scarcity, difficulty, or extravagance — qualities that serve as signals. They tell the market that something matters.

Consider a wedding invitation. The rational option is an email — instant, free, and efficient. Yet most couples choose heavy paper, embossed type, textured envelopes, even wax seals. The cost and inefficiency are the point. They signal commitment and create emotional weight. The medium amplifies the meaning. 

The same logic applies to marketing. When everyone can publish a competent article in seconds, competence carries no signal. A 1,000-word blog post answering a known question communicates efficiency, not importance. Scarcity and effort change perception.

MrBeast built early fame by counting to extreme numbers on camera. The act was irrational. It was inefficient and difficult. That difficulty was the hook. It signaled commitment and created memorability. The content spread not because it was informational, but because it was remarkable.

In an AI-saturated environment, rational content becomes invisible. If 10,000 companies publish summaries of the same topic, none stand out.

But if one brand commissions original research, prints a limited run of a physical report, hosts a live event around the findings, and strategically distributes it, the signal is different. The effort itself becomes part of the message.

Scarcity also changes economics. Sherwin Rosen’s work on the economics of superstars demonstrated that small differences in recognition can lead to disproportionate returns because markets reward the most recognized participants disproportionately.

Moving from being chosen 1% of the time to 2% can double outcomes because fame compounds. In crowded markets, the most recognized option captures an outsized share and reinforces its own dominance.

This is why being found is fundamentally different now. In the past, discoverability was a function of production and optimization. Today, it hinges on distinctiveness and signal strength. When production approaches zero cost, attention becomes the only scarce resource, which means you should be aiming for fame rather than optimization.

Dig deeper: Revisiting ‘useful content’ in the age of AI-dominated search

Fame as a strategic objective

Paul Feldwick, in “Why Does The Pedlar Sing?” argues that fame is built through four components:

  • The offer must be interesting and appealing.
  • It must reach large audiences.
  • It must be distinctive and memorable.
  • The public and media must engage voluntarily.

These four elements provide a practical framework for content marketing in an AI era. Here’s how that works in practice.

Create something interesting

You must create new information, not restate existing information. That could mean:

  • Proprietary data studies.
  • Original research.
  • Indexes updated annually.
  • Experiments conducted publicly.
  • Tools that solve real problems.
  • Physical artifacts with limited distribution.
  • Events that convene a specific community.

Consider the origins of the Michelin Guide. A tire company created a restaurant guide that became a cultural authority.

Awards ceremonies, industry rankings, annual reports, and indexes all function as content marketing. These are fame engines.

The key is the perception of effort and distinctiveness. A limited-edition printed book sent to 100 target prospects can carry more weight than 1,000 blog posts. Costliness signals meaning.

Reach mass or concentrated influence

Interest without distribution is invisible. Distribution options include:

  • Media coverage.
  • Partnerships.
  • Paid advertising.
  • Events.
  • Webinars.
  • Physical mail.
  • Community amplification.

If you lack a budget, focus on the smallest viable market. Concentrate on a defined audience and saturate it. 

Many iconic technology companies began by dominating narrow communities before expanding outward. Public relations and content marketing converge here. 

  • Earned media multiplies reach. 
  • Paid media accelerates it. 
  • Community activation sustains it.

If your content is never placed intentionally in front of people, it can’t build fame.

Be distinctive and memorable

SEO content historically failed on distinctiveness. Ten articles answering the same question looked interchangeable. But in an AI era, repetition disappears into the model. 

Distinctiveness can come from:

  • A recurring annual report with a recognizable format.
  • A proprietary scoring system.
  • A unique visual identity.
  • A specific tone.
  • A tool that becomes habitual.
  • An award or certification owned by your brand.

Memorability drives mental availability. Fluency increases recall. When someone recognizes your brand instantly, you reduce cognitive effort. Repetition of distinctive assets compounds over time.

You have to continually go to market with distinctive, memorable content. If you don’t, your brand will fade from memory and lose its distinctiveness.

Enable voluntary engagement

You can’t force people to share, but you can design for shareability. Content spreads when it carries social currency, enhances the sharer’s identity, rewards participation, and makes access feel exclusive.

Referral loops, limited access programs, community recognition, and public acknowledgment can all increase spread. The key is that the message must move freely between humans. It must be portable, discussable, and referencable.

Memetics matters. If it can’t be passed along, it can’t compound. 

Dig deeper: The authority era: How AI is reshaping what ranks in search

Operationalizing fame in search marketing

If content must be designed for distinctiveness, distribution, and voluntary engagement, search leaders need a different playbook. Here’s a five-step framework.

Step 1: Separate infrastructure from fame

Maintain search infrastructure for high-intent queries, optimize product pages, support conversion, and provide clear answers where necessary. But stop confusing informational volume with brand growth.

Audit your content portfolio. Identify what builds mental availability and what merely fills space to reduce waste.

Step 2: Invest in originality

Allocate budget to proprietary research, data collection, and creative initiatives. If everyone can generate competent summaries, originality becomes leverage.

This may require shifting the budget from content volume to creative depth.

Step 3: Design for distribution first

Before creating content, define distribution.

  • Who needs to see this?
  • How will it reach them?
  • Which gatekeepers matter?
  • What media outlets might care?

Reverse engineer reach.

Step 4: Build distinctive assets

Create repeatable formats that become associated with your brand.

  • An annual index.
  • A recurring event.
  • A recognizable report structure.
  • A named methodology.

Consistency builds fluency.

Step 5: Measure fame

Track:

  • Brand search volume.
  • Direct traffic growth.
  • Share of voice in media.
  • Unaided awareness, where possible.

Traffic alone is insufficient.

If content doesn’t increase the probability that someone thinks of you in a buying moment, it’s not performing its primary job.

Dig deeper: Why creator-led content marketing is the new standard in search

The return of creativity

We’re entering a period where automation handles the average, freeing humans to focus on the exceptional. The future of content marketing isn’t high-volume AI-generated articles. It’s the creation of new information, new experiences, new events, and new signals that machines can’t fabricate credibly.

It requires a partnership with PR, a strategic use of physical and digital channels, disciplined distribution, and a commitment to fame. Budgets will need to shift from volume production to creative impact.

In a world where information is infinite and attention is finite, the brands that win will be those that understand that being found is more valuable than being published. Content marketing in the AI era isn’t about producing more. It’s about becoming known.

4 CRO strategies that work for humans and AI

3 March 2026 at 17:00

What do conversion rate optimization (CRO) and findability look like for an AI agent versus a human, and how different do your strategies really need to be?

More and more marketers are embracing the agentic web, and discovery increasingly happens through AI-powered experiences. That raises a fair question: What do CRO and findability look like for an AI agent compared with a human?

Several considerations matter, but the core takeaway is clear: serving people supports AI findability. AI systems are designed to surface useful, grounded information for people. Technical mechanics still matter, but you don’t need entirely different strategies to be findable or to improve CRO for AI versus humans.

What CRO looks like beyond the website

If a consumer does business directly through an agent or an AI assistant, your business needs to make the right information available in a way that can be understood and used. Your products or services need to be represented through clean, well-structured data, with information formatted in ways that downstream systems can process reliably.

As more people explore doing business with AI assistants, part of the work involves making sure your products and services can connect cleanly. Standards, such as Model Context Protocol (MCP), can help by enabling agents to interact with shared sources of information.
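As one illustration of “clean, well-structured data,” here’s a minimal sketch that emits schema.org Product markup as JSON-LD, a widely used format for making product information machine-readable. The product details are hypothetical, and this is one common option rather than a format MCP or any particular assistant prescribes.

```python
import json

# Hypothetical product record rendered as schema.org Product JSON-LD --
# structured data that downstream systems and agents can parse reliably.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Standing Desk",
    "description": "Height-adjustable standing desk, 120 cm x 60 cm.",
    "offers": {
        "@type": "Offer",
        "price": "399.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product, indent=2))
```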

In many cases, a human may still decide to engage directly on a brand’s site. In that context, content and formatting choices matter. Whether you focus on paid media or organic, ensuring your humans can take desired actions — and will want to — is important.

Dig deeper: Are we ready for the agentic web?

Optimization 1: How much text is on the page?

Old‑school SEO encouraged the idea that more keywords and larger walls of text would perform better. That approach no longer holds.

Wayfair does a great job using accessible fonts, a call to action when the user shifts to a transactional mindset, and easy-to-understand language.

Both humans and AI systems tend to work better with clearly structured, modular content. Large blocks of uninterrupted text can be harder for people to scan and understand. Clear sections, spacing, layout, and visual hierarchy help users quickly understand what they can do and how to accomplish the goal that brought them to the page.

There’s no fixed minimum or maximum amount of text that works best. You should use the amount of content needed to clearly explain what you offer, why it’s useful, and what sets it apart.

A technical topic will need more text, broken into smaller paragraphs. There are great calls to action as well.

Visual components can be helpful when paired with useful alt text. Lead gen forms should be easy for humans to complete and regularly audited for spam or friction. Content that’s hard for people to use is also harder for automated systems to interpret as helpful or relevant.

Dig deeper: Lead gen PPC: How to optimize for conversions and drive results

Optimization 2: How are you communicating with your humans?

One of the best ways to communicate clearly to systems is to communicate clearly to people. Lean into what makes you an expert, but avoid unnecessary jargon or overly complex language. Descriptions should stay specific, accurate, and on-brand.

A simple gut check: if a 10-year-old couldn’t broadly understand what you do, why it matters, and how to engage with you, you’re probably making things harder than necessary. Even though AI systems are sophisticated, clarity still matters because the goal is ultimately to support a human outcome.

If you’re unsure, try putting your positioning copy into an AI assistant and asking it to critique its clarity. Ask for simplification and clearer explanations, not for new claims or embellishment.

Visual components matter here as well. Comparison tables can help when they genuinely support understanding, but they can hurt when they’re used as a gimmick rather than a guide. Accessibility principles matter, too. Color contrast, readable font sizes, and restrained font choices reduce the risk that someone can’t process your site.

IAMS has a thoughtful quiz to find the right dog breed and offers additional close matches. High-contrast color, easy-to-understand buttons, and high-quality photos help.

Images should be easy to understand and clearly connected to the surrounding text. Alt text helps people using assistive technologies and reinforces the relationship between visuals and written content.

Optimization 3: The call to action

A user comes to your site to do something. They might want to buy, request a quote, or speak with your team. That action should be clear.

When the intended action is unclear, it becomes harder for both people and automated systems to understand what your site enables.

Tarte Cosmetics does a great job of leaning into CRO principles, including inclusivity, accessibility, and social proof.

Shopping experiences tend to surface in conversations with shopping intent because assistants are trying to complete the task they were given. If it’s unclear how to add an item to a cart or complete a purchase, you make it harder for a human to do business with you. You also make it harder for systems to understand that you’re a transactional site rather than a catalog of items without a clear path forward.

Lead generation requires similar clarity. If the goal is to talk to your team, include a phone number that can be clicked to call. You might also include a form that submits directly into your lead system or a flow that opens an email client. Forcing users through multiple form pages often frustrates people and adds unnecessary complexity to the experience.

Dig deeper: 6 SEO tests to help improve traffic, engagement, and conversions

Optimization 4: The technical fixes

I cover technical considerations last for a reason. The most important work you can do is support the humans you serve. Technical improvements help, but they rarely succeed on their own.

Tips from the Microsoft AI guidebook. (Disclosure: I’m the Ads Liaison at Microsoft Advertising.)

Excessive imagery, low contrast between text and background, or unstable layouts can create challenges.

Make sure your site renders consistently and meaningfully. Large layout shifts after load, measured in cumulative layout shift (CLS), can frustrate users. Pages overloaded with ads or pop-ups can distract from the reason someone arrived in the first place and may introduce trust concerns.

Security matters as well. Malware warnings, broken rendering, or incomplete page loads can raise red flags for both users and automated systems.

Microsoft Bing Webmaster Tools - AI Performance tab

Tools like IndexNow can help notify search systems of content changes more quickly. Microsoft Clarity is a free tool that shows how users behave on your site, surfacing friction you might otherwise miss. Clarity also includes Brand Agents, which help your human visitors have more meaningful chatbot experiences.

Microsoft Clarity with Copilot
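On the IndexNow side, the protocol is a simple GET ping to a shared endpoint. A minimal sketch of how the ping URL is assembled; the page URL and key below are placeholders, and the key must match a key file you host on your own site per the protocol:

```python
from urllib.parse import urlencode

# Hypothetical values: substitute your own changed-page URL and the
# IndexNow key you host at your site root.
params = urlencode({
    "url": "https://example.com/updated-page",
    "key": "your-indexnow-key",
})

# Shared IndexNow endpoint; participating engines also expose their own.
ping_url = f"https://api.indexnow.org/indexnow?{params}"
print(ping_url)
```

Fetching that URL (for example with any HTTP client) notifies participating search engines that the page changed.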

One useful check is to review how your site appears when used as input for ad platforms or auto-generated creative tools, such as Performance Max campaigns or audience ads.

Review your ads - Microsoft

These can provide a helpful lens into how platforms interpret your content. When the resulting positioning and creative align with what you intend, you’re usually doing a good job serving both crawlers and people. When they don’t, it’s often a signal to revisit clarity, structure, or user flow.

Dig deeper: CRO for PPC: Key areas to optimize beyond landing pages


What does CRO for AI and for humans look like?

Humans and AI systems need many of the same things when it comes to CRO:

  • Information should be clear and accurate.
  • It should be easy to do the thing the user came to do.
  • The site should avoid deceptive or manipulative patterns.
  • The experience should build trust rather than undermine it.

Remember these CRO fundamentals that carry over:

  • Humans and AI benefit from the same clarity-first approach to CRO.
  • Information should be specific, grounded, and easy to understand.
  • Actions should be obvious and easy to complete.
  • Technical choices should support, not undermine, the experience.

When those fundamentals are in place, you’re supporting both human outcomes and AI-driven discovery.

Google launches non-skippable Video Reach campaigns for connected TV

3 March 2026 at 15:46
Google TV: What you need to know about CTV buying in Google Ads

Google is rolling out Video Reach Campaign (VRC) Non-Skip ads, expanding how brands reach connected TV audiences on YouTube.

What’s happening. VRC Non-Skips are now live globally in Google Ads and Display & Video 360. Built for the living room experience, they run as non-skippable placements optimized for connected TV (CTV) screens.

Why we care. YouTube has been the No. 1 streaming platform in the U.S. for three straight years, making the TV screen a critical battleground for your brand budget. With guaranteed, non-skippable delivery, you can ensure your full message reaches viewers in premium, lean-back environments.

AI in the mix. Google AI dynamically optimizes across 6-second bumper ads, 15-second standard spots, and 30-second CTV-only non-skippable formats. Instead of manually splitting your budget by format, you can rely on AI to allocate impressions for maximum reach and efficiency.

Bottom line. Advertisers now have a simpler way to secure guaranteed, full-message delivery on the biggest screen in the house — using AI to maximize reach and efficiency across non-skippable formats without manually managing the mix.

Google’s announcement. VRC Non-Skip ads are now generally available, allowing brands to reach TV audiences with Google AI.


Google expands recurring billing policy

2 March 2026 at 22:38
In Google Ads automation, everything is a signal in 2026

Google is expanding its recurring billing policy to allow certified U.S. online pharmacies to promote prescription drugs with subscriptions and bundled services.

What’s happening. Certified merchants can now offer:

  • Prescription drug subscriptions — recurring billing for prescription medications.
  • Prescription drug bundles — combining drugs with services like coaching or treatment programs, as long as the drug is the primary product.
  • Prescription drug consultation services — recurring consults to determine prescription eligibility, either standalone or bundled with medications.

Requirements for eligibility. Merchants must maintain certified status, submit subscription costs in Merchant Center using the [subscription_cost] attribute, include clear terms and transparent fees on landing pages, and comply with all existing Healthcare & Medicine and recurring billing policies. Accounts previously disapproved can request a review once requirements are met.
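For reference, a sketch of how that attribute might look in an XML product feed. The item values here are hypothetical, and the sub-attribute names follow the standard Merchant Center subscription_cost structure; confirm them against the current product data specification before relying on them:

```xml
<!-- Hypothetical feed entry; values are placeholders. -->
<item>
  <g:id>rx-subscription-001</g:id>
  <g:title>Monthly prescription refill subscription</g:title>
  <g:price>25.00 USD</g:price>
  <g:subscription_cost>
    <g:period>month</g:period>
    <g:period_length>1</g:period_length>
    <g:amount>25.00 USD</g:amount>
  </g:subscription_cost>
</item>
```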

Why we care. The update opens new revenue opportunities for online pharmacies, letting them leverage recurring models and bundled services while staying compliant with Google policies.

The bottom line. Certified U.S. online pharmacies can now run recurring prescription and bundled offers, giving them more flexibility to reach patients and scale subscription-based services.

Dig deeper. Recurring billing policy expansion: Prescription drugs

Google uses both schema.org markup and og:image meta tag for thumbnails in Google Search and Discover

2 March 2026 at 22:19

Google updated both its image SEO best practices and Google Discover help documents to clarify that Google uses both schema.org markup and the og:image meta tag as sources when determining image thumbnails in Google Search and Discover.

Image SEO best practices. Google added a new section to the image SEO best practices help document titled “Specify a preferred image with metadata.” In that section, Google wrote:

  • “Google’s selection of an image preview is completely automated and takes into account a number of different sources to select which image on a given page is shown on Google (for example, a text result image or the preview image in Discover).”
  • Here is how you influence the thumbnails Google chooses:
    • Specify the schema.org primaryImageOfPage property with a URL or ImageObject.
    • Or specify an image URL or ImageObject property and attach it to the main entity (using the schema.org mainEntity or mainEntityOfPage properties).
    • Specify the og:image meta tag.

Here are the overall best practices when choosing these methods:

  • Choose an image that’s relevant and representative of the page.
  • Avoid using a generic image (for example, your site logo) or an image with text in the schema.org markup or og:image meta tag.
  • Avoid using an image with an extreme aspect ratio (such as images that are too narrow or overly wide).
  • Use a high resolution, if possible.
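Put together, the markup options above might look something like this in a page’s head. The image URL is a placeholder, and this is only one way to express the schema.org option (here as JSON-LD with primaryImageOfPage on a WebPage):

```html
<!-- Open Graph image hint (hypothetical URL) -->
<meta property="og:image" content="https://example.com/images/product-hero-1200x675.jpg">

<!-- schema.org equivalent via JSON-LD, using primaryImageOfPage -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "primaryImageOfPage": {
    "@type": "ImageObject",
    "url": "https://example.com/images/product-hero-1200x675.jpg",
    "width": 1200,
    "height": 675
  }
}
</script>
```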

Google Discover image selection. In the Discover documentation Google added a section that reads:

  • “Include compelling, high-quality images in your content that are relevant, especially large images that are more likely to generate visits from Discover. We recommend using images that meet the following specifications: At least 1200 px wide, High resolution (at least 300K) and 16×9 aspect ratio”
  • “Google tries to automatically crop the image for use in Discover. If you choose to crop your images yourself, be sure your images are well-cropped and positioned for landscape usage, and avoid automatically applying an aspect ratio. For example, if you crop a vertical image into 16×9 aspect ratio, be sure the important details are included in the cropped version that you specify in the og:image meta tag.”
  • Large image previews are “enabled by the max-image-preview:large setting, or by using AMP.”
  • “Use either schema.org markup or the og:image meta tag to specify a large image that’s relevant and representative of the web page, as this can influence which image is chosen as the thumbnail in Discover. Learn more about how to specify your preferred image. Avoid using generic images (for example, your site logo) in the schema.org markup or og:image meta tag. Avoid using images with text in the schema.org markup or og:image meta tag.”

Why we care. Images can have a big impact on click-through rates from both Google Search and Google Discover. Here, Google is telling us ways we can encourage Google to select a specific image for that thumbnail. So review these help documents and see if any of this can help you with the images Google selects in Search and Discover.

Own your branded search: Building a competitive PPC defense

2 March 2026 at 20:00
Own your branded search: Building a competitive PPC defense

If you’re not actively managing your branded search campaigns, you’re leaving money on the table and your reputation in the hands of competitors, review aggregators, and affiliate marketers. 

Brand protection through PPC isn’t just about bidding on your own name. It’s a strategy that spans defensive bidding, query monitoring, ad copy testing, and reputation management across the entire customer research journey.

Why brand search deserves more than basic defense

Most PPC managers treat brand campaigns as an afterthought. Set up a campaign, bid on the exact brand name, maybe add some close variants, and call it done. 

But the reality is far more complex, especially when we’re talking about bigger, well-known brands. Your brand exists across dozens of query contexts, each representing a different stage of the customer journey and requiring a different strategic approach.

Consider what happens when someone searches for your brand. They’re not just typing your company name; they’re asking questions, seeking validation, comparing alternatives, and researching specific features. 

If you’re only covering exact-match brand terms, you’re missing the majority of brand-related searches and leaving those high-intent users exposed to competitor messaging.

Third-party sites like review aggregators and affiliate comparison websites actively bid on your brand terms to capture traffic and redirect it to their comparison pages, where your competitors pay for prominence. 

The cost? Your brand equity, customer trust, and ultimately, conversion rates.


4 categories of branded searches you need to cover

Based on user intent and competitive vulnerability, branded searches fall into four strategic categories. Each requires different bid strategies, ad copy approaches, and landing page experiences. 

Let’s break down each category and the specific PPC tactics that can work.

Brand trust and reputation queries

  • “Is [Brand] good?”
  • “[Brand] reviews.”
  • “Is [Brand] legit?”
  • “Is [Brand] worth it?”

These searchers are in the validation phase. They’ve heard of your brand but want social proof before committing. 

The competitive threat here comes from review aggregators and affiliate sites that will happily show your reviews alongside competitor CTAs.

PPC strategy

  • Bid aggressively — these are high-intent users who are close to converting.
  • Use review extensions and star ratings in your ads.
  • Highlight trust signals in ad copy (years in business, customer count, awards).
  • Send users to dedicated testimonial or case study landing pages, not your homepage.
  • Test callout extensions with specific proof points.

Product features queries

  • “What is [Brand] known for?”
  • “Pros and cons of [Brand].”
  • “Does [Brand] offer [feature]?”

Users searching for feature-specific information are evaluating whether your solution meets their requirements. Competitors often bid on these queries with ads suggesting they offer superior features.

PPC strategy

  • Create feature-specific ad groups with tailored ad copy.
  • Use sitelink extensions to direct users to specific feature pages.
  • Address the specific feature in headline 1; don’t waste space on your brand name.
  • Include feature demos or video on the landing page.
  • Test whether these queries warrant higher bids than core brand terms.

Comparison queries

  • “Alternatives to [Brand].”
  • “How does [Brand] compare?”
  • “Is [Brand] better than [Competitor]?”
  • “Is [Brand] right for [use case]?”

This is the most competitive category. Users are actively comparing you to alternatives, and both direct competitors and third-party comparison sites are bidding heavily. This is where you’re most vulnerable to losing customers who were already considering you.

PPC strategy

  • Bid at or above top-of-page estimates to maintain Position 1.
  • Create dedicated comparison landing pages for each major competitor.
  • Include pricing transparency if it’s a competitive advantage.
  • Monitor auction insights obsessively to identify new competitive threats.
  • Consider category-level comparison ads for “best [category] tools/products” searches.

Niche questions

  • “Is [Brand] expensive?”
  • “Does [Brand] offer discounts?”
  • “Is [Brand] secure?”

These queries reveal specific concerns or evaluation criteria. They’re often low-volume but extremely high-intent because they represent genuine decision-making criteria.

PPC strategy

  • Develop FAQ landing pages that address multiple related concerns.
  • Test lower bids — these queries often have less competition.
  • Use search query reports to identify emerging concerns and address them proactively.

Dig deeper: How to benchmark PPC competitors: The definitive guide

Advanced brand campaign architecture

The traditional single-brand campaign approach doesn’t give you enough control or insight at scale. Instead, structure your brand defense across four specialized campaigns, each targeting different intent signals and requiring distinct bid strategies.

Core brand defense 

This covers exact-match brand terms and common misspellings with aggressive bidding to maintain 95%+ impression share and top positions. Never let this campaign be budget-limited. 

Use multiple RSAs to test different value propositions. Monitor lost impression share due to rank as your primary competitive threat indicator.

Brand + category 

Capture phrase-match queries like “[Brand] CRM” or “[Brand] for [use case],” where users are researching you within a specific product context. 

Bid slightly lower than core brand terms, but ensure ad copy acknowledges the category and emphasizes your category leadership. Test whether category-specific landing pages outperform your homepage for these queries.

Brand reputation and reviews

These intercept validation-phase users searching “[Brand] reviews,” “[Brand] ratings,” or “is [Brand] good” before they click through to third-party aggregators. Bid aggressively here — these comparison-shopping clicks are worth more than core brand searches. 

Use review extensions prominently, include specific social proof metrics in ad copy (4.8 stars, 10,000+ reviews), and send traffic to dedicated testimonial pages rather than your homepage. Test video testimonials on landing pages.

Competitive comparison defense

Control the narrative for queries like “[Brand] vs [Competitor],” “[Brand] alternative,” or “better than [Brand].” These are users you’re at risk of losing, so pay up to your maximum acceptable CPA. 

Create unique landing pages for each major competitor with honest comparisons that emphasize your advantages, include side-by-side feature tables, and offer special conversion incentives like extended trials or migration assistance.

Defensive tactics against third-party aggregators

Sites like G2, Capterra, and other affiliate comparison sites actively bid on your brand terms without violating trademark policy because they legitimately have content about your brand. 

But they’re siphoning off your traffic and often presenting biased or incomplete information. Your defense requires three coordinated approaches.

Bid aggressively on review keywords

Review aggregators bid heavily on “[Brand] reviews” and “[Brand] ratings” because these are their money keywords, so you need to bid even higher. 

Run the math: if an aggregator pays $3 for a click on your review keywords and then monetizes that visitor on a page where competitors pay $50 for prominence, then paying even $10 per click to keep that user on your own review content is a bargain. 

Calculate the lifetime value of a customer versus the cost of letting them click to a third-party site where competitors can advertise. Also, keep in mind it’s cheaper for you to bid on your own brand than for competitors to outbid you.
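One way to run that lifetime-value calculation; every number below is a hypothetical placeholder, so swap in your own CPCs, loss rates, and LTV:

```python
# Hypothetical numbers to illustrate the trade-off described above;
# substitute your own CPCs, close rates, and customer lifetime value.
own_review_cpc = 10.00     # what you'd pay per click on "[Brand] reviews"
lifetime_value = 5_000.00  # average customer LTV
lost_sale_risk = 0.02      # chance that a click routed through an aggregator
                           # page full of competitor ads costs you the sale

# Expected cost of conceding that click to a third-party aggregator
expected_loss = lifetime_value * lost_sale_risk

# Defending the keyword is rational whenever the expected loss
# exceeds what it costs to keep the click on your own property.
worth_defending = expected_loss > own_review_cpc
print(expected_loss, worth_defending)
```

With these assumed inputs, the expected loss per conceded click ($100) dwarfs the defensive CPC, which is the point of the comparison.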

Claim and optimize your profiles on major review platforms you want to work with

Even if you can’t prevent them from bidding on your brand, ensure that when users click through, they see optimized content, strong ratings, and an active presence with responses to reviews. 

Many review platforms offer advertising options — test running ads on your own profile pages to capture users who arrive via organic search or competitor ads.

Build dedicated testimonial and customer story pages 

Make yours more compelling than third-party review aggregators. Include video testimonials, detailed case studies with metrics, filterable reviews by industry or use case, and verified customer badges. 

Then use your PPC ads to drive users to these owned properties instead of letting them discover review aggregators organically.

Dig deeper: When to use branded and competitor keywords in PPC

Ad copy strategies for brand protection

Your brand campaign ad copy needs to do more than confirm your brand name. It needs to preempt objections, differentiate from competitors, and provide compelling reasons to click your ad instead of a competitor’s or third-party site. Three frameworks deliver results.

The preemptive strike 

Identify the top 3-5 objections that come up in your sales process and address them directly in your ad copy before users encounter them on competitor or review sites. 

  • If implementation time is a concern, use “Live in 5 days, not 5 months.” 
  • If pricing is opaque, try “Transparent pricing, no hidden fees.” 
  • If enterprise readiness is questioned, lead with “Trusted by 500+ enterprise customers.” 
  • If ease of use is a concern, emphasize “No training required, start today.”

The competitive differentiator

Don’t just state features, state features your competitors don’t have or can’t match. This is especially critical for comparison queries where you know competitors are showing ads. Examples include: 

  • “Only platform with native [unique integration].” 
  • “Industry’s fastest performance, verified by [third party].” 
  • “Patent-pending [technology] competitors can’t replicate.” 

If you can’t identify any unique features or USPs, that’s a signal to improve your product positioning or capabilities. Without clear differentiation, PPC alone won’t drive sustainable conversions.

Social proof stacking

Combine multiple types of social proof to build credibility quickly. Don’t just pick one element; stack them. Try:

  • “4.8 stars from 10,000+ reviews. G2 leader 5 years running.” 
  • “Join 50,000+ companies. Featured in Forbes and TechCrunch.”
  • “Winner: Best [category] 2025. 98% customer satisfaction.”

Dig deeper: How to write paid search ads that outperform your competitors

Landing page strategy for brand campaigns

Sending all brand traffic to your homepage is a missed opportunity. Different branded queries represent different user intents and concerns, and your landing pages should address those specific intents.

Feature-specific pages

When users search “[Brand] + [feature],” send them to dedicated pages that explain the feature in detail, show it in action, and provide clear next steps. 

Include a hero section explaining the feature in one sentence, a video demo or animated screenshot, technical specifications for enterprise buyers, integration details if relevant, and customer examples using this specific feature.

Comparison pages 

Create dedicated comparison landing pages for each major competitor. Be honest about differences while emphasizing your advantages. Include side-by-side feature tables, pricing comparisons if advantageous, and customer testimonials from switchers. 

Acknowledge competitor strengths without being dismissive, highlight 3-5 key differentiators where you excel, and offer migration assistance or switch incentives. Make your CTA clear and prominent, offering a trial or demo.

Trust and validation pages

For review and reputation queries, create dedicated pages that aggregate social proof rather than linking to your G2 profile or hoping users browse scattered testimonials. 

Display aggregate ratings prominently (average of G2, Capterra, etc.), place video testimonials above the fold, show recent reviews with verified badges, make reviews filterable by industry, company size, and use case, include case studies with concrete metrics, and highlight third-party awards and recognition.

Monitoring and optimization: The ongoing battle

Brand protection isn’t a set-it-and-forget-it strategy. The competitive landscape constantly evolves, new competitors emerge, third-party sites adjust their strategies, and user search behavior shifts. You need systematic monitoring and rapid response capabilities across three time horizons.

Weekly monitoring 

Review:

  • Search term reports to identify new query patterns.
  • Auction insights for increased competitor presence.
  • Impression share metrics to diagnose declining performance.
  • Lost impression share breakdowns by budget and rank.
  • Manual searches of your top 10 brand queries to see what ads are showing.
  • Quality score checks for brand keywords to diagnose landing page or ad relevance issues.

Monthly deep dives 

  • Analyze conversion paths to understand how brand search fits into the broader customer journey.
  • Review assisted conversions since brand campaigns often contribute to non-brand conversions.
  • Audit landing pages for relevance and conversion performance. 
  • Gather competitive intelligence on what landing pages competitors use for brand conquesting.
  • Test new ad copy variations focused on emerging objections or competitive threats. 
  • Analyze search impression share by device and location to identify gaps.

Quarterly strategic reviews 

  • Audit your complete branded query coverage to identify missing categories or query types. 
  • Assess whether your coverage across the four query categories remains comprehensive.
  • Conduct competitive conquest analysis to determine which competitors most aggressively target your brand.
  • Evaluate ROI of different brand campaign types to optimize budget allocation.
  • Review third-party aggregator presence for new sites bidding on your brand.

Advanced tactics for sophisticated brand protection

Dynamic keyword insertion

For validation queries like “is [Brand] good” or “does [Brand] work,” use dynamic keyword insertion to echo the user’s specific question in your ad copy, creating higher relevance and click-through rates. Try headlines like “Yes, {KeyWord:[Brand]} Is Excellent” or “Absolutely, {KeyWord:[Brand]} Works.”

Geo-modified campaigns

If you have location-specific offerings or competitors vary by geography, create geo-modified brand campaigns. Users searching “[Brand] New York” or “[Brand] enterprise” may have different needs than general brand searchers.

Audience layering

Apply audience segments to brand campaigns to adjust bids based on user quality. Users who’ve visited your pricing page before should get higher bids on brand searches than first-time visitors. Similarly, prioritize users who match your ideal customer profile demographics.

Trademark enforcement

While Google generally allows competitors to bid on your brand terms, using your trademarked brand name in their ad copy is often prohibited. 

Monitor competitor ads and file trademark complaints when they use your brand name in headlines or descriptions. This is particularly effective against smaller competitors and affiliates who may not realize they’re violating policy.

Problem/solution queries

Capture queries where users are researching whether your solution addresses a specific problem. These are often high-intent and represent clear use case alignment. 

Target queries like: 

  • “[Brand] for [problem].” 
  • “How to [solve problem] with [Brand].” 
  • “[Brand] [use case] solution.”
  • “Can [Brand] help with [challenge].”

Budget allocation and ROI considerations

How much should you invest in brand protection versus acquisition campaigns? The answer depends on three factors: 

  • Competitive pressure.
  • Brand strength.
  • Customer lifetime value.

If you operate in a highly competitive category where multiple well-funded competitors actively bid on your brand terms, invest more in brand protection. Run auction insights weekly to monthly to quantify competitive presence. 

If competitors show in 40% or more of your brand auctions, this is a high-threat environment requiring aggressive defense. Stronger brands with dominant organic presence can afford to spend less on core brand defense because their organic listings provide natural protection. This doesn’t apply to reputation and comparison queries where third-party sites rank organically.

High LTV businesses should invest more aggressively in brand protection because the cost of losing a customer to a competitor or having them influenced by negative review sites is substantial. If your average customer is worth $50,000 over their lifetime, paying $50 per click to defend against comparison queries is economically rational.

For most B2B SaaS and high-consideration products, allocate approximately 15-25% of total paid search budget to comprehensive brand protection. Within that allocation, dedicate 40% to core brand defense (exact match), 25% to competitive comparison defense, 20% to reputation and review queries, and 15% to feature and niche question queries.
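To sanity-check those percentages against a concrete number, here is the split applied to a purely illustrative $20,000 monthly paid search budget:

```python
# Hypothetical $20,000 monthly paid search budget, in whole dollars;
# the split follows the percentages described above.
total_budget = 20_000
brand_protection = total_budget * 20 // 100   # 20%, within the 15-25% range

allocation = {
    "core brand defense (exact match)": brand_protection * 40 // 100,
    "competitive comparison defense":   brand_protection * 25 // 100,
    "reputation and review queries":    brand_protection * 20 // 100,
    "feature and niche questions":      brand_protection * 15 // 100,
}

for campaign, dollars in allocation.items():
    print(f"{campaign}: ${dollars:,}")
```

That works out to $4,000 for brand protection overall: $1,600 core defense, $1,000 comparison defense, $800 reputation, and $600 niche queries.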


Brand protection as competitive moat

Brand protection through PPC isn’t just defensive marketing. It’s a competitive moat. When you control the narrative across branded search contexts, you ensure high-intent users see accurate information instead of competitor ads or third-party pages monetizing your brand equity.

The brands that win treat this as strategy, not maintenance. They segment branded queries by intent, build landing pages to match, monitor threats continuously, and defend high-value search real estate aggressively.

Start with an audit using the four-category framework. Close coverage gaps, align campaigns and landing pages to intent, and commit to weekly monitoring, monthly optimization, and quarterly strategic reviews.

If you don’t own your branded searches, someone else will.

How to revise your old content for AI search optimization

2 March 2026 at 19:00
How to revise your old content for AI search optimization

If your brand’s content arm has been active for a few years, I’m guessing you have plenty of material that can be revised to help you show up more prominently in AI search answers — we’ll call this AEO throughout the article.

I’m getting bombarded with brand marketers’ questions about how to get AEO traction these days. “Revise your old content” is a favorite answer that often produces an “aha” moment for the other party, possibly because the nature of AEO is so forward-looking.

That answer sparks a few important follow-up questions I’ll tackle below.

How do you reformat content for better AEO performance?

I like to lean on three principles when I tackle content reformatting. Optimizing for:

  • Topical breadth and depth.
  • Chunk-level retrieval.
  • Answer synthesis.

Here’s what that means in practice.

Optimize for topical breadth and depth

Structure your site using a hub-and-spoke model. For each primary category or keyword theme, build a comprehensive hub page that introduces the broader topic and links out to supporting spoke pages that dive deeper into specific facets.

Each spoke page should focus on one clear angle and develop it thoroughly enough to establish distinct purpose and query intent. Because user questions branch in different directions, covering multiple angles helps expand your overall topical reach.

Link related spoke pages to one another where it makes sense, and consistently back to the hub as the central reference point. This reinforces how your content connects and gives AI systems clearer signals about the relationships between topics.


Optimize for chunk-level retrieval

Don’t rely on using the whole page for context. Each chunk should be independently understandable.

Keep passages semantically tight and self-contained. Use one idea per section and keep each passage tightly focused on a single concept — as Our Family Wizard did here:

Optimize for answer synthesis

Lead with a clearly structured “Summary” or “Key takeaways” section that distills complex ideas, then expand. Start answers with a direct, concise sentence. Favor a plain, factual, non-promotional tone.

This formatting, from Baseten, puts an easily digested TL;DR right at the top of a post explaining AI inference:

Baseten - TLDR

Dig deeper: How to keep your content fresh in the age of AI

How will humans react to that formatting?

Start with the premise that AI readability is about clarity, not gimmicks. Approached that way, this formatting has real appeal for humans looking to quickly understand the content they’re consuming.

AI systems favor content where:

  • Answers are named, not inferred.
  • Sections have clear intent.
  • Key points are easy to lift without rewriting.

That often means being more explicit than traditional SEO ever required — defining terms directly, summarizing sections, and stating conclusions early. It’s kind of the opposite of keyword-stuffed content that’s overwritten to hit assumed “preferences” the Google algorithm might have for content length.

The only real hesitation I have is that content formatted this way may oversimplify nuance. Not every page should be optimized for a single atomic answer, and strategic or opinionated content still benefits from narrative flow.

I try to strike a balance by:

  • Explaining first, then elaborating.
  • Labeling insights, then proving them.
  • Making the answer obvious before adding sophistication.

When done well, this has appeal for both AI and humans.

Now, all of that said, LLM-produced content — just check out your LinkedIn feed if you need examples — very quickly became recognizable as exactly what it is: AI-produced content that’s easily consumed by AI models.

The effect can be very off-putting depending on the reader, even if your content, as it should always strive to do, includes original POVs, research, and/or data that the LLMs couldn’t possibly find in existing content.

Keep a close eye out for AI tells: the dreaded em dash, squished vertical line spacing, bullet-point lists featuring emojis, and sentence structures like “It’s not just [X]. It’s also [Y].” or “It’s more than [A]. It’s [B].” Remove them wherever you see them.

Dig deeper: Refreshing content: How to update old content to drive new traffic

Get the newsletter search marketers rely on.


How do you prioritize which content to revise?

For AEO, prioritization is less about traffic, which is where a lot of SEO marketers stop KPI-wise, and more about answer value.

I start by identifying content that:

  • Contains clear expertise or proprietary insight, which LLMs love.
  • Answers questions people ask repeatedly but doesn’t state the answer cleanly.
  • Is already referenced internally by sales, support, or customers as “explainer” material.

Also worth noting: Is the content focusing on one of our core products or services, even indirectly? That’s fundamental. Visibility for visibility’s sake isn’t worth much, so make sure it’s got a natural tie-in to pipeline or revenue growth.

As far as types of content to prioritize, reports, tools, and evergreen guides tend to rise to the top because they already contain structured thinking, if not structured answers. AI systems don’t reward originality embedded in prose. They reward explicit conclusions, definitions, and frameworks.

Here’s my simple AEO prioritization test:

  • Can an AI model confidently quote or summarize this page as is?
  • Would it know what question this page answers within the first few seconds?
  • Are the key takeaways explicitly labeled rather than merely implied?

If the answers are “no,” and the theme of the content is important to your business growth, that content is a strong reformatting candidate.

Dig deeper: How to use AI to refresh old blog content

How do you approach metadata when revising content for AEO?

Before I dive into the how, I’ll mention that these elements have a different function for AEO than they do for SEO. In SEO, they function as ranking levers. In AEO, they serve more as context anchors.

Let’s break down each key element of metadata and show how that difference should play out.

Title tags

Title tags serve as the topic of the page for traditional SEO. For AEO, make them more descriptive about the page’s primary answer or function.

So a title tag that reads “Session replay software” for SEO purposes could be rewritten for AEO to say “Session replay: what it is, when to use it, and when not to use it.” Title tags with more context give AI systems clearer signals about how and when to cite the content.

Headings (H1-H3)

In traditional SEO, header tags have been used to identify categories, for example, “compliance monitoring.”

In AEO, I use them to map to specific questions or claims. Possible updated versions of the above would be:

  • What is compliance monitoring?
  • Why does compliance monitoring matter for companies in {x} vertical?
  • Common issues caused by a lack of compliance monitoring
  • When should a CTO invest in compliance monitoring?

To stress-test your header tags, try answering them. If it takes you more than a few sentences to answer your question or prove your assertion clearly and persuasively, it’s probably the wrong question and not one a user is going to type into ChatGPT.

Meta descriptions

In traditional SEO, meta descriptions are chunks of text that may or may not be pulled into the SERP, but they still explain more about the content. In AEO, they act as a compressed intent signal. AI systems, like the SERPs, may choose not to quote the meta description, but good ones help reinforce:

  • Who the content is for.
  • What problem it resolves.
  • How it should be framed.

Through the AEO lens, I look at meta descriptions as a one-sentence briefing note for both users and LLMs.
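Pulling the title tag and meta description advice together, here’s a sketch of what an AEO-oriented head section could look like for the session replay example above. The copy is illustrative, not prescriptive:

```html
<head>
  <!-- Descriptive title that states the page's primary answer, not just its topic -->
  <title>Session replay: what it is, when to use it, and when not to use it</title>

  <!-- One-sentence briefing note: who the content is for, what problem it
       resolves, and how it should be framed -->
  <meta name="description"
        content="A practical guide for product and support teams explaining what session replay is, when it helps diagnose user issues, and when other tools are a better fit.">
</head>
```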

Dig deeper: Meta tags for SEO: What you need to know

What changes — and what doesn’t — in the shift to AEO

You may have noticed a theme here — while, in general, what’s good for SEO is good for AEO, there are material differences between the two disciplines. Knowing what they are and how to adapt accordingly can pay off in AI search visibility.

I’m not arguing that your content strategy or themes should pivot. But knowing that AI models read and ingest content differently than more traditional SEO algorithms is important and should be factored into the way you’re repurposing your evergreen work from months and years past.

Google publishes Universal Commerce Protocol help page

2 March 2026 at 18:51
How Google works: Experiments, entities, and the AI layer beneath search

Google published a new help page detailing how its Universal Commerce Protocol (UCP) works — offering merchants clearer guidance on how checkout flows operate across Google properties.

What’s happening. The documentation explains how UCP and its UCP-powered checkout enable a native “Buy” button that moves the transaction directly onto Google surfaces, while merchants remain the seller of record. To activate the feature, merchants must implement the native_commerce attribute in Merchant Center.

Payments run through stored Google Wallet credentials, and processors must support Google Pay tokens.

Why we care. UCP was first introduced as part of Google’s agentic shopping push and later confirmed as live in Merchant Center. UCP moves checkout directly onto Google surfaces, reducing friction between product discovery and purchase. That could improve conversion rates, especially in AI-driven experiences like Gemini and AI Mode.

The new documentation also clarifies implementation requirements, helping merchants prepare their feeds and payment systems to participate in Google’s evolving, agent-powered commerce ecosystem.

The bigger picture. By centralizing checkout while keeping merchants as the seller of record, Google is reducing friction in AI-assisted shopping and tightening its control over the transaction layer.

The bottom line. With formal documentation now live, UCP moves from concept to operational playbook — signaling that AI-driven, on-Google checkout is becoming a core part of Google’s commerce strategy.

First seen. The help document was spotted by PPC News Feed founder Hana Kobzova.

Dig deeper. About the Universal Commerce Protocol (UCP) and UCP-powered checkout feature on Google.

How Google’s Universal Commerce Protocol changes ecommerce SEO

2 March 2026 at 18:00
Google’s Universal Commerce Protocol changes ecommerce SEO

For years, ecommerce ran on a simple model: Google drove traffic, and your site did the selling. Rankings, clicks, and conversion rate determined performance. That model just changed.

With the Universal Commerce Protocol (UCP) and AI Mode, Google can now discover, compare, and complete purchases inside its own AI experiences. Search is shifting from a traffic channel to a transaction layer. Visibility now depends on whether Google’s AI selects your product data.

When AI makes the recommendation and closes the sale, optimization moves upstream. The question isn’t just whether you rank. It’s whether you’re chosen.

Here’s what changed and what SEO and AI optimization teams need to do next.

The shift to agentic commerce

Google launched the Universal Commerce Protocol, or UCP, on Jan. 11. This new open standard is designed to let AI agents discover, evaluate, recommend, and purchase products across the web, all inside Google’s own AI experiences.

What stood out to me wasn’t just the protocol itself, but the ecosystem Google built around it. UCP was developed with platforms like Shopify, Etsy, Wayfair, Target, and Walmart, with payment networks already integrated. That kind of coordination suggests this was planned for the long haul, not just a quick test.

UCP integrations

At the same time, Google rolled out three platform-level capabilities that make this real in day-to-day shopping:

  • Business Agent gives brands an AI-powered representative inside Search and the Gemini app. Shoppers can ask product questions, compare options, and get brand-level guidance without visiting a website.
  • Direct Offers allow merchants to inject exclusive discounts directly into Google’s AI Mode, so promotions now live inside the recommendation engine itself.
  • Checkout in AI Mode lets Google complete purchases inside its own interface, turning Google from a traffic broker into a transaction layer.

More importantly, this allows Google to turn everyday conversation into commerce. Instead of waiting for shoppers to type product searches, Gemini can now respond to natural language prompts like “help me plan a camping trip” or “what will get wine out of my couch” by pulling live inventory, pricing, and availability from retailers, and completing the purchase in the same interaction.

Dig deeper: Are we ready for the agentic web?

What this means for ecommerce strategy

When AI intermediates the buying journey, brands compete inside the recommendation layer, not just in search results.

For most of my career, ecommerce worked the same way everywhere — search engines, ads, and marketplaces existed to send people to your site. Your site did the selling. UCP changes that model entirely.

Now AI handles the whole journey. It figures out what someone actually needs, compares the options, and can even complete the purchase. At that point, it doesn’t really matter how good your homepage or category page is if AI never chooses your product in the first place.

I saw this problem years ago, working with a large American candle retailer. People weren’t really shopping for candles. They were trying to get rid of pet smells, calm down after a long day, or make their house feel a certain way. But all we could give Google were scent names and product categories. 

If someone wanted something that killed pet odor without smelling like fake fruit, we probably had the perfect candle, but it almost never got shown because the data couldn’t express that situation.

Candle traditional attributes and AI-driven use cases

That’s what changes here. With Gemini and UCP, people can finally describe what they’re dealing with, and the AI can map that to the right products in a brand’s catalog.

And when checkout happens inside Google, everything shifts. You don’t win because someone clicked your site. You win because the AI picked your product. Business Agent pushes that even further by letting brands show up right in the middle of that decision.

In real terms, that can be the difference between moving a few thousand units and moving 10 times that, without changing a single product, just because the right things are finally being matched to the right people.

This creates a very different kind of competition than what we’re used to. In the past, weak data or mediocre pages might push you lower in the results. Now, when product data is incomplete or inconsistent, the AI has little reason to consider you at all.

Brands are competing for inclusion in the system’s recommendation set. That shift changes where the storefront lives. It now exists wherever the AI presents options in that moment.

Dig deeper: Google outlines AI-powered, agent-driven future for shopping and ads in 2026

Find a light-weight suitcase for an upcoming trip


The new playbook: How SEO and AI optimization help

For a long time, SEO was framed as getting pages to line up with keywords. In reality, search engines have always been trying to understand products well enough to make decisions on a user’s behalf. What’s changing now is how explicit that decision-making has become.

Google is feeding AI Mode, Gemini, and Business Agent with product feeds and structured data, and it keeps adding more fields that describe how products actually get used. Things like common questions, what works with what, and what people buy instead when something is out of stock. That’s how the AI starts to reason, not just match words.

I saw this clearly while working with an outdoor apparel brand. Someone planning a trip to Europe wasn’t really searching for a jacket. They were thinking about rain, cold mornings, long walks, and changing weather. We had the right products, but shoppers had to guess which filters to click or which category to start in to find them.

With agentic commerce, that guesswork goes away. A shopper can just say, “I’m going to Europe in the spring, what jacket should I bring?” and Gemini can look at weather resistance, weight, breathability, and what’s actually in stock, then show the few options that make sense.

That’s what all these new attributes unlock. They let the AI understand products the way a good salesperson would. And when that happens, it’s not a small optimization. It can be the difference between a campaign barely working and one that suddenly takes off.

Dig deeper: How AI-driven shopping discovery changes product page optimization

Competing in the AI selection layer

What matters isn’t page position. It’s whether Google’s AI understands what a product is, who it’s for, and when it should be recommended.

When I worked with a high-end luxury jewelry retailer, one of our biggest challenges was building “user journey” pages. We had to create landing pages for things like anniversary gifts, modern gold, or minimalist style because shoppers weren’t searching for SKUs. They were searching for meaning: 

  • “I need something that feels romantic.” 
  • “This is for someone who loves simple gold instead of flashy diamonds.” 
  • “I want it to look modern, not old-fashioned.”

Those pages worked, but they were slow to build, hard to keep fresh, and almost impossible to personalize.

Jewelry store example

With Gemini and UCP, that whole layer moves into AI. A shopper can just describe the person, the style, and the budget, and the system puts together the right products in real time. That feels less like search and more like having a personal shopper.

And none of that works without good product content. The descriptions, the specs, the reviews, even how people interact with the site are what give the AI something to reason with.

If your pages are thin or confusing, the AI has nothing solid to work from. For SEOs, this is the moment the fundamentals become decisive again.

The same goes for site experience. When people stick around, buy, and don’t return products, Google learns that your brand is a safe bet. In an AI-driven world, that trust is what keeps you in the recommendations.

Direct Offers then layer paid promotion on top of this organic selection system, creating a blended performance layer where feed quality, content quality, and media strategy all work together inside the AI buying experience.

Dig deeper: How SEO leaders can explain agentic AI to ecommerce executives

Using Google Merchant Center for agentic commerce

Product feed optimization essentials

Merchant Center has evolved beyond a Shopping ad upload tool. It now connects your entire retail operation to Google’s AI. Inventory, pricing, promotions, shipping, and product details all flow through it so Gemini can actually act on them. If that data is wrong or out of sync, the AI can’t confidently sell anything.

That’s why every field suddenly matters. Titles, descriptions, categories, GTINs, brand names, and images aren’t just metadata anymore. They’re how the AI knows what something is and whether it should trust it.

Google is also starting to add more human context into those feeds. Things like common questions, what accessories go with a product, what people buy instead, and how something is used in the real world. That’s how a machine starts to understand products the way a person does.

This is where a lot of brands get blindsided. A small pricing error, a feed that lags behind inventory, or a missing promotion is all it takes for products to quietly fall out of the AI layer. You might get an alert in Merchant Center, but if no one’s watching closely, the impact shows up in lost visibility long before anyone realizes what happened.

If you’re eligible, turning on Business Agent is part of this too. It lets your brand show up inside those AI conversations, not just as a product listing, but as something that can answer questions and close the sale.

And it’s not just the feed. Google is constantly comparing what you tell it in Merchant Center with what it sees on your site. When those two don’t line up, trust drops, and so does visibility.

Google - Product data optimizations

Product feed optimization essentials checklist

  • Complete all available attributes
    • Title, description, product type, Google product category.
    • GTINs, MPNs, brand identifiers.
    • Images – multiple angles, lifestyle shots.
  • New conversational commerce attributes (coming soon)
    • Answers to common product questions.
    • Compatible accessories.
    • Product substitutes.
    • Use cases and scenarios.
  • Feed quality signals
    • Price accuracy and competitiveness.
    • Availability and inventory status.
    • Shipping and return information.
    • Promotion data for Direct Offers eligibility.
  • Business Agent activation
    • Eligible U.S. retailers can activate in Merchant Center.
    • Customize the AI agent’s voice to match brand.
    • Train the agent on product data – coming feature.
    • Enable direct purchases within the chat experience.
  • Structured data alignment
    • Ensure website schema markup matches Merchant Center data.
    • Product schema, offer schema, and review schema all contribute to AI understanding.
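To illustrate the alignment point, here’s a minimal JSON-LD Product snippet whose name, GTIN, brand, price, and availability mirror the fields a Merchant Center feed would declare for the same item. All values are hypothetical placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Jacket",
  "description": "Lightweight waterproof jacket for spring travel.",
  "gtin13": "1234567890123",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "image": "https://www.example.com/images/trail-jacket.jpg",
  "offers": {
    "@type": "Offer",
    "url": "https://www.example.com/products/trail-jacket",
    "price": "129.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
</script>
```

The key discipline is that every value here must match the corresponding attribute in the feed; a price or availability mismatch between the two is exactly the kind of inconsistency that erodes trust.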

Connecting Google Search Console to Merchant Center

This connection matters more than most people realize. Search Console used to tell you how pages were doing. Merchant Center tells you how products are doing. In an AI-driven world, those two things are finally tied together.

Linking them turns this from guesswork into something you can actually manage. You can see which products are getting picked up by Google, which ones are getting ignored, and where bad data is quietly killing your visibility. Disapproved items, missing attributes, price mismatches – all of that shows up right where you can act on it.

It also lets you watch how demand is shifting. You can see when impressions move from traditional search into Shopping and AI results, and which products are benefiting. That’s how you know whether your catalog is really working inside the AI layer or just sitting there hoping someone clicks a link.

What to monitor

Once everything is connected, this becomes your early warning system. It’s the dashboard you end up living in. It tells you which products are broken, which ones are being ignored, and which ones are quietly driving everything.

Performance reports show which items are actually getting seen and clicked, whether that’s in Shopping results or inside AI experiences. And the alerts are what save you from surprises. Price mismatches, crawl issues, or policy problems can quietly pull products out of the AI layer without you ever noticing unless you’re watching.

In an AI-driven commerce world, those small data issues don’t just hurt performance. They decide whether your products show up at all.

  • Product feed diagnostics
    • Disapproved products and reasons.
    • Missing required attributes.
    • Data quality warnings.
  • Performance insights
    • Click-through rates on product listings.
    • Impressions in Shopping results vs. organic.
    • Conversion tracking across channels.
  • Issue alerts
    • Broken feeds or crawl errors.
    • Price and availability mismatches between site and feed.
    • Policy violations that limit visibility.
  • Action items
    • Set up automatic alerts for feed issues.
    • Regular audits of product data completeness.
    • Monitor new conversational commerce metrics as Google rolls them out.

The future of ecommerce visibility

This shift to agentic commerce is already happening. AI is now deciding what to show and what to recommend.

I’ve seen brands struggle not because their products were wrong, but because the right products never reached the right people. That’s always been the gap in search. People know what they need. The system just doesn’t always connect the dots.

Agentic commerce starts to close that gap. When AI understands both the shopper and the catalog, it can finally match real needs to real products instead of forcing people to guess the right keywords.

That’s what Google has built here. Search has become a system for turning intent into answers.

So the work is clear. Keep your product data clean. Connect Search Console and Merchant Center. And start thinking about how people actually describe their problems, not just how they type queries.

How to increase Google Discover traffic with technical fixes

2 March 2026 at 17:00
How to optimize for Google Discover in 2026

Google Discover caught my attention in 2021, when it was driving millions of clicks a month to publishers. I underestimated how pervasive it would become. 

My feed cycles through soccer, television, Baltimore news, SEO, and world events — a reminder that Discover understands users at an almost uncomfortable level.

It’s not limited to one app. Discover appears in Chrome new tabs, the Google app, Android home screens, Google.com on most mobile browsers, and other Google surfaces.

If Google Discover is everywhere, it’s our job as SEOs to capitalize on this opportunity. Let me show you how. 

Essential considerations before we begin optimizing for Discover

Discover traffic isn’t a viable source for all brands, just as search isn’t for all of them.

Discover favors timely content

Content that performs well in Discover is almost always highly time-relevant and from authoritative sources, generally major publishers. It would be unusual to see evergreen content in Discover.

Because of this, the sites I’ve worked with that get the most Discover traffic often get more traffic from Discover than from traditional search.

Discover traffic is declining

Many publishers are finding that Discover traffic is declining, as the Discover feed now includes a large volume of social posts and AI summaries of major stories from multiple sources. This displaces the articles that used to make up the feed.

Before this change, writing articles about viral social media posts was a very effective strategy for driving millions of monthly clicks. This may be why Google is beta testing the ability to track traffic to social platforms.

Good, relevant content still matters

No matter how technically optimized a website is, content that’s good and relevant to users will outperform content that isn’t, even when their interests are constantly changing.

If your content doesn’t get traffic in Discover, consider whether it’s the kind of content Discover aims to surface. Likewise, if you experience a sudden drop in Discover traffic, review the content before exploring technical causes.

Don’t let any of this deter you from optimizing for Discover. These optimizations won’t hurt traditional search, and you may end up getting Discover traffic you didn’t expect — I’ve often seen non-publishers experience brief spikes in Discover. Most of these suggestions are minor template-level changes that should be low effort.

Dig deeper. How Google Discover qualifies, ranks, and filters content: Research


Technical optimizations for Discover

The three main things I look at first when auditing new clients are:

  • Discover publisher profile.
  • Images in articles.
  • Publisher and author signals.

This is where your optimizations start.

Discover publisher profile

Check your Discover publisher profile to ensure your website and social profiles are linked. You’ll need a tool to find your publisher profile page. I use Damian Tsuabaso’s tool, which is in Spanish but still straightforward. Insert your brand’s name, URL, or entity ID, then search.

ESPN - Discover publisher profile

Interestingly, Discover profile pages are linked directly to your entity’s Knowledge Graph ID. The URL string in the profile page is a tokenized version of the KGMID (though not in all cases). I expand on this further in this LinkedIn post.

When reviewing your publisher profile page, focus on two main questions:

  • Does it reflect you as a publisher? New brands, or brands that have been acquired or rebranded, may have unclear publisher profiles. Fixing this requires clarifying your brand’s entity and making Knowledge Graph optimizations.
  • Are your brand’s social media accounts appearing on the page? Publisher pages can aggregate social media posts across platforms, and those posts are increasingly occupying real estate. Getting social profiles added may take time because there’s no dashboard for managing Discover profile pages.

To help link social accounts to your brand:

  • Ensure your Organization schema includes sameAs elements that list your social accounts.
  • Link to those accounts in your website footer.
  • Link to your website from your social accounts.
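A minimal sketch of that sameAs setup in Organization schema, assuming hypothetical brand and profile URLs:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Publisher",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.facebook.com/examplepublisher",
    "https://x.com/examplepublisher",
    "https://www.youtube.com/@examplepublisher",
    "https://www.linkedin.com/company/examplepublisher"
  ]
}
</script>
```

List only accounts you actively maintain; the goal is to give Google an unambiguous map between your entity and its social presence.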

Images

Google’s documentation emphasizes that using images, especially large images, is important for visibility in Discover. It also recommends using the max-image-preview:large tag to display the best-resolution image as the article card preview.

I generally check the following:

  • Confirm there’s a max-image-preview:large tag. This may seem minor, but many CMSs don’t include it in article templates by default, and I routinely see it missing.
  • Ensure displayed images, especially the hero image at the top of the article, have a minimum width of 1,200 pixels. The rendered size will vary by browser, but the image file itself should be at least 1,200 pixels wide.
  • Review the configuration of your Open Graph image tags. They’re usually the preview image used in Discover. This image should match the hero image and be 1,200 pixels wide. I frequently see the Open Graph image set to a logo, which Google has discouraged. While that specific line was recently removed from the documentation, I’d still avoid using a logo.

The Open Graph Protocol also allows you to define image dimensions. When feasible, use those properties and ensure they accurately reflect the image’s true dimensions.
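In template terms, the checks above boil down to a few tags in the page head. Here’s a sketch, with a hypothetical image URL for a hero image at least 1,200 pixels wide:

```html
<!-- Allow Google to show a large image preview in Discover cards -->
<meta name="robots" content="max-image-preview:large">

<!-- Open Graph preview image: matches the hero image, never a logo -->
<meta property="og:image" content="https://www.example.com/images/hero-1200.jpg">

<!-- Optional Open Graph dimension properties; values must reflect the
     image file's true dimensions -->
<meta property="og:image:width" content="1200">
<meta property="og:image:height" content="675">
```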

Publisher and author transparency

You’ve probably already considered E-E-A-T best practices, and implementing them supports overall content SEO performance.

For author transparency, I check that:

  • The article’s author is clearly defined on each article, including an image, byline, link to a bio page, and social links.
  • The listed authors are the actual contributors, not a generic company-wide byline or an uninvolved executive.
  • Author bio pages include a meaningful bio, credentials, links to social accounts, and links to other articles published on your site.
  • Relevant schema.org structured data related to the author is included on both the article and the bio page.
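For the structured data piece, here’s an illustrative example of article-level author markup; the names, URLs, and bio page are hypothetical:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example article headline",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "url": "https://www.example.com/authors/jane-example",
    "sameAs": [
      "https://www.linkedin.com/in/janeexample",
      "https://x.com/janeexample"
    ]
  }
}
</script>
```

The author’s `url` should point to the bio page described above, and that bio page should carry matching Person markup so the two reinforce each other.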

For publisher transparency, confirm that you:

  • Have an About Us page linked in your footer or main navigation.
  • Use Organization schema on your home or About page.
  • Have created robust terms of use and editorial policy pages related to your organization, and they’re linked in the footer. 

Discover is just the beginning

Discover is driven by relevance, timeliness, and authority, not checklists.

Technical optimizations won’t make content succeed in Discover if it doesn’t belong there.

The optimizations outlined above are essential for visibility, but the largest Discover opportunities are typically uncovered through broader content audits.

See how leaders bridge the engagement divide by attending ‘Engage with SAP Online’ by SAP Engagement Cloud

2 March 2026 at 16:00

Here’s a question every marketing leader should be asking right now: How healthy are your customer relationships? Not your campaigns, not your channels, but the actual relationships.

It’s a harder question than it sounds. Most organizations have spent the last two decades building around channels.  

Email had a team. Social had a team. In-store, ecommerce, and service each had their own stack, their own metrics, their own version of success. And from the inside, it looked like progress. Every team was hitting its numbers.

But from the customer’s perspective, it felt like dealing with multiple companies wearing the same logo. Marketing sends a “We miss you!” email the day after a frustrating support call. Sales doesn’t know the customer has already watched a demo. In-store purchase history is invisible to the ecommerce team. No continuity. No memory. No relationship.

On March 11, 2026, some of the sharpest minds in marketing, CX, and customer engagement are coming together to tackle exactly that. Engage with SAP Online is a free, half-day virtual event built for leaders who are done optimizing channels in isolation and ready to rethink how their organizations build and sustain customer relationships. 

Who’s speaking and why it matters 

The event opens with Sara Richter, CMO of SAP Engagement Cloud, sharing new findings from the SAP Engagement Index, a global study of 10,000 consumers and 4,800 senior decision-makers. But the real draw is the lineup that follows. 

Mark Ritson, professor, founder of MiniMBA, and arguably the most no-nonsense voice in marketing today, delivers the keynote: “Trends Shaping Customer Experience: What’s Real, What’s Not, and What Matters Most Now.” 

If you’ve followed Ritson’s work, you know what to expect: zero hype, sharp diagnosis and a clear-eyed take on how customer behavior is shifting faster than most brands realize. He’ll unpack why loyalty can no longer live in marketing alone and what leaders need to do about it. 

From there, two more sessions bring the theory to life: 

  • Jutta Richter (head of 1:1 campaign management, BMW Group) tackles the question of influence in modern customer journeys, and how brands can show up with relevance when customers are already halfway to a decision. 
  • Daniele Tedesco (ecommerce global process owner, Essity) and Venky Naravulu (director of partner solutions, Sinch) join Ritson to share real-world lessons on modernizing engagement through AI and connected systems. 

Across all sessions, the focus is on what’s working, what isn’t, and what to do about it. 

As Ritson himself put it in contributing to the Engagement Index: “Engagement isn’t something one department can fix. Every team shapes the brand, and the real progress comes when they work from the same understanding of the customer.” 

The backdrop: Why this conversation is urgent 

This event isn’t happening in a vacuum. Preview findings from the SAP Engagement Index, which will be unveiled in full at the event, point to a growing disconnect between what customers expect and what most organizations can deliver. 

Among the headlines: 

  • 75% of consumers say they’re put off by disorganized brands that pass them between multiple people or teams to resolve a single issue.
  • Yet 77% of brands claim their engagement strategies already deliver seamless experiences. 

SAP calls this the Engagement Divide: the distance between what customers need in the moments that matter and what most organizations can actually deliver. And based on the research, for most businesses, it’s growing. 

The channel mismatch alone tells a story. Customers have moved, but too many brands haven’t followed: 

  • 41% of consumers prefer to shop via mobile apps, yet only 28% of brands engage there. 
  • 43% of consumers prefer online shopping, yet only 26% of brands engage via web and e-commerce. 

And when SAP assessed how well organizations align people, processes and technology around engagement, just 21% scored at a high maturity level. The vast majority, 63%, sit in the middle: able to deliver basic personalization, but struggling with the coordination across marketing, sales, service, and commerce that consistent experiences demand. 

It’s a crowded middle tier, and breaking out of it requires more than better campaigns. It requires a fundamentally different operating model. 

From channels to relationships 

The conditions driving this divide have been building for years. Customer acquisition costs have climbed steeply across sectors. Third-party tracking is eroding. When it costs that much to win a customer, you can’t afford to lose them at a weak handoff between marketing and fulfillment, or between purchase and support. 

And consumers themselves have changed. With AI at their fingertips, they compare, switch and decide in seconds. They form opinions long before a brand’s message lands in their inbox. The micro-moments that used to belong to marketers now belong to customers, and those moments increasingly determine whether a brand wins or loses a relationship. 

At the same time, the technology to fix this has finally matured. Customer data platforms work. AI has moved from experiment to operational tool. Real-time processing is no longer enterprise-only. The capability exists. The question is whether organizations can reorganize to use it. 

At SAP, they’re calling this shift the Engagement Era: a move from organizing around channels and departments to organizing around the customer relationship as a whole. A world where engagement isn’t episodic but continuous, where loyalty is an outcome of connected experiences, and where every function that touches the customer journey is visible and coordinated. 

The research shows that intent is already there: 

  • 77% of businesses plan to invest in AI-powered engagement this year 
  • 76% are investing in omnichannel technologies 

The challenge is execution: moving from channel-centric optimization to relationship-centric orchestration. That means unified customer profiles visible across every department. It means journey-level visibility, not just campaign-level reporting. It means measuring success at the relationship level (lifetime value, retention, advocacy), not just opens and clicks.

The speakers and practitioners at Engage with SAP Online on March 11 are the ones building the playbook. If you’re ready to see what that looks like in practice, this is a half-day well spent. 

Engage with SAP Online 

Date: March 11, 2026 

Time: 9:00 AM ET | 1:00 PM GMT | 2:00 PM CET 

Format: Free, virtual, half-day event. Register now!

Microsoft Ads launches self-serve negative keyword lists

27 February 2026 at 23:16

Self-serve negative keyword lists are now live in Microsoft Advertising, according to Ads Liaison Navah Hopkins — giving advertisers long-requested control without submitting support tickets.

What’s happening. Advertisers can now create and manage shared negative keyword lists directly in the UI. Lists support up to 5,000 negative keywords (one per line) and can be applied at either the campaign or account level. Match types function the same way in Performance Max as they do in traditional Search campaigns.

  • Lists can also be edited, exported as CSV files, or removed from campaigns as needed.
  • Microsoft notes that match type formatting requires brackets for exact match and quotation marks for phrase match — not hyphens.
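As an illustration of that formatting, here is a minimal Python sketch (hypothetical helper names, not part of any Microsoft Advertising API) that applies the bracket/quote syntax and enforces the 5,000-keyword list cap noted above:

```python
# Hypothetical helper for building a Microsoft Ads shared negative
# keyword list: brackets = exact match, quotation marks = phrase match,
# bare text = broad match (no hyphens). Illustrative only.
MAX_LIST_SIZE = 5000  # per-list cap from the announcement

def format_negative(keyword: str, match_type: str) -> str:
    """Return a keyword formatted per Microsoft Ads match-type syntax."""
    keyword = keyword.strip()
    if match_type == "exact":
        return f"[{keyword}]"
    if match_type == "phrase":
        return f'"{keyword}"'
    return keyword  # broad match

def build_list(pairs) -> str:
    """One keyword per line, enforcing the 5,000-keyword list cap."""
    if len(pairs) > MAX_LIST_SIZE:
        raise ValueError("Microsoft Ads lists cap at 5,000 negatives")
    return "\n".join(format_negative(k, m) for k, m in pairs)

print(build_list([("free trial", "phrase"), ("cheap shoes", "exact"), ("jobs", "broad")]))
```

The resulting text pastes directly into the one-keyword-per-line list editor.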

Why we care. Negative keywords are critical for filtering irrelevant traffic and protecting budgets. Making lists self-serve streamlines workflow, reduces reliance on support tickets, and gives advertisers faster control over search query exclusions.

The bottom line. Microsoft is handing more operational control back to advertisers — and eliminating friction in one of the most essential levers for campaign efficiency.

Dig deeper. How to add keywords that won’t trigger my ads (negative keywords)

Google publishes new Google Ads passkey help doc

27 February 2026 at 22:48

Google published a new help document outlining how passkeys work in Google Ads — a timely move as advertisers face a rise in account hacks and phishing attempts.

What’s happening. The new help page explains how passkeys function as a passwordless, phishing-resistant login method in Google Ads, and clarifies when they’re required — including for sensitive actions like user access changes and account linking updates.

The documentation walks advertisers through device requirements, setup steps and security considerations.

Why we care. Ad accounts are increasingly being targeted by attackers, with compromised logins leading to budget theft, campaign disruption and data loss. Clearer guidance from Google gives advertisers a straightforward path to strengthening account defenses at a critical moment.

The bottom line. As account takeovers become more common, better education around security tools like passkeys is a practical win for advertisers looking to lock down access and reduce risk.

Dig deeper. About Google Ads account passkey

Google patent hints it could replace your landing pages with AI versions

27 February 2026 at 20:44

A Google patent suggests Search may take you from the results page to a super-personalized AI-generated page that answers your query instead of sending you to a website.

Patent. The patent, AI-generated content page tailored to a specific user, was filed by Google about a year ago and granted last month.

This patent describes a system that uses AI to automatically create a custom landing page when you perform a search. Instead of sending you to a generic homepage, it dynamically generates a page tailored to your intent and the organization’s content.

Patent abstract. Here is a copy of the abstract of the patent:

“Techniques for generating an artificial intelligence (AI)-generated page for a first organization. The system can include a machine-learned model configured to generate the AI-generated page. The system can receive from a user device associated with a user account, the user query. Additionally, the system can generate a search result page for the user query. The search result page can include a first result associated with a first landing page of the first organization. The system can calculate a landing page score for the first landing page. The system can generate an updated search result page based on the landing page score exceeding a threshold value, the updated search result page having a navigation link to an AI-generated page for the first organization. The system can cause a presentation, on a display of the user device, the updated search result page.”

Example. Here’s a fictitious example: You search for “waterproof hiking boots for wide feet” on a large retailer like REI or Amazon. Normally, clicking a result takes you to a generic Hiking Boots page, and you have to filter it yourself. Instead, Google could use AI to generate a new page that delivers a more customized, pre-filtered result.

Credits. This was spotted by Brandon Lazovic and posted by Joshua Squires on LinkedIn. Squires wrote:

  • “In short, Google would use AI to generate a page that looks like your website but rebuilds the entire structure of a page dynamically, in real time, and places it at the top of the SERP. This throws up all kinds of red flags to me.”

Glenn Gabe wrote:

  • “If you thought AIOs angered people, just wait for AI-generated landing pages from Google. Yes, Google could create new landing pages from the SERPs if yours isn’t good enough (based on this patent).”

And Lily Ray added that this is “Terrifying to be honest.”

Why we care. This is just a patent and doesn’t mean Google is doing this now or will in the future. Some may see it as similar to AI Overviews or AI Mode. Either way, it’s worth reading if you want insight into how Google is thinking.

OpenAI: ChatGPT now has 900 million weekly active users

27 February 2026 at 20:31

ChatGPT now has more than 900 million weekly active users, OpenAI announced. This is the first time OpenAI has publicly cited the 900 million weekly active user mark.

Why we care. User behavior continues to fragment beyond traditional search. If 900 million people use ChatGPT weekly, discovery, research, and product comparisons are increasingly happening within AI interfaces. That said, many of those actions tend to lead users to traditional search for confirmation.

The details. OpenAI shared the figure of 900 million weekly active users while announcing a new $110 billion funding round. The company also reported more than 50 million consumer subscribers and over 9 million paying business users.

What it means. ChatGPT is a place where you compete for queries, commercial intent, and brand visibility. While not all behavior here is “search” in the strict sense, you need to understand how content is surfaced, cited, or summarized in AI-generated answers — and how that impacts conversions.

OpenAI’s announcement. Scaling AI for everyone

You can now build PPC tools in minutes with vibe coding

27 February 2026 at 20:00

You can now generate custom PPC tools in plain English. With GPT-5 enabling complete program generation, the competitive edge belongs to those who master AI-assisted automation.

Frederick Vallaeys is building tools in minutes, not days or months, with AI. Vallaeys spent 10 years at Google building tools like Google Ads Editor, then another 10 building tools at Optmyzr, where he’s CEO.

He’s watched automation evolve firsthand, and vibe coding is the next leap. At SMX Next 2025, he shared his journey with vibe coding.

The traditional script problem

If you work in PPC, automation has always been top of mind. In the early days, you relied on Google Ads scripts. Scripts are great because there’s always more work than fits in a day.

But here’s the problem: when Vallaeys asks who actually writes their own scripts, only three to five out of 100 raise their hands. Most people copy and paste scripts because they don’t know how to code.

This works, but it’s limiting. You’re stuck with what someone else built instead of implementing your own secret sauce.

GPT changes the game

A couple of years ago, GPT made it easy to write scripts without knowing how to code.

The best part? Large language models are multimodal. You can take a whiteboard flowchart of your campaign decision tree, give the image to AI, and it’ll write the full Google Ads script.

Vallaeys suggests rethinking meetings. Instead of seeing client meetings as more work, treat them as prompt-engineering sessions.

It’s easy to get frustrated when clients add more to your plate. But with a mindset shift, the meeting becomes the prompt that tells AI what to execute.

What is vibe coding?

Instead of writing lines of code, you describe what you want the software to do, and the AI handles the technical implementation. That’s vibe coding.

Imagine your team needs software that does X, Y, and Z. Write down what it needs to do, give it to a coding tool, and it builds the software. As Vallaeys says, it’s mind-blowing.

Scripts are old news. Vibe coding is the new frontier.

A live example: Building a persona scorer

Vallaeys showed how fast this works. He went to Lovable and said, “Build me a persona scorer for an ad that shows how well it resonates with five different audiences.”

In less than 20 seconds, the AI responded with its design vision, features, and approach. It explained exactly what it would build, so he could immediately say, “Actually, make it 10 audiences instead of five.”

You work with it like a human developer — without touching code. You just describe what you want changed.

The framework: What should you automate?

Traditionally, you automated two types of work: quick, frequent tasks (like reviewing search terms) and long, infrequent tasks (like monthly reporting with analysis).

Vallaeys advises you not to limit automation to what you already do. Think about what you wish you could do more often but haven’t because it’s too time-consuming. That’s prime automation territory.

The old way vs. The new way

The old process was painful. Launching something took at least a month.

You’d spend days writing specs. Engineers would spend days building. You’d find bugs, coordinate meetings, and repeat.

The other problem? Traditional code was deterministic — pure if/then logic. Great for reliability, but terrible for nuanced decisions like, “Is this a competitor term?” It’s nearly impossible to program every variation of competitor keywords.

The promise of on-demand software

Sam Altman announced GPT-5, leading with “on-demand software generation.” The industry is moving beyond software-as-a-service to true on-demand software.

The new way? Write a one-paragraph spec (five minutes), give it to AI (15-minute build), then review and iterate (three minutes per change). In under an hour, you have working automation.

This new code is flexible, not just deterministic. LLMs can answer nuanced questions like, “Is this a competitor term?” with high probability. It’s the best of both worlds.
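The contrast can be sketched in a few lines. The deterministic matcher below is runnable; the LLM half is a hypothetical prompt sketch, since the client wiring depends on whichever API you use (the competitor names are made up):

```python
# Deterministic approach: exact-string rules break on variants.
COMPETITORS = {"acme ads", "acmeads"}  # made-up competitor names

def is_competitor_deterministic(term: str) -> bool:
    """Pure if/then logic: only catches spellings you anticipated."""
    return term.lower().strip() in COMPETITORS

print(is_competitor_deterministic("Acme Ads"))                 # caught
print(is_competitor_deterministic("acme advertising reviews")) # missed variant

# Flexible approach: hand the nuance to an LLM (hypothetical sketch).
PROMPT = (
    "Is the search term below a reference to one of our competitors "
    "(Acme Ads)? Answer only YES or NO.\n\nTerm: {term}"
)
# response = llm_client.complete(PROMPT.format(term="acme advertising reviews"))
# An LLM can recognize the variant with high probability, where the
# if/then rule above cannot.
```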

The expanding scope of automation

With vibe coding, anything you can explain to a human, a machine can build. Landing pages that follow brand guidelines? Done. Custom audience tools? Done.

Here’s the radical shift: you can now automate tasks that take just 90 minutes by hand. Build throwaway software for one-time tasks. Even if it breaks next month, it saved you time today.

What can you build with vibe coding?

You can build landing pages, microsites, interactive web apps, Chrome extensions, browser extensions, and WordPress plugins — all through simple prompts.

Available tools

Start with Claude or ChatGPT — tools you likely already subscribe to. They’re great for data analysis, calculators, and quick visualizations.

For more complex apps that need databases or login systems, use Lovable, V0.dev, Replit, or Bolt. They handle the complexity, so you don’t have to.

If you’re more technical, try Codex, Bolt.new, or Cursor. But for most people, the simpler tools handle almost everything.

Case Study 1: Seasonality analysis tool

Vallaeys asked someone on his team who had never coded to build a seasonality analysis tool. She fed PPC Town Hall podcast videos into Claude.

The process was simple: gather resources, write a prompt, give it to AI, and test it in the browser. No installation required.

The team iterated on the fly, asking for different plots and forecasting methods. In minutes, they had advanced enhancements. The AI knew where to add help text and simplify the interface because it’s trained on millions of web apps.

Case Study 2: Panel of experts tool

Vallaeys wanted multiple custom GPTs to review his blog posts in sequence, each giving feedback from its persona. Then a consolidator GPT would summarize the most common feedback into three to five bullet points.

He vibe-coded this in V0.dev by describing what he wanted. It generated a clean tool with text input, the ability to add custom GPTs, and everything worked.
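The underlying flow is simple enough to sketch. Here is a hedged Python outline of the sequence (the reviewer functions are stubs standing in for the custom GPT calls the real tool makes; all names are illustrative):

```python
# Sketch of the "panel of experts" flow: persona reviewers run in
# sequence, then a consolidator summarizes the panel's feedback.
def make_reviewer(persona: str):
    def review(post: str) -> str:
        # Stub: a real implementation would prompt an LLM as this persona.
        return f"[{persona}] feedback on: {post[:30]}..."
    return review

def consolidate(feedback: list) -> list:
    # Stub consolidator: a real one would ask an LLM to summarize the
    # most common feedback into three to five bullet points.
    return ["- " + f for f in feedback[:5]]

personas = ["SEO editor", "Brand voice checker", "Fact skeptic"]
post = "Draft blog post about vibe coding for PPC teams."
panel = [make_reviewer(p)(post) for p in personas]
for bullet in consolidate(panel):
    print(bullet)
```

Describing exactly this sequence in plain English is what the vibe-coding tool turned into a working app.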

Case Study 3: Chrome extension for demos

For customer demos, Vallaeys needed to blur sensitive numbers. He wanted options:

  • Fully redact or just blur?
  • Include currencies or only numbers?
  • Handle different separators?

He built a Chrome extension with all those options using simple prompts. Problem solved.

Prompting tips for success

Always include the use case. Say “seasonality tool” instead of vague terms like “time series analysis.” The AI makes better assumptions and may suggest approaches you hadn’t considered.

Ask questions: “How did you approach this?” or “Where do you store data?” It helps you learn.

Use chat mode to explore alternatives without changing the code. Ask for three approaches, pick one, go deeper, then say, “Execute that.”

The PPC audience analyzer

The audience analyzer Vallaeys’ team built is available to try. You can grab the code, add your logo, turn insights into action items — whatever you need. Just tell it what to change, and it updates.

Final thoughts: Stay competitive

Vallaeys makes one point clear: you’re not competing against AI. You’re competing against people who use it better than you do.

Try vibe coding today. Go to one of these tools and give it a single prompt. See what happens. The first time Vallaeys tried it, his mind was blown.

Now that you’ve learned something new, use it to get better at AI. That’s how you stay ahead.


How to build a context-first AI search optimization strategy

27 February 2026 at 19:00

AI-based discovery offers a new level of sophistication in surfacing content, without relying solely on keywords. Beyond keyword-string-first approaches, contextual and semantic elements are now more important than ever.

Optimization is no longer about just reinforcing the keyword. It’s also about constructing a retrievable semantic environment around it.

This impacts how we write, create, and think about content. It applies whether you write every word yourself or employ automated workflows.

Reframing your publishing strategy around context

Much has already been written about the concepts covered here. This discussion focuses on tying them together into a more cohesive publishing strategy and tactical approach.

If you’re already operating in a context mindset, you’re likely making these elements work for you. If you’re still using keyphrase-first approaches and want a stronger grasp of deeper contextual and semantic strategy, keep reading.

Context, semantics, meaning, and intent have long been core to optimization. What’s changed is how content is presented and discovered, particularly within LLM-based platforms.

This shift affects how context is categorized and structured across a website. It applies to site taxonomy, schema, internal linking, and content chunking and clustering.

It also means moving away from verbose word counts and getting to the point. That benefits both the machine layer and the human reader.

Keywords aren’t obsolete. But they don’t function as isolated optimization tactics. Context-led strategies aren’t new. However, they require greater attention to define what your publishing strategy means moving forward.

Dig deeper: If SEO is rocket science, AI SEO is astrophysics

Structure for a contextual-density approach

When considering the keyphrase as a multidimensional point for building semantics, it may be more productive to think of these combined concepts within a single framework. In essence, every topic exists as a semantic field rather than a word or phrase. These areas include:

  • Axis term (primary topic/keyphrase).
  • Structural context (secondary and tertiary concepts).
  • Problem context (intent).
  • Linguistic variants (stemmed or fanned phrasing).
  • Entity associations.
  • Retrieval units (chunk-level readability).
  • Structural signals (internal links, schema, and taxonomy).

While the main keyphrase is the anchor and axis point for the linguistic dimensions that surround it, it's everything else surrounding the keyword that defines true performance and meaning.

In other words, the sum of all the “other” words — headings, subheadings, references to related concepts, and various entities related to the keyphrase — is just as important as the keyphrase itself. This is a very basic concept in producing well-thought-out writing, but it’s now more important.

Context density and SERP-level linguistic analysis

One way to think about this shift is by comparing keyword-level linguistic analysis with search engine results page-level linguistic analysis.

SERP-level linguistic analysis isn’t new. One of the first major tools to address this concept was Content Experience by Searchmetrics and Marcus Tober.

The platform launched around 2016 — priced for enterprises — and focused on scraping the top results page for a given keyword, then averaging and weighting the other words common across high-ranking pages.

The idea was that those additional words and entities, which helped define a comprehensive set of results for a topic, would yield key semantic indicators for content performance.

These reports provided stemmed concepts, entities, and specific language modifiers to add hyper-context to the main topic.
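A toy version of that analysis can be sketched in a few lines. This is a simplification, assuming bag-of-words counts rather than the richer NLP those platforms use; the sample pages are invented:

```python
# Toy SERP-level term analysis: given the text of top-ranking pages for
# a keyword, surface the terms those pages share, weighted by how many
# pages use them and how often.
from collections import Counter
import re

STOPWORDS = {"the", "a", "and", "for", "of", "to", "in", "is", "with", "best"}

def terms(text: str) -> list:
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def shared_terms(pages: list, min_pages: int = 2) -> list:
    """Rank terms by (number of top pages containing them, total frequency)."""
    doc_freq, total = Counter(), Counter()
    for page in pages:
        ts = terms(page)
        doc_freq.update(set(ts))
        total.update(ts)
    return sorted(
        (t for t, df in doc_freq.items() if df >= min_pages),
        key=lambda t: (doc_freq[t], total[t]),
        reverse=True,
    )

pages = [
    "waterproof hiking boots with ankle support for wide feet",
    "best hiking boots: waterproof membranes and wide-feet sizing",
    "trail boots review: waterproof, breathable, wide toe box",
]
print(shared_terms(pages))
```

The terms every top page shares (here "boots," "waterproof," "wide") are the semantic indicators the enterprise tools surfaced at scale.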

Other tools, such as Clearscope, used different methods to achieve similar results.

In my experience, these types of analyses have been very useful for creating high-performing content.

They’ve worked well competitively and have been especially effective in linguistic areas where competitors lacked this level of analysis in their own content.

Dig deeper: Content scoring tools work, but only for the first gate in Google’s pipeline

Using secondary and tertiary keyphrases as contextual linguistic struts

Understanding this type of analysis helps you go deeper into semantic page construction by categorizing ancillary language into a hierarchy and emphasizing it accordingly, particularly at the second and third tiers. You can go as deep with the hierarchy as your content scope permits.

Secondary and tertiary keywords should form what I often refer to as “linguistic struts” — supporting elements that reinforce your main topic while expanding its scope and relevance.

Think of them as context stabilizers or intent differentiators for a given topic or theme. The choices you make here ultimately define the context and relevance of your content.

Each secondary keyword should serve a specific purpose within your page architecture, whether it’s introducing a new subtopic, answering a related question, or providing additional context for your primary theme.

Once you’ve defined this secondary and tertiary language, it can guide your outline and then the final writing. 

This approach applies to everything from manually written work to fully automated and synthetic processes.

Stemmed linguistics

One of the most powerful aspects of comprehensive contextual keyword optimization is its ability to capture stemmed and fanned-out searches — related queries that share common roots or concepts with your optimized keywords.

In other words, related keyphrases and searches you may not have directly optimized for within the primary topic. These types of searches can be extremely valuable, often more so than the primary keyphrase, because they reflect more refined and deliberate intent.

For example, if you’ve created a comprehensive guide for “content marketing,” your page might also rank for searches such as “implementing content marketing strategies,” “content marketing strategy implementation,” or “hire B2B content marketing expert.”

The sum of these stemmed variations often represents significantly higher-intent search volume than any individual keyword.

The more thoroughly you cover secondary and tertiary keywords, the more stemmed and fanned searches you’re likely to capture.

Dig deeper: How to use relationships to level up your SEO

High-level technical foundations for contextual emphasis

When discussing the move from a string-based strategy to a context-based strategy, it’s as much about how machines process content as it is about writing.

LLM-powered platforms evaluate context at multiple layers — how content is segmented, how topics are structurally connected, and how meaning is formally implied.

Retrieval mechanics: From pages to chunks

Large language models retrieve segments of content — referred to as “chunks” — that have been transformed into vector representations.

In simplified terms, your page is broken into retrievable units. Those units are evaluated for contextual similarity to a prompt, and the LLM selects the chunks that best align with the intent and semantic patterns in the query.

Contextual similarity emerges from co-occurring terms, related entities, problem points, and semantic density within a chunk.

If a chunk lacks contextual depth — in other words, if it simply repeats a primary term without expanding the surrounding semantic field — it becomes thin in the embedding layer.

Thin chunks are less likely to be retrieved, even if the page ranks well in traditional search.
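The retrieval step can be sketched with a toy model. This assumes simple bag-of-words vectors and cosine similarity; production systems use learned embeddings, so treat this only as an illustration of the mechanics:

```python
# Toy chunk retrieval: split content into chunks, vectorize each one,
# and return the chunk most similar to the query.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words stand-in for a real embedding."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = [
    "Content marketing builds audience trust over time.",
    "Negative keywords filter irrelevant traffic and protect ad budgets.",
    "Schema markup formalizes entity relationships for machines.",
]
query = "how do negative keywords protect budget"
best = max(chunks, key=lambda c: cosine(vectorize(query), vectorize(c)))
print(best)
```

A chunk that merely repeated "negative keywords" without the surrounding terms ("filter," "traffic," "budgets") would score lower against most phrasings of the query, which is the thinness problem described above.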

The implication for your writing is straightforward: Getting to the point faster can be a significant advantage at both the page and site levels. It can improve machine readability and create a better human reading experience, serving multiple KPIs.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Structural context: Architecture as meaning

How your content is organized structurally also conveys meaning within LLM-based discovery. Beyond providing a taxonomical hierarchy, structure acts as a contextual signal.

Architecture teaches the system how your topics relate to one another. Internal links lend inference and meaning to related topics and entities.

Taxonomy conveys the semantic mapping of your connected content within a domain or across domains. URL naming and structure further signal hierarchy and topical relationships.

When a page sits within a clearly defined topical cluster and links to related concepts and subtopics, it inherits contextual reinforcement.

An LLM understands what the page says and where it lives conceptually within your broader domain.

Schema and entity context

There’s also a layer of meaning that can be formally stated through schema markup.

Schema markup and entity modeling provide explicit clarification of what something is, who is involved, and how elements relate to one another.

Where linguistic context builds meaning implicitly through unstructured writing, schema states its intended meaning through structured data.

In doing so, it formalizes entity relationships, reduces ambiguity, and reinforces identity and topic signals across platforms.

This doesn’t replace strong writing, but it strengthens it by ensuring machine-readable contextual emphasis.
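As a minimal sketch of what that formal statement looks like, here is a schema.org Organization block built and serialized in Python (the organization name and URLs are placeholders, not real values):

```python
# Minimal JSON-LD sketch (schema.org Organization). Embedded in a page
# inside <script type="application/ld+json">, it states entity identity
# and relationships explicitly instead of leaving them implied.
import json

org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",            # placeholder
    "url": "https://www.example.com",  # placeholder
    "sameAs": [
        # Links that disambiguate the entity across platforms.
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

print(json.dumps(org_schema, indent=2))
```

The `sameAs` entries are the part doing the disambiguation work: they tie the on-page entity to its identities elsewhere on the web.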

In a contextual discovery environment, every technical element exists to strengthen semantic retrievability.

For a deeper dive into the technical shift in content discovery in the age of AI, I recommend Duane Forrester’s book, “The Machine Layer.”

Dig deeper: Organizing content for AI search: A 3-level framework

Moving to a context-first strategy

When you align linguistics, structure, and declaration around a clear topical axis, the strategy centers on the contextual environment.

Transitioning from a purely keyphrase-centered strategy may seem daunting at first, but it’s something you can begin doing today in how you write and research your content.

In simple terms, moving to a context-first strategy is about how you approach writing at both the page and site levels and making your content as machine-readable as possible.

The dark SEO funnel: Why traffic no longer proves SEO success

27 February 2026 at 18:00

SEO is transitioning from rank, click, and convert to get scraped, summarized, and recommended. 

We’ve entered the era of invisible attribution known as the dark SEO funnel — where traditional top-of-funnel (TOFU) traffic is collapsing, the messy middle is getting messier, and SEO success can no longer be measured by clicks. 

Up to 84% of B2B buyers now use AI for vendor discovery, and 68% start their search in AI tools before they ever touch Google, new data from Wynter reveals. Buyers are using ChatGPT to narrow down their options and Google to verify.

If you’re still judging SEO success by traffic, you’re optimizing for a model that no longer exists. Here’s how to brace for impact. 

Defining the dark SEO funnel

Marketing leaders are already familiar with the concept of dark social — the idea that buyers share content in private channels (Slack, DMs, WhatsApp) where tracking pixels can’t see them. Dark SEO is the algorithmic search equivalent.

In dark social, a peer recommends the brand, and the buyer Googles it. In dark SEO, an LLM recommends the brand, and the buyer then Googles it.

From training data to answer summaries, this journey is invisible to traditional analytics:

  • Ingestion: An LLM consumes your content and understands your entity.
  • Recommendation: A user asks a problem-aware question (e.g., “best tools for X”), and the LLM recommends your brand as a solution.
  • Verification: The user, now aware of you, goes to Google and searches for your brand name to validate the choice.

The credit conveniently goes to “direct” or “branded search.” Meanwhile, the work was done by SEO or GEO.

This is the dark SEO funnel: where discovery happens in a non-click environment, attribution gets wiped out, and SEO looks like it’s “underperforming” even while it’s actively filling the pipeline.

The role of Google has fundamentally changed. As one surveyed CMO explained:

  • “I use Google only if I have certainty about which specific software types or products I want.”

AI is for evaluating. Google is for verifying. This is a radical shift.

Dig deeper: Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it


The strategic shift: Brand mentions vs. LLM citations

Winning in the dark funnel era requires an understanding of two types of visibility.

In traditional SEO, the goal was clicks from a blue link. In AI search, the goal is inclusion, which happens in two different ways. 


Brand mentions

This is when an LLM explicitly names your company as a solution.

  • Users ask: “Who are the top enterprise ABM platforms?”
  • AI answer: “The top recommendations are 6sense, Demandbase, and [Your Brand].”

You can’t “technical SEO” your way into this. It’s driven by entity strength — how often your brand appears alongside relevant topics across the web — and influenced by PR, podcast appearances, customer reviews, and what we have long called surround sound SEO.

Dig deeper: How to earn brand mentions that drive LLM and SEO visibility

URL citations

This is when an AI tool links to your content as a source of truth because you provided unique data or you were simply the most relevant result. 

  • Users ask: “What is a good NRR benchmark for Series B SaaS?”
  • AI answer: “According to [Your Brand]’s 2026 State of SaaS Report, the median NRR for Series B companies has dropped to 109% due to budget tightening.”

This is driven by information gain. If you publish unique data, contrarian views, and proprietary information, the AI cites you to ground its answer.

LLMs learn from the ecosystem. If you want to be recommended, you should optimize around the most relevant neighborhoods:

  • Review sites: G2, Capterra (where AI verifies sentiment).
  • Communities: Reddit, Quora (where AI verifies consensus).
  • Third-party publishers: Industry blogs and news sites.

If AI sees your brand mentioned consistently across a relevant neighborhood, it assigns you the authority to be recommended.

How to measure SEO in the dark funnel era

When traffic is no longer the north star KPI, leadership still wants proof that SEO is working. 

The strongest teams are pivoting to defensible signals that track revenue and reputation rather than just clicks. 

If brand discovery happens in AI, but the last click conversion happens on Google, your attribution model is fundamentally broken. 

Metrics to de-emphasize 

  • Broad informational traffic: “What is X” searches are now answered by AI. Losing this traffic is often a sign of efficiency.
  • Search impressions: This is tough to justify. I’ve never met a CMO who places high importance on impressions.
  • Isolated rankings: Ranking No. 1 for a given keyword doesn’t guarantee your brand will get recommended. 
  • CTR: In 2023, Michael King accurately predicted that the 10 blue links would get fewer clicks because the AI snapshot would push the standard organic results down. The 30-45% click-through rate (CTR) for Position 1 will drop precipitously.

Metrics to elevate 

  • Recommendations from LLMs: Are you visible for high-intent, comparison queries (e.g., “best CRM for enterprise”)? These are the queries users perform after the AI has educated them.
  • Branded traffic as a leading indicator: This is a great proxy for dark funnel success. Non-branded visibility leads to brand searches in this new era. And branded searches lead to conversions.
  • Product and solutions page traffic: Generally, this content is less volatile and less susceptible to traffic losses — therefore performance should remain level. 
  • Landing page conversion rates: If you’re getting less traffic, but higher-intent visitors, there should be an improvement in conversion rates. 
  • Self-reported attribution: This isn’t always perfect, but it’s directionally reliable. When website leads fill out forms asking “how did you hear about us?” they should be citing things like “online search” or “ChatGPT” or “Perplexity.”

The most powerful slide you can show in a meeting is this:

  • Informational traffic: ↓ (Declining)
  • Demo conversion rate: ↑ (Rising)
  • Pipeline: → (Stable or growing)

That isn’t a decline. That is what I call the Great Normalization of SEO. You are trading high-volume noise for high-intent signal.

Dig deeper: How to get cited by ChatGPT: The content traits LLMs quote most

Brand visibility is the trophy, traffic is just the byproduct

To thrive in the dark funnel era, you must stop playing the old SEO game.

The brands that adapt aren’t chasing cheap clicks. They will dominate inclusion, recommendation, and commercial intent — even as the modern SEO funnel grows darker.

Here’s your mandate for 2026:

  • Narrow your focus: Track 30-50 high-intent money prompts instead of thousands of vanity keywords.
  • Surround sound marketing: Invest in third-party visibility and narrative control (surround sound SEO), not just your own domain.
  • Information gain: Aim to blend search-driven topics with opinionated, research-backed, information-gain insights.
  • Highlight revenue metrics: Report on the organic contribution to pipeline, not just click volumes. 

As we saw with dark social, CTR and attribution from social platforms declined with the rise of zero-click marketing. It’s now time to concede defeat on traffic as we apply those same learnings to dark SEO. 

How to become an SEO freelancer without underpricing or burning out

27 February 2026 at 17:00

Many SEO professionals enter freelancing for the same reason: freedom. They dream of fewer meetings, flexible hours, and the ability to choose their own projects. 

What they don’t expect? Freelancing isn’t just “SEO without a boss.” It’s SEO plus sales, scoping, contracts, billing, and client management. Without those essential pieces, even the strongest SEOs struggle to make freelancing sustainable. 

We’ll break down each step in this process to bridge the gap between dream and reality. By the end of this article, you’ll know exactly how to build a sustainable freelance practice so you can become a digital nomad answering client emails and enjoying mojitos from a beach in Bali (if you so choose). 

Before you get started: Understand what you’re actually building

Let’s make one thing clear: SEO freelancing doesn’t look like attending quarterly planning meetings to fight for budget or sending another sad Slack to the product team asking them to prioritize your recommendations.

In that scenario, you’re closer to a contractor embedded in someone’s workflow than an independent freelancer. And that distinction matters. It determines how much control you have over your time, scope, and pricing. 

SEO freelancing typically includes:

  • A clearly scoped engagement with a defined start and end.
  • Ownership over how the work is delivered, not just what’s delivered.
  • Pricing tied to outcomes or deliverables instead of availability.
  • The ability to say no when a project doesn’t fit.

So before you quit your job to take on your first client, make sure you know exactly what you’re signing up for. 

Step 1: Pick one thing and get unreasonably good at it

Now that you know exactly what your SEO freelancing gigs should look like, here’s the secret to how some freelancers can charge $200/hour while others still struggle to get $40:

Specialization. 

Generalist freelancers compete on availability and price. “I do SEO” means you’re fighting everyone who just “does SEO.” You win projects by being there when the client needs someone — and your price is what they’re willing to pay. 

Specialists, on the other hand, compete on expertise, speed, and pay-off. An expert who “audits JavaScript rendering issues for React migrations” will face a much smaller pool of competitors. Because of that, you can price based on what you’ve delivered. 

When it comes to SEO freelancing, those high-value specializations look like: 

  • Technical SEO audit for site migrations: Companies budget for migrations because they’re terrified of what could go wrong. They pay well for any de-risking an expert can offer. 
  • Programmatic SEO implementation: Sites make money from organic traffic at scale, so they understand well the ROI of investing in your services. 
  • Technical enterprise ecommerce SEO: These high-stakes sites, with complex templates, faceted navigation, and crawl budget constraints, command high budgets and demand timely deliverables.
  • SEO that actually gets you ChatGPT visibility: Yes, GEO is a selling point that everyone wants to buy, and yes, offering that specific skill (and backing it up with data) will put you on the map. 

What doesn’t work? 

  • SEO “guru” positioning: Claiming broad expertise without clearly defining the problem you solve or the outcome you deliver. 
  • Lack of specialization: Offering every SEO service under the sun with no defined specialty makes it harder for prospects to understand where your expertise actually lies. 
  • Competing on price: When price is your main differentiator, you’re positioning yourself as interchangeable instead of valuable. Experience-driven specialists rarely win or lose work based solely on their hourly rate. 

Most freelancers resist specializing, thinking, “What if I turn away work?”

You will. That’s the point. Turning down misaligned work is how you protect your time, pricing, and the quality of your work.

Dig deeper: How to keep your SEO skills sharp in an AI-first world

Step 2: Turn that one thing into something you can sell 100 times

The line between “I’ll do an SEO strategy customized to your needs” and “I deliver a technical SEO strategy with these eight components, this deliverable format, and this timeline” is productization. It’s the difference between delivering consistent, repeatable work and reinventing the wheel for every new client. 

Many freelancers misstep here by customizing too early. A client might say, “We also need help with content,” and you, as a freelancer, reply with “Sure, I can help with that.” Now you’re not delivering a productized audit — you’re doing custom work with an undefined scope. 

Here’s what you need to define to keep your deliverables consistent: 

  • Scope: What’s included in the work. 
  • Deliverable format: What the final product should look like (e.g., prioritized spreadsheet, slide deck, kickoff call). 
  • Timeline: At minimum, define the timeline as starting from the moment the client signs your proposal. 
  • Price: We’ll get into this can of worms in a second. 

Depending on the services you’re offering, you’ll also want to specify whether the engagement includes: 

  • Content audits.
  • Competitive analysis. 
  • Keyword research.
  • Implementation support. 
  • Ongoing monitoring. 
  • Additional stakeholder presentation.

The key to building out a strong productized proposal is this: you cut back on ambiguity. 

The prospect either needs what you’re offering, or they don’t. If they need more, you can follow up with another proposal including the additional pricing. 

Tip: If you do have a client asking, “Can you also look at our blog content, subdomain, redirects, or something else that’s outside the scope of this current project?” you don’t have to say no. 

You can say, “Yes, but that’s another project that I’ll need to scope out.” Just make sure you say anything but “Sure, I can take a quick look.” Resist. 

Dig deeper: How to build lasting relationships with SEO clients


Step 3: Price it like you’re running a business

Arguably, this is the trickiest side of freelancing. It can be hard to put a price on your time and expertise — and even harder to defend your pricing while selling your services.

There are three pricing models you can try here: hourly, project-based, and retainer. Most start with hourly since that’s the easiest to understand, and yes, that is a bit of a trap.

Hourly pricing: Good for beginners, terrible for experts

Setting an hourly rate makes sense when you’re starting out and aren’t sure how much to charge. Take your day-job salary, work out how much you get paid by the hour, and think about how much your benefits are worth to you. Add all that together, and boom! Hourly rate.

For example, say you got paid $100,000 at your full-time job. That’s about $48 per hour, assuming a standard 2,080-hour work year. And the average cost per hour of private-industry benefits is about $13. That means if you want to make exactly what you were making before, you’ll need to charge at least $61 per hour.
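The arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming a standard 2,080-hour US work year (40 hours × 52 weeks); the salary and benefits figures are the example’s, not fixed industry numbers:

```python
# Minimal sketch of the hourly-rate floor described above.
# Assumes a 2,080-hour work year; figures match the article's example.

def hourly_rate_floor(annual_salary: float, benefits_per_hour: float,
                      hours_per_year: int = 2080) -> float:
    """Return the minimum hourly rate needed to match a salaried job."""
    base_hourly = annual_salary / hours_per_year  # ~$48/hr on $100k
    return base_hourly + benefits_per_hour        # add the value of benefits

rate = hourly_rate_floor(100_000, 13)
print(round(rate))  # 61
```

Plugging in your own salary and benefits estimate gives you a floor, not a quote; market rates for your specialization should pull the final number upward.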

In practice, SEO freelance rates range from $75 to $200 per hour, though entry-level freelancers might start closer to $50. Consider your experience and expertise, and price yourself carefully so you don’t get locked into a too-low rate.

An hourly rate is great to start, but it falls short when you’re good at your job. You’re being rewarded for working slower and penalized for getting faster.

Project-based pricing: The model for productized work 

Once you’ve productized your services, you can start using project-based pricing. If you’ve delivered the same audit 15 times, you know how much work it takes you — and you know how much it’s worth.

The client doesn’t care if something takes you 20 hours or 15. They care about getting a quality deliverable in a timely fashion.

But it can be hard to get out of that hourly mindset. Here’s how to price projects when you’re starting out with freelancing:

  • Estimate how long the work will take you (or go with your best guess if you’ve never done it).
  • Multiply that by 1.5 to account for communication overhead, revisions, and unexpected complexity.
  • Track actual time spent (yes, even though you’re not charging by the hour).
  • Deliver the project.
  • Adjust pricing for the next client based on real data (and client results).

After your first five projects, you’ll know your actual costs. Up until then, you’ll be making educated guesses, but that’s OK. Everyone starts by guessing. 
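The pricing steps above reduce to a quick calculation. In this sketch, the 20-hour estimate and $100/hour target rate are illustrative assumptions, and the 1.5 buffer is the article’s suggested padding:

```python
# Sketch of project-based pricing: padded estimate x target hourly rate.
# The buffer covers communication overhead, revisions, and surprises.

def project_price(estimated_hours: float, target_hourly: float,
                  buffer: float = 1.5) -> float:
    """Quote = estimated hours, padded for overhead, at your target rate."""
    return estimated_hours * buffer * target_hourly

# e.g., a 20-hour audit at a $100/hour target rate
quote = project_price(20, 100)
print(quote)  # 3000.0
```

Tracking actual hours against each quote gives you the “real data” the article mentions for adjusting the next client’s price.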

Tip: Remember, the thing you’re charging for here is your knowledge, not your time. What the client is paying for is the results you offer. Always tie your work to how it can help your client achieve their goals. No one can put a price tag on exceeded KPIs. 

Retainer pricing: Useful for recurring work, but dangerous without boundaries 

Retainer pricing makes sense when the client needs consistent monthly deliverables, such as technical reviews, advisory support, and optimization recommendations.

You just have to be careful here to avoid scope creep. “We’re paying you $5,000 a month” can quickly turn into “Can you help with this product launch, this email campaign, this competitive analysis?” Guard your time wisely.

Here’s how to structure your retainers so they work for you:

  • Define the exact monthly deliverable: Clearly outline the tasks you’ll be working on each month. For example, “one technical audit per month” or “three page reviews a month.” 
  • Set rollover limits: Explain what happens if tasks fall by the wayside or projects get put on pause. This might look like saying “unused hours expire after 60 days” or “a maximum rollover of one month’s unused hours.” 
  • Exclude ad hoc requests: Clearly note that additional projects require separate proposals. 

For example, say you have a client who pays $6,000 a month for “monthly technical SEO review and eight hours of advisory support.” 

  • Month 1: The client uses six hours. Those two unused hours roll into month two. 
  • Month 2: They use 10 hours (unused two hours plus standard eight hours). 
  • Month 3: The client asks for a content audit. That project is separate and has its own pricing. 
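That rollover arithmetic can be modeled in a few lines. This sketch assumes the “maximum rollover of one month’s unused hours” option and the eight-hour allocation from the example:

```python
# Toy model of the retainer rollover rule above: unused hours carry over,
# capped at one month's allocation. Figures match the $6,000/month example.

MONTHLY_HOURS = 8
ROLLOVER_CAP = MONTHLY_HOURS  # at most one month's unused hours carry over

def available_hours(rollover: float) -> float:
    """Hours the client can use this month."""
    return MONTHLY_HOURS + min(rollover, ROLLOVER_CAP)

def month_end(rollover: float, used: float) -> float:
    """Return the rollover balance carried into the next month."""
    unused = max(available_hours(rollover) - used, 0)
    return min(unused, ROLLOVER_CAP)

r1 = month_end(0, 6)          # Month 1: 6 of 8 hours used -> 2 roll over
m2 = available_hours(r1)      # Month 2: 8 + 2 = 10 hours available
print(r1, m2)  # prints: 2 10
```

Separate projects, like the Month 3 content audit, never touch this pool; they get their own proposal and price.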

The best path here for a new SEO freelancer? Start with project-based pricing for your core offerings. Add retainers only after you’ve delivered the same project multiple times and you know exactly what you’re committing to. 

Tip: Only offer retainers when you know you can firmly hold a client to a set scope of work. Be confident in what you’re selling and how long it takes to deliver, so you make the best use of your time. 

Dig deeper: 7 ways to increase SEO revenue without losing clients

Step 4: Build systems before you’re underwater

The key to keeping all of this consistent? Systems. 

As a freelancer, you are the project manager, account manager, and delivery owner. Systems are what keep work moving when no one’s checking in on you. 

Here’s what you need to create a solid system so nothing slips through the cracks: 

  • Client onboarding. 
  • Email (follow-ups and replies).
  • Billing.
  • Contracts.
  • Deliverable templates.
  • Offboarding.

Client onboarding: Get everything up front

The biggest delay to any project? Waiting on access for tools, documentation, and basic questions. The right onboarding process means you can hit the ground running. 

Here’s what you should always ask for before work starts: 

  • Tool access: Google Search Console, Google Analytics 4, crawl tool permissions, CMS login.
  • Stakeholder contacts: Who approves deliverables, who answers technical questions, who handles billing.
  • Project context: Known issues, previous SEO work, business priorities, previous project timelines (migration, updates, product launches). 

You can get this without seven days of email tennis. Just send over an immediate request for this information, and don’t schedule any next steps until you have what you need. 

Template everything here. Each client gets the same questionnaire and contract structure. 

Contracts

You know what every freelancer loves? Getting paid. You know what you need to get paid? Getting it in writing.

Set your contract terms ahead of time so you don’t just hit a prospect with “uh” when they ask you how much and when. Here’s what you should have prepared:

  • Payment terms: Common options include 50% upfront and 50% on delivery for project work, or monthly invoicing for retainers and recurring work. Net-30 or Net-14 are standard invoicing terms — just fancy ways of saying you get paid thirty days or two weeks after you bill. Choose a structure that protects your cash flow while remaining reasonable for your clients.
  • Deliverable format and timelines: Spell out exactly what the client receives and when, so there’s no ambiguity at handoff.
  • Communication expectations: Explain the meeting cadence, preferred channels, and response times to avoid surprises.
  • What’s not included in your scope: Just so everyone is completely clear on what work is being done and what isn’t.

And don’t feel married to the first contract term you define. Be flexible. That’s the joy of being a freelancer — you can always change things up when you need to. 

You can either Google Docs your way to success here, or you can look into investing in tools: 

  • Contract signature: PandaDoc or DocuSign.
  • Invoicing and payment tracking: Wave, FreshBooks, or Bonsai.

Note: Pick one of each and use it for every client. Don’t switch unless you have a reason. 

Deliverable templates

Deliverable templates save hours of formatting. They mean you don’t need to mentally run through your checklist of everything you need to review. You can just open a blank template of what you’ve done in the past and move forward.

Here are some good examples of templates to have on hand:

  • Audit spreadsheet with consistent columns: Include the issue, location, impact (high, medium, low), effort to fix (usually in hours), priority, and any additional notes.
  • Executive summary templates: This should just be how you break things down for the client in layman’s terms.
  • Delivery email template: This offers next steps and support window details.

The goal here is to keep things consistent across clients. You’re providing the same quality work every time, no matter how busy you are.

Communication

Clients don’t need daily check-ins. They need to know the project is moving forward and nothing important is blocked.

What that looks like depends on the client’s needs. It could be: 

  • Weekly async updates via email: Explain what was completed this week, what’s coming up next, and what’s blocked.
  • Biweekly or monthly calls: Explain the same things, but this time over the phone. You should also schedule a call if you’re doing a kickoff or delivering a project.
  • Monthly emails: This is better for hands-off clients that you trust (and trust you) to get things done.

Note: If a client is pushing for daily Slack access or unscheduled calls, review your scope and pricing. You can always update your scope of work if new needs arise. 

Offboarding

No one likes to see a client go, but how you handle parting is key to making a positive, lasting impression. Make sure to include: 

  • Final deliverable handoff: This should include the rest of your work and a video walkthrough if you didn’t have a chance for a call. 
  • Transition documentation: If you were working with another team to implement your recommendations, provide guidance on how to implement changes and include any technical context they’ll need to know. 
  • Post-project support window: Define a clear support period (e.g., “two weeks of email support for clarification questions about the deliverable”). After the window, additional support is a new engagement. 
  • Request feedback: Ask for a testimonial or LinkedIn recommendation while the work is fresh. Most freelancers wait too long. 

Make sure to document what you’ve learned about yourself, the client, and your process once things are done. Think about what went well, what went poorly, and what to charge your next client for similar services. 

Dig deeper: 12 tips for better SEO client meetings

Avoid these pitfalls

Most freelancers go back to full-time employment because they feel burnt out, underpaid, and overworked. 

Those who build a sustainable career treat freelancing like a business, not just a flexible job. Yes, drinking your mojito in Bali is fun — but you still need to answer client emails within 24 hours, even when you’re off the clock. 

The biggest pitfalls that almost all beginner SEO freelancers fall into are: 

  • Saying yes to misaligned projects: Beginner freelancers are usually worried about cash flow, but saying yes to a project that doesn’t fit is what gets you stuck in a feast-famine cycle where short-term cash flow decisions prevent you from building stable, repeatable work. 
  • Delivering different things for each project: You can’t optimize what you don’t understand. Keep your offering consistent so you know what works, what doesn’t, and what’s just a client quirk. 
  • Starting from scratch with each client: Every new client should feel easier. If onboarding Client No. 5 feels as chaotic as Client No. 1, you need a better system (or just any system). 
  • Pricing for payment and forgetting sustainability: Pricing too low to “get your first client” can get your legs under you, but it’s not how you stay in freelancing. It’s better to work on two well-priced projects than five underpriced ones. Carefully judge your workload — and savings — so you can hunt for the right client. 

What you’re actually building as a successful SEO freelancer

Freelancing isn’t just “SEO with flexible hours.” It’s a service business where you define the offering, set the terms, and manage the business. 

If that sounds like more work than having a boss, you’re right. Freelancing means trading predictable employment for control over everything: scope, pricing, schedule. Some people thrive on that trade because they get to be their own ultimate manager. Others realize they’d rather someone else handle that for them. Both are valid choices. 

The key here is if you’re going freelance, treat it like the business it is:

  • Pick a specialization. 
  • Turn it into a repeat project.
  • Price it properly.
  • Build systems that scale.
  • Say no to everything that doesn’t fit.

That’s the framework. The rest is execution, iteration, and always improving the parts of the business that speak to you — be that SEO audits, content strategy, link building, or even client management — to build something sustainable. 

The Data Doppelgänger problem by AtData

27 February 2026 at 16:00

Somewhere inside your CRM is a customer who does not exist.

They open emails at impossible hours. They redeem promotions with machine-like precision. They browse product pages across three devices in under five minutes. They convert, unsubscribe, re-engage and transact again. On paper, they look highly active. In reality, they may be a composite of behaviors stitched together from AI assistants, shared accounts, recycled addresses, autofill tools and automated workflows.

This is the Data Doppelgänger Problem. And it is about to become one of the most expensive blind spots in modern marketing.

For years, identity resolution was framed as a hygiene issue. Clean the data. Remove duplicates. Suppress invalid records. That work still matters. But the ground has shifted. Today, the bigger risk is not dirty data. It is convincing data that is wrong.

AI agents are no longer theoretical. Consumers are using them to summarize emails, compare products, track prices, fill forms and in some cases complete purchases. Shared credentials remain common across households and small businesses. Browser privacy changes have pushed attribution models into probabilistic territory. Add subscription commerce, loyalty programs and cross-device behavior, and you begin to see the pattern.

One person can generate multiple digital identities. Multiple actors can generate activity that appears to belong to one person. What you see in your dashboards may not reflect a human with consistent intent, but a digital echo assembled from overlapping signals.

The result is not just noise. It’s distortion.

When high engagement lies

Most marketing systems reward engagement. Opens, clicks, transactions and recency are treated as proxies for value. But what if the engagement is partially automated?

Email clients increasingly prefetch content. AI tools summarize messages without requiring a human to scroll. Assistive shopping agents monitor price drops and trigger interactions on behalf of users. To your analytics layer, these actions can look identical to high-intent behavior.

Now layer in recycled or repurposed email addresses. A dormant account gets reassigned by a provider. A corporate alias forwards to multiple employees. A consumer rotates through alternate emails to capture new user discounts. On the surface, these look like legitimate records. Underneath, the identity is unstable.

You may be optimizing campaigns around engagement that doesn’t reflect loyalty. You may be suppressing records that are valuable but appear inactive because their activity is fragmented across identities. You may be feeding machine learning models with signals that only compound the errors.

This is where seasoned professionals feel the frustration. The dashboards are clean, segments are defined and the attribution model runs on schedule. Yet outcomes drift, conversion rates plateau and fraud creeps in through legitimate-looking channels. Acquisition costs rise without a clear explanation.

The problem is not effort. It is identity confidence.

Doppelgängers create operational risk

The Data Doppelgänger Problem is not limited to marketing efficiency. It crosses into risk, compliance and revenue protection.

Promotional abuse is often framed as external fraud. In reality, much of it exploits weak identity resolution. A single individual can appear as multiple new customers. Conversely, multiple individuals can appear as one trusted account. Loyalty points are pooled, discounts are stacked, and survey data becomes unreliable.

As AI agents become more capable, this risk becomes harder to detect. An automated assistant acting on behalf of a legitimate customer is not inherently fraudulent. But it can blur behavioral signals that historically differentiated genuine intent from scripted abuse.

Traditional rules-based systems look for anomalies. The next wave of risk will look normal.

If you cannot distinguish between a stable, persistent identity and a composite one, you cannot confidently calibrate friction. Add too much friction and you punish real customers. Add too little and you subsidize exploitation.

The only sustainable path is to move beyond static identifiers and into continuous identity validation. Not just confirming that an email address is deliverable, but understanding how it behaves over time, how it connects to other digital attributes, and how it fits within a broader activity network.

The collapse of the Golden Record

Many organizations still pursue a single source of truth. A golden record that reconciles identifiers into one master profile. The aspiration is understandable. But in a world of AI mediation and shared signals, the notion of a fixed record is increasingly unrealistic.

Identity is not a snapshot. It is a moving target.

The more relevant question is not whether you can unify data into one profile. It is whether you can quantify how confident you are that the activity associated with that profile represents a coherent individual.

That shift sounds subtle. It is not.

When identity is treated as binary, either matched or unmatched, you miss nuance. When identity is treated as a spectrum of confidence, you gain leverage. You can weight signals differently. You can suppress low-confidence interactions from modeling. You can prioritize outreach to high-confidence segments. You can apply graduated friction to transactions that sit in ambiguous territory.
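As a toy illustration of that spectrum-of-confidence idea, a graduated-friction rule might look like the sketch below. The confidence score, tiers, and thresholds are hypothetical, not anything AtData prescribes:

```python
# Illustrative graduated friction keyed to an identity-confidence score
# in [0, 1]. Thresholds are hypothetical and would be tuned per business.

def friction_for(confidence: float) -> str:
    if confidence >= 0.8:
        return "none"          # high-confidence identity: frictionless path
    if confidence >= 0.5:
        return "step-up"       # ambiguous: e.g., email or SMS confirmation
    return "manual-review"     # low confidence: hold the transaction

print(friction_for(0.9), friction_for(0.6), friction_for(0.2))
```

The same score could gate other workflows the article mentions, such as suppressing low-confidence interactions from model training or deprioritizing them in outreach.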

This is where data becomes a strategic asset rather than a reporting function.

From volume to validity

Marketing technology has long rewarded scale. Bigger lists, broader reach and more signals. But scale without validation creates false precision.

The Data Doppelgänger Problem forces a harder question. Would you rather have ten million records with unknown stability, or eight million records you understand deeply?

The brands that win over the next few years will not be those with the most data. They will be those with the most defensible data.

Defensible means continuously validated. Network-informed. Contextualized against real patterns of activity. Integrated across marketing, analytics, and risk workflows so that improvements in one area compound across the organization.

When identity confidence increases, targeting improves. When targeting improves, engagement quality strengthens. When engagement quality strengthens, attribution stabilizes. When attribution stabilizes, forecasting becomes more reliable. And when forecasting improves, budget allocation becomes less political and more performance-driven.

This compounding effect is measurable. It is also fragile. Feed unstable identities into the loop and the entire system drifts.

What seasoned professionals should be asking

If you are leading marketing, analytics or risk, the uncomfortable questions are no longer about data access. They are about data integrity at scale.

  • How many of your active profiles represent coherent individuals?
  • How often are identities revalidated against fresh activity?
  • Can you detect when one identity splits into several, or when several collapse into one?
  • Are your fraud controls calibrated to behavior, or to assumptions about behavior that may no longer hold?

These questions do not require panic. They require evolution.

This is not a crisis. It is a signal that the digital ecosystem has matured. Consumers are delegating more tasks to software. Devices are proliferating. Privacy changes are fragmenting identifiers. This is the environment we operate in.

The brands that adapt will treat identity not as a static field in a database, but as a living construct that must be observed and refined continuously, using advanced activity networks to anchor identity in its current reality.

Those that do will spend less on wasted acquisition. They will protect margins without alienating customers. They will trust their analytics because they understand the confidence behind the numbers.

And perhaps most importantly, they will know who they are actually engaging. Because somewhere in your CRM, there is a customer who does not exist.

The question is whether you can find them before they find your budget.

The latest jobs in search marketing

27 February 2026 at 23:48
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description As the Web Writer/Content Specialist at Interactive Strategies, you’ll help shape digital experiences for organizations doing meaningful, high-impact work. We partner with leading nonprofits, associations, and B2B organizations to solve complex problems through smart strategy and clear storytelling. We’re a people-first, award-winning digital agency where good ideas matter, great execution is expected, and […]
  • position summary Merjent, Inc. seeks a creative and experienced Marketing and Digital Communications Specialist to join our Growth team. The successful candidate will support the Marketing Manager with company-wide marketing initiatives. This position will focus on enhancing Merjent’s brand presence through digital channels, including the website, social media, and online campaigns, and through the design […]
  • Job Description Cortica is looking for an innovative, results-driven Performance Marketing Manager to join our growing team! This role is responsible for leading the strategy, execution, and optimization of all paid and performance-driven digital marketing channels to achieve customer acquisition, engagement, and revenue growth goals. This role combines analytical rigor, strategic thinking, and cross-functional leadership […]
  • *This role is remote and open to Latin America, Canada and Europe, working in EST* Description Hi! We’re LinkGraph, an SEO software company (and full-service digital agency) focused on engineering products and services that help websites improve their performance on Google. We are a rapidly growing organization with clients ranging from Fortune 500 companies to […]
  • Job Description Digital Marketing Manager Location: St. Louis Park, MN Position Summary: We are seeking a Digital Marketing Manager (DMM) that will lead the development and execution of digital marketing strategies to drive lead generation, brand awareness, and engagement. This role requires a data-driven marketer with strong expertise in digital channels, analytics, and content strategy. […]
  • Discover Your Next Adventure with Upfront Plumbing Drains Heating and Air! THE OPPORTUNITY: Position: Digital Marketing Director Location: Salt Lake City, UT Pay: Competitive salary of $40,000 – $60,000/year (based on experience and qualifications) Benefits: Enjoy a host of perks, including paid time off (PTO), paid major holidays, professional development opportunities, recognition, feedback, mentorship, and […]
  • About MedEquip Shop MedEquip Shop is a growing medical equipment provider with a strong presence in retail and rentals, offering a wide range of products for seniors and caregivers. We’re seeking a talented Digital Marketing Manager to help us scale our online and in-store sales and establish a larger footprint in the Houston area. We […]
  • Upgrow is seeking an organized, motivated, and creative SEO Director to lead our growing digital marketing agency in San Francisco, CA. You will oversee and manage SEO projects involving research, planning, project management, analytics, optimization, linkbuilding, and writing. This role works directly with clients and includes account management, as well as managing 2 direct reports. […]
  • About Yami: Founded in 2013, Yami’s mission is to bring the world closer for everyone to experience and enjoy. We make it easy to discover exciting flavors and trending products from Asia. Named Inc. Magazine’s fastest growing start-up on the “Inc. 500 List,” we’re committed to connecting people with authentic food, beauty, home, and wellness […]
  • (un)Common Logic This is a hands-on, client-facing multi-channel performance role with primary emphasis on PPC and strategic involvement in SEO initiatives. (un)Common Logic is a digital marketing agency based in Austin, Texas, founded in 2008 originally as 360Partners. Our talented team of experts relentlessly strives for excellence in marketing performance and exceptional customer service. We tackle […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • NoGood is an award-winning, tech-enabled growth consultancy that has fueled the success of some of the most iconic brands. We are a team of growth leads, creatives, engineers and data scientists who help unlock rapid measurable growth for some of the world’s category-defining brands. We bring together the art and science of strategy, […]
  • This is a remote position. Are you a versatile marketer with a problem-solving mindset and a passion for driving results? If you’re analytical, adaptable, and thrive in a dynamic remote environment, we’d love to hear from you! At Scopic, we do marketing not only for our own global software brand but also for clients through […]
  • TMI is a global-facing, independent digital agency built for performance, agility and partnership. We create award-winning, data-led campaigns across the globe for clients in e-commerce, finance, iGaming, beauty, skincare and grocery. TMI is guided by the core values of unified collaboration, data-centric strategy, proactive engagement, inclusivity, authenticity and integrity. We challenge convention and deliver exceptional […]
  • Job Description Are you the kind of person who watches an ad on YouTube and immediately starts analyzing the hook, the targeting, and the call to action? Do you love the balance of creative storytelling and performance optimization? Do you get excited about turning video content into measurable business results? If so, this role is […]
  • Searchbloom is an established and growing company offering the opportunity to design and execute innovative PPC strategies, develop new products and services, and grow as a thought leader in the industry. Searchbloom is seeking an experienced Enterprise PPC Specialist . Join our team of digital marketers who collaborate seamlessly to drive exceptional results for our […]

Other roles you may be interested in

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Search Engine Optimization Manager, Method Recruiting, a 3x Inc. 5000 company (Remote)

  • Salary: $95,000 – $105,000
  • Lead planning and execution of SEO and AEO initiatives across assigned digital properties
  • Conduct content audits to identify optimization, refresh, pruning, and gap opportunities

Senior Manager, SEO, Kennison & Associates (Hybrid, Boston MA)

  • Salary: $150,000 – $180,000
  • You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
  • Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Note: We update this post weekly, so bookmark this page and check back.
