Starting July 1st, Meta will add “location fees” to ad buys targeting users in six countries — effectively offloading the cost of European digital services taxes onto the advertisers themselves.
The numbers. Fees will match each country’s digital services tax rate:
- France, Italy, Spain: 3%
- Austria, Turkey: 5%
- UK: 2%
How it works in practice. Per Meta’s email to advertisers — “$100 in ads delivered to Italy will cost $103, plus any applicable VAT on top of that.”
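The arithmetic is easy to sketch. Here's a quick illustration in Python using the announced rates; the VAT rate in the example is a hypothetical stand-in, not a figure from Meta:

```python
# Illustrative sketch of the location-fee arithmetic. Rates come from the
# announcement; the VAT rate below is a hypothetical example, not Meta's figure.
LOCATION_FEES = {
    "FR": 0.03, "IT": 0.03, "ES": 0.03,  # France, Italy, Spain
    "AT": 0.05, "TR": 0.05,              # Austria, Turkey
    "GB": 0.02,                          # UK
}

def cost_with_location_fee(ad_spend, country, vat_rate=0.0):
    """Ad spend plus the location fee, with any applicable VAT applied on top of both."""
    fee = ad_spend * LOCATION_FEES.get(country, 0.0)
    return round((ad_spend + fee) * (1 + vat_rate), 2)

print(cost_with_location_fee(100, "IT"))                 # 103.0 (the email's example, pre-VAT)
print(cost_with_location_fee(100, "IT", vat_rate=0.22))  # 125.66 (with a 22% VAT applied on top)
```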
The fine print. The fees apply to where the ad is delivered, not where the advertiser is based — meaning a US brand running campaigns targeting French users will pay the French rate regardless.
Why we care. This is a direct, unavoidable cost increase hitting European campaigns on July 1 — with no opt-out. If you’re running ads targeting users in France, Italy, Spain, Austria, Turkey, or the UK, your effective CPM and CPA benchmarks are about to get more expensive, which means existing budgets will stretch less far and current ROAS targets may no longer be achievable without adjustment.
And since the fee is based on where the ad is delivered rather than where you’re based, even non-European brands aren’t off the hook.
The big picture for advertisers. This isn’t unique to Meta — Google and Amazon already charge similar pass-through fees. But it’s a meaningful shift in how European ad budgets need to be calculated, and campaign managers should revisit their cost models before July 1 to account for the added overhead across affected markets.
The backdrop. Digital services taxes have been a flashpoint between Europe and Washington. The Trump administration has threatened retaliation against European firms over the levies — adding geopolitical uncertainty to what is already a complex compliance landscape for global advertisers.
Positive coverage creates exposure, authority, trust, and often valuable backlinks.
But for many people, the path to getting it is a mystery. Others believe myths about how it works.
Some believe you have to be at the very top of your industry before the media will care about your story.
That’s simply false.
Others believe you can simply buy your way into media coverage.
There’s a small degree of truth to that.
You can find contributors willing to feature you (or your client) for a fee, but this blatantly violates every outlet’s contributor guidelines. You may land the feature, but editors will eventually find out.
What happens then?
First, the article gets deleted or any mention of you and your links gets removed. Then, the contributor gets removed from the platform and blacklisted in the media industry. Finally, you get blacklisted too.
Good luck getting featured again. It won’t happen.
The reality is that you can get featured in the media.
You just need to understand the process and execute it consistently.
Develop your story
You probably have a great story — you just may not realize it yet.
The media has to produce a constant stream of content. If you have a strong story, you’re already one-third of the way to getting featured.
Let’s start with what doesn’t make a great story.
- You’re the first.
- You think you’re the best (everyone thinks that, and no one cares except your mother).
- You’re the biggest.
- You want to change the world.
So what does make a great story?
Like the answer to most SEO questions: it depends.
A great story starts with an actual story.
You have to explain, in an engaging way, why anyone should care about what you have to say.
For example, I often tell the story of how I used PR to rebuild my success after being on my deathbed.
I explain that my agency’s specific PR approach comes from the exact process I used to rebuild my own business — and that I want to give others the same advantage.
And my story is easily verifiable.
But you don’t need a life-or-death struggle to have a compelling story.
You just need a story that shows a deeper purpose. A mission. Something people can get excited about and care about.
Craft your pitch
Even with the best story in the world, you still need an effective pitch.
Your pitch has to cut through the noise and grab attention. Journalists, producers, and others in the media are inundated with pitches — many receive hundreds every day. Your pitch has to tell your story clearly and quickly, and motivate them to respond.
Easier said than done.
Most pitches are sent by email, so most people start with the subject line. That’s the exact opposite of what you should do.
Start with the body of the email. There’s a reason for this, which we’ll get to shortly.
Find a way to connect your story to current events. If a topic is already popular in the media, other outlets are more likely to cover it.
But remember: while the story involves you, it isn’t about you.
You have to pitch from the perspective of what the audience wants. The journalist’s, editor’s, or producer’s needs come second, and yours come in a distant last place.
Sorry, that’s just the way it is.
You need to distill your story and why the audience should care into a few sentences. You can add a little more detail after that, but keep it short. If they see a wall of text, they’ll likely delete your email.
Once your pitch is solid, write your subject line. It should be short, punchy, and aligned with your pitch.
Short and punchy matters because the subject line determines whether they open your email.
If the pitch doesn’t align with the subject line, they’ll likely delete the email without reading it. Getting attention means nothing if they don’t read the message.
I once saw a publicist use a subject line that certainly grabbed attention, but it had zero positive impact and damaged his reputation.
What was it?
“Fuck You!”
Bottom line: your pitch must quickly and clearly show the value the audience will get, and your subject line must grab attention in a positive way while aligning with the pitch.
Build your media list
PR isn’t a numbers game.
Yet people treat it like one. They buy or compile lists of media contacts and blast their pitch to anyone they can find.
That’s no different from spam emails selling generic Viagra.
Success comes from sending the right pitch to the right people at the right time.
Finding the right people means identifying journalists, producers, and other media contacts who cover the types of stories you’re telling.
Several expensive tools can help you find these contacts and their information. But you can often find the same information with a search engine and social media. In fact, that’s how I built most of my media relationships.
As for the right time, that’s largely a matter of chance.
Send your pitch
There’s no magic formula.
The time of day you send your pitch doesn’t matter much unless it’s extremely time-sensitive, which most business topics aren’t. Producers often check email at certain times, but they won’t touch it while preparing for or running their show.
Now here’s something you need to avoid:
Don’t bombard them with follow-up emails!
For truly time-sensitive stories, it may be acceptable to follow up within the same week. In most cases, though, wait about a week. Frequent follow-ups will annoy journalists, producers, and other media contacts.
Stop after two or three follow-ups. If you haven’t received a response by then, they likely aren’t interested in the story.
Try not to take it personally. They probably won’t tell you it’s not a fit. Given the sheer volume of pitches they receive, responding to every one would be a full-time job.
Nurture your relationships
Most of your pitches won’t result in media coverage.
The problem is that most people stop after a rejection or no response.
That’s crazy to me.
I can’t tell you how many times I’ve heard “no” or received no reply before finally landing a feature.
It happened because I didn’t pitch once and move on. These contacts all started as strangers, but I invested time and energy in building real relationships.
As a result, when I reach out, they open and read my emails because I’m not a stranger. Those relationships make it far easier to turn a pitch into media coverage.
Most initial outreach won’t lead to coverage. But if you nurture the right relationships, you’ll eventually build a network of responsive press contacts.
Perplexity AI must stop using its Comet browser agent to make purchases on Amazon. A federal judge sided with Amazon in an early ruling over AI shopping bots.
Why we care. The case targets a core promise of AI agents: completing tasks like shopping on a user’s behalf. If courts restrict how agents access sites, AI agents could face strict limits when interacting with logged-in accounts on major websites.
What happened. U.S. District Judge Maxine Chesney granted Amazon a preliminary injunction Monday in San Francisco federal court.
The order blocks Perplexity from using its Comet browser agent to access password-protected parts of Amazon, including Prime subscriber accounts.
Chesney wrote that Amazon presented “strong evidence” that Comet accessed accounts “with the Amazon user’s permission but without authorization by Amazon.”
The ruling also requires Perplexity to destroy any Amazon data it previously collected.
Catch-up quick. Amazon sued Perplexity in November, accusing the startup of computer fraud and unauthorized access. The company said Comet made purchases from Amazon on behalf of users without properly identifying itself as a bot.
What’s next. The order is paused for one week to allow Perplexity to appeal.
What they’re saying. Amazon spokesperson Lara Hendrickson told Bloomberg (subscription required) the injunction “will prevent Perplexity’s unauthorized access to the Amazon store and is an important step in maintaining a trusted shopping experience for Amazon customers.”
Google Ads is rolling out auto end screens — a new feature that appends an interactive, auto-generated card to the end of eligible video ads to nudge viewers toward a conversion.
How it works. An interactive screen appears for a few seconds immediately after the video finishes playing.
- Content is auto-populated from campaign data — app name, icon, price, and a direct install link for app campaigns.
- End screens appear by default on eligible ads, requiring no setup from advertisers.
Why we care. Advertisers no longer need to manually build post-roll calls-to-action. This feature is on by default and changes the end of your video ads — and if you’ve already built custom YouTube end screens, they’ll be overridden without any warning. With end screens being the last thing a viewer sees before deciding to act, losing control of that moment matters.
And with broader expansion planned, now is the time to understand how it works before it reaches more of your campaigns.
The catch. Enabling auto end screens in Google Ads overrides any manually added YouTube end screens — meaning advertisers who’ve already customized their YouTube end cards will lose them.
Current limitations. The feature is only available for in-stream ads running in mobile app install campaigns, with broader expansion planned but not yet dated.
What stays the same. Auto end screens don’t affect billing or view counts — they’re purely an added engagement layer tacked on after a full video view.
Next steps. Advertisers running mobile app install campaigns should audit their video ads now — check whether auto end screens are serving as expected and verify that any manually added YouTube end screens aren’t being silently overridden. As Google expands the feature beyond app installs, it’s worth establishing a review process early so campaigns are ready when eligibility broadens.
The DSCRI-ARGDW pipeline maps 10 gates between your content and an AI recommendation across two phases: infrastructure and competitive. Because confidence multiplies across the pipeline, the weakest gate is always your biggest opportunity. Here, we focus on the first five gates.
The infrastructure phase (discovery through indexing) is a sequence of absolute tests: the system either has your content, or it doesn’t. Then, as you pass through the gates, there’s degradation.
For example, a page that can’t be rendered doesn’t get “partially indexed,” but it may get indexed with degraded information, and every competitive gate downstream operates on whatever survived the infrastructure phase.
If the raw material is degraded, the competition in the ARGDW phase starts with a handicap that no amount of content quality can overcome.
The industry compressed these five distinct DSCRI gates into two words: “crawl and index.” That compression hides five separate failure modes behind a single checkbox. This piece breaks the simplistic “crawl and index” down into its five constituent gates so you can optimize far more effectively for the bots.
If you’re a technical SEO, you might feel you can skip this. Don’t.
You’re probably doing 80% of what follows and missing the other 20%. The gates below provide measurable proof that your content reached the index with maximum confidence, giving it the best possible chance in the competitive ARGDW phase that follows.
Sequential dependency: Fix the earliest failure first
The infrastructure gates are sequential dependencies: each gate’s output is the next gate’s input, and failure at any gate blocks everything downstream.
If your content isn’t being discovered, fixing your rendering is wasted effort, and if your content is crawled but renders poorly, every annotation downstream inherits that degradation. Better to be a straight C student than three As and an F, because the F is the gate that kills your pipeline.
The audit starts with discovery and moves forward. The temptation to jump to the gate you understand best (and for many technical SEOs, that’s crawling) is the temptation that wastes the most money.
Discovery, selection, crawling: The three gates the industry already knows
Discovery and crawling are well-understood, while selection is often overlooked.
Discovery is an active signal. Three mechanisms feed it:
- XML sitemaps (the census).
- IndexNow (the telegraph).
- Internal linking (the road network).
The entity home website is the primary discovery anchor for pull discovery, and confidence is key. The system asks not just “does this URL exist?” but “does this URL belong to an entity I already trust?” Content without entity association arrives as an orphan, and orphans wait at the back of the queue.
The push layer (IndexNow, MCP, structured feeds) changes the economics of this gate entirely, and I’ll explain what changes when you stop waiting to be found and start pushing.
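To make the push layer concrete, here's a sketch of an IndexNow bulk submission body, per the public IndexNow protocol. The host, key, and URLs are placeholders; the payload is built but not sent:

```python
import json

def build_indexnow_payload(host, key, urls):
    """Build the JSON body for an IndexNow bulk submission.

    Per the IndexNow protocol, this body is POSTed to an endpoint such as
    https://api.indexnow.org/indexnow with Content-Type: application/json.
    The key must also be verifiable at https://{host}/{key}.txt.
    """
    return json.dumps({
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    })

payload = build_indexnow_payload(
    "example.com",
    "0123456789abcdef",  # hypothetical key
    ["https://example.com/new-article"],
)
print(payload)
```

Instead of waiting for the pull model to find the URL, you tell the system the moment it exists.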
Selection is the system’s opinion of you, expressed as crawl budget. As Microsoft Bing’s Fabrice Canel says, “Less is more for SEO. Never forget that. Less URLs to crawl, better for SEO.”
The industry spent two decades believing more pages equals more traffic. In the pipeline model, the opposite is true: fewer, higher-confidence pages get crawled faster, rendered more reliably, and indexed more completely. Every low-value URL you ask the system to crawl is a vote of no confidence in your own content, and the system notices.
Not every page that’s discovered in the pull model is selected. Canel states that the bot assesses the expected value of the destination page and will not crawl the URL if the value falls below a threshold.
Crawling is the most mature gate and the least differentiating. Server response time, robots.txt, redirect chains: solved problems with excellent tooling, and not where the wins are because you and most of your competition have been doing this for years.
What most practitioners miss, and what’s worth thinking about: Canel confirmed that context from the referring page carries forward during crawling.
Your internal linking architecture isn’t just a crawl pathway (getting the bot to the page) but a context pipeline (telling the bot what to expect when it arrives), and that context influences selection and then interpretation at rendering before the rendering engine even starts.
Rendering fidelity: The gate that determines what the bot sees
Rendering fidelity is where the infrastructure story diverges from what the industry has been measuring.
After crawling, the bot attempts to build the full page. It sometimes executes JavaScript (don’t count on this because the bot doesn’t always invest the resources to do so), constructs the document object model (DOM), and produces the rendered DOM.
I coined the term rendering fidelity to name this variable: how much of your published content the bot actually sees after building the page. Content behind client-side rendering that the bot never executes isn’t degraded, it’s gone, and information the bot never sees can’t be recovered at any downstream gate.
Every annotation, every grounding decision, every display outcome depends on what survived rendering. If rendering is your weakest gate, it’s your F on the report card, and remember: everything downstream inherits that grade.
The friction hierarchy: Why the bot renders some sites more carefully than others
The bot’s willingness to invest in rendering your page isn’t uniform. Canel confirmed that the more common a pattern is, the less friction the bot encounters.
I’ve reconstructed the following hierarchy from his observations. The ranking is my model. The underlying principle (pattern familiarity reduces selection, crawl, rendering, and indexing friction and processing cost) is confirmed:
| Approach | Friction level | Why |
| --- | --- | --- |
| WordPress + Gutenberg + clean theme | Lowest | 30%+ of the web. Most common pattern. Bot has highest confidence in its own parsing. |
| Established platforms (Wix, Duda, Squarespace) | Low | Known patterns, predictable structure. Bot has learned these templates. |
| WordPress + page builders (Elementor, Divi) | Medium | Adds markup noise. Downstream processing has to work harder to find core content. |
| Bespoke code, perfect HTML5 | Medium-high | Bot does not know your code is perfect. It has to infer structure without a pattern library to validate against. |
| Bespoke code, imperfect HTML5 | High | Guessing with degraded signals. |
The critical implication, also from Canel, is that if the site isn’t important enough (low publisher entity authority), the bot may never reach rendering because the cost of parsing unfamiliar code exceeds the estimated benefit of obtaining the content. Publisher entity confidence has a huge influence on whether you get crawled and also how carefully you get rendered (and everything else downstream).
JavaScript is the most common rendering obstacle, but it isn’t the only one: missing CSS, proprietary elements, and complex third-party dependencies can all produce the same result — a bot that sees a degraded version of what a human sees, or can’t render the page at all.
JavaScript was a favor, not a standard
Google and Bing render JavaScript. Most AI agent bots don’t. They fetch the initial HTML and work with that. The industry built on Google and Bing’s favor and assumed it was a standard.
Perplexity’s grounding fetches work primarily with server-rendered content. Smaller AI agent bots have no rendering infrastructure.
The practical consequence: a page that loads a product comparison table via JavaScript displays perfectly in a browser but renders as an empty container for a bot that doesn’t execute JS. The human sees a detailed comparison. The bot sees a div with a loading spinner.
The annotation system classifies the page based on an empty space where the content should be. I’ve seen this pattern repeatedly in our database: different systems see different versions of the same page because rendering fidelity varies by bot.
Three rendering pathways that bypass the JavaScript problem
The traditional rendering model assumes one pathway: HTML to DOM construction. You now have two alternatives.
WebMCP, built by Google and Microsoft, gives agents direct DOM access, bypassing the traditional rendering pipeline entirely. Instead of fetching your HTML and building the page, the agent accesses a structured representation of your DOM through a protocol connection.
With WebMCP, you give yourself a huge advantage: the bot doesn’t need to execute JavaScript or guess at your layout, because the structured DOM is served directly.
Markdown for Agents uses HTTP content negotiation to serve pre-simplified content. When the bot identifies itself, the server delivers a clean markdown version instead of the full HTML page.
The semantic content arrives pre-stripped of everything the bot would have to remove anyway (navigation, sidebars, JavaScript widgets), which means the rendering gate is effectively skipped with zero information loss. If you’re using Cloudflare, you have an easy implementation that they launched in early 2026.
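Content negotiation of this kind is simple to sketch. The handler below is a hypothetical illustration that assumes the agent advertises text/markdown in its Accept header; real implementations may key on the User-Agent instead:

```python
def choose_representation(accept_header):
    """Pick which representation to serve: a minimal content-negotiation sketch.

    If the client (e.g. an AI agent bot) advertises text/markdown in its
    Accept header, serve the pre-simplified markdown; otherwise serve HTML.
    This is an assumption for illustration, not a description of any
    specific vendor's implementation.
    """
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    return "text/markdown" if "text/markdown" in accepted else "text/html"

print(choose_representation("text/markdown, text/html;q=0.8"))   # text/markdown
print(choose_representation("text/html,application/xhtml+xml"))  # text/html
```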
Both alternatives change the economics of rendering fidelity in the same way that structured feeds change discovery: they replace a lossy process with a clean one.
For non-Google bots, try this: disable JavaScript in your browser and look at your page, because what you see is what most AI agent bots see. You can fix the JavaScript issue with server-side rendering (SSR) or static site generation (SSG), so the initial HTML contains the complete semantic content regardless of whether the bot executes JavaScript.
But the real opportunity lies in new pathways: one architectural investment in WebMCP or Markdown for Agents, and every bot benefits regardless of its rendering capabilities.
Rendering produces a DOM. Indexing transforms that DOM into the system’s proprietary internal format and stores it. Two things happen here that the industry has collapsed into one word.
Rendering fidelity (Gate 3) measures whether the bot saw your content. Conversion fidelity (Gate 4) measures whether the system preserved it accurately when filing it away. Both losses are irreversible, but they fail differently and require different fixes.
The strip, chunk, convert, and store sequence
What follows is a mechanical model I’ve reconstructed from confirmed statements by Canel and Gary Illyes.
Strip: The system removes repeating elements: navigation, header, footer, and sidebar. Canel confirmed directly that these aren’t stored per page.
The system’s primary goal is to find the core content. This is why semantic HTML5 matters at a mechanical level. <nav>, <header>, <footer>, <aside>, <main>, and <article> tags tell the system where to cut. Without semantic markup, it has to guess.
Illyes confirmed at BrightonSEO in 2017 that finding core content at scale was one of the hardest problems they faced.
Chunk: The core content is broken into segments: text blocks, images with associated text, video, and audio. Illyes described the result as something like a folder with subfolders, each containing a typed chunk (he probably used the term “passage” — potato, potarto, tomato, tomarto). The page becomes a hierarchical structure of typed content blocks.
Convert: Each chunk is transformed into the system’s proprietary internal format. This is where semantic relationships between elements are most vulnerable to loss.
The internal format preserves what the conversion process recognizes, and everything else is discarded.
Store: The converted chunks are stored in a hierarchical structure.
The individual steps are confirmed. The specific sequence and the wrapper hierarchy model are my reconstruction of how those confirmed pieces fit together.
In this model, the repeating elements stripped in the first step are not discarded but stored at the appropriate wrapper level: navigation at site level, category elements at category level. The system avoids redundancy by storing shared elements once at the highest applicable level.
Like my “Darwinism in search” piece from 2019, this is a well-informed, educated guess. And I’m confident it will prove to be substantively correct.
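To make the strip step concrete, here's a toy Python model of how semantic tags tell a parser where to cut. It illustrates the reconstruction above, not any engine's actual code:

```python
from html.parser import HTMLParser

# Repeating chrome the strip step removes; semantic tags mark where to cut.
STRIP_TAGS = {"nav", "header", "footer", "aside", "script", "style"}

class CoreContentExtractor(HTMLParser):
    """Toy model of the strip step: drop chrome, keep core text chunks."""

    def __init__(self):
        super().__init__()
        self.depth_in_stripped = 0  # how deep we are inside stripped elements
        self.chunks = []            # surviving core-content text blocks

    def handle_starttag(self, tag, attrs):
        if tag in STRIP_TAGS:
            self.depth_in_stripped += 1

    def handle_endtag(self, tag):
        if tag in STRIP_TAGS and self.depth_in_stripped:
            self.depth_in_stripped -= 1

    def handle_data(self, data):
        if self.depth_in_stripped == 0 and data.strip():
            self.chunks.append(data.strip())

page = """
<header>Site name</header>
<nav>Home | About</nav>
<main><article><h1>Core heading</h1><p>Core paragraph.</p></article></main>
<footer>Footer notice</footer>
"""
parser = CoreContentExtractor()
parser.feed(page)
print(parser.chunks)  # ['Core heading', 'Core paragraph.']
```

Without the semantic tags, a parser has to guess which blocks are chrome, which is exactly the friction described above.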
The wrapper hierarchy changes three things you already do:
URL structure and categorization: Because each page inherits context from its parent category wrapper, URL structure determines what topical context every child page receives during annotation (the first gate in the phase I’ll cover in the next article: ARGDW).
A page at /seo/technical/rendering/ inherits three layers of topical context before the annotation system reads a single word. A page at /blog/post-47/ inherits one generic layer. Flat URL structures and miscategorized pages create annotation problems that might appear to be content problems.
Breadcrumbs validate that the page’s position in the wrapper hierarchy matches the physical URL structure (i.e., match = confidence, mismatch = friction). Breadcrumbs matter even when users ignore them because they’re a structural integrity signal for the wrapper hierarchy.
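The inheritance idea can be sketched in a few lines. This is purely illustrative of the wrapper model; the function and its page_slug parameter are hypothetical:

```python
def context_layers(url_path, page_slug=None):
    """Wrapper layers a page inherits from its URL path (illustrative model only).

    If the final segment is the page's own slug rather than a category,
    pass it as page_slug so it isn't counted as a wrapper layer.
    """
    segments = [s for s in url_path.strip("/").split("/") if s]
    if page_slug and segments and segments[-1] == page_slug:
        segments = segments[:-1]
    return segments

print(context_layers("/seo/technical/rendering/"))            # ['seo', 'technical', 'rendering']
print(context_layers("/blog/post-47/", page_slug="post-47"))  # ['blog']
```

Three topical layers versus one generic one: that difference is what the annotation system inherits before it reads a word.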
Meta descriptions: Google’s Martin Splitt suggested in a webinar with me that the meta description is compared to the system’s own LLM-generated summary of the page. If they match, a slight confidence boost. If they diverge, no penalty, but a missed validation opportunity.
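As a toy illustration of that comparison, here's a crude token-overlap (Jaccard) score; the real system presumably compares summaries or embeddings far more robustly, so treat this only as a model of the idea:

```python
def summary_alignment(meta_description, generated_summary):
    """Crude Jaccard token overlap as a stand-in for the comparison described above.

    A meta description that matches the page's own generated summary is a
    validation signal; divergence is merely a missed opportunity.
    """
    a = set(meta_description.lower().split())
    b = set(generated_summary.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

print(round(summary_alignment(
    "We help brands earn media coverage through storytelling.",
    "The agency helps brands earn media coverage with storytelling."), 2))  # 0.42
```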
Where conversion fidelity fails
Conversion fidelity fails when the system can’t figure out which parts of your page are core content, when your structure doesn’t chunk cleanly, or when semantic relationships fail to survive format conversion.
The critical downstream consequence that I believe almost everyone is missing: indexing and annotation are separate processes.
A page can be indexed but poorly annotated (stored but semantically misclassified). I’ve watched it happen in our database: a page is indexed, it’s recruited by the algorithmic trinity, and yet the entity still gets misrepresented in AI responses because the annotation was wrong.
The page was there. The system read it. But it read a degraded version (rendering fidelity loss at Gate 3, conversion fidelity loss at Gate 4) and filed it in the wrong drawer (annotation failure at Gate 5).
Processing investment: Crawl budget was only the beginning
The industry built an entire sub-discipline around crawl budget. That’s important, but once you break the pipeline into its five DSCRI gates, you see that it’s just one piece of a larger set of parameters: every gate consumes computational resources, and the system allocates those resources based on expected return. This is my generalization of a principle Canel confirmed at the crawl level.
| Gate | Budget type | What the system asks |
| --- | --- | --- |
| 1 (Selected) | Crawl budget | “Is this URL a candidate for fetching?” |
| 2 (Crawled) | Fetch budget | “Is this URL worth fetching?” |
| 3 (Rendered) | Render budget | “Is this page a candidate for rendering?” |
| 4 (Indexed) | Chunking/conversion budget | “Is this content worth carefully decomposing?” |
| 5 (Annotated) | Annotation budget | “Is this content worth classifying across all dimensions?” |
Each budget is governed by multiple factors:
- Publisher entity authority (overall trust).
- Topical authority (trust in the specific topic the content addresses).
- Technical complexity.
- The system’s own ROI calculation against everything else competing for the same resource.
The system isn’t just deciding whether to process but how much to invest. The bot may crawl you but render cheaply, render fully but chunk lazily, or chunk carefully but annotate shallowly (fewer dimensions). Degradation can occur at any gate, and the crawl budget is just one example of a general principle.
Structured data: The native language of the infrastructure gates
The SEO industry’s misconceptions about structured data run the full spectrum:
- The magic bullet camp that treats schema as the only thing they need.
- The sticky plaster camp that applies markup to broken pages, hoping it compensates for what the content fails to communicate.
- The ignore-it-entirely camp that finds it too complicated or simply doesn’t believe it moves the needle.
None of those positions is quite right.
Structured data isn’t necessary. The system can — and does — classify content without it. But it’s helpful in the same way the meta description is: it confirms what the system already suspects, reduces ambiguity, and builds confidence.
The catch, also like the meta description, is that it only works if it’s consistent with the page. Schema that contradicts the content doesn’t just fail to help: it introduces a conflict the system has to resolve, and the resolution rarely favors the markup.
When the bot crawls your page, structured data requires no rendering, interpretation, or language model to extract meaning. It arrives in the format the system already speaks: explicit entity declarations, typed relationships, and canonical identifiers.
In my model, this makes structured data the lowest-friction input the system processes, and I believe it’s processed before unstructured content because it’s machine-readable by design. Semantic HTML tells the system which parts carry the primary semantic load, and semantic structure is what survives the strip-and-chunk process best because it maps directly to the internal representation.
Schema at indexing works the same way: instead of requiring the annotation system to infer entity associations and content types from unstructured text, schema declares them explicitly, like a meta description confirming what the page summary already suggested.
The system compares, finds consistency, and confidence rises. The entire pipeline is a confidence preservation exercise: pass each gate and carry as much confidence forward as possible. Schema is one of the cleaner tools for protecting that confidence through the infrastructure phase.
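As a concrete example of an explicit entity declaration, here's a minimal JSON-LD Organization block, generated in Python for clarity. All names, URLs, and identifiers are hypothetical:

```python
import json

# A minimal Organization declaration (all values hypothetical). Embedded in a
# page as <script type="application/ld+json">, it arrives machine-readable:
# explicit type, explicit name, canonical identifier via sameAs.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",                          # hypothetical entity name
    "url": "https://example.com",                      # hypothetical entity home
    "sameAs": ["https://www.wikidata.org/wiki/Q0"],    # hypothetical canonical ID
}

json_ld = json.dumps(org_schema, indent=2)
print(json_ld)
```

The declaration works only if it agrees with the visible page content; a contradiction creates the conflict described above.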
That said, Canel noted that Microsoft has reduced its reliance on schema. The reasons are worth understanding:
- Schema is often poorly written.
- It has attracted spam at a scale reminiscent of keyword stuffing 25 years ago.
- Small language models are increasingly reliable at inferring what schema used to need to declare explicitly.
Schema’s value isn’t disappearing, but it’s shifting: the signal matters most where the system’s own inference is weakest, and least where the content is already clean, well-structured, and unambiguous.
Schema and HTML5 have been part of my work since 2015, and I’ve written extensively about them over the years. But I’ve always seen structured data as one tool among many for educating the algorithms, not the answer in itself. That distinction matters enormously.
Brand is the key, and for me, always has been.
Without brand, all the structured data in the world won’t save you. The system needs to know who you are before it can make sense of what you’re telling it about yourself.
Schema describes the entity and brand establishes that the entity is worth describing. Get that order wrong, and you’re decorating a house the system hasn’t decided to visit yet.
The practical reframe: structured data implementation belongs in the infrastructure audit, and it’s the format that makes feeds and agent data possible in the first place. But it’s a confirmation layer, not a foundation, and the system will trust its own reading over yours if the two diverge.
Why improve the infrastructure gates when you can skip them entirely?
The multiplicative nature of the pipeline means the same logic that makes your weakest gate your biggest problem also makes gate-skipping your biggest opportunity.
If every gate attenuates confidence, removing a gate entirely doesn’t just save you from one failure mode: it removes that gate’s attenuation from the equation permanently.
To make that concrete, here’s what the math looks like across seven approaches. The base case assumes 70% confidence at every gate, producing a 16.8% surviving signal across all five in DSCRI. Where an approach improves a gate, I’ve used 75% as the illustrative uplift.
These are invented numbers, not measurements. The point is the relative improvement, not the figures themselves.
| Approach | What changes | Entering ARGDW with |
| --- | --- | --- |
| Pull (crawl) | Nothing | 16.8% |
| Schema markup | I → 75% | 18.0% |
| WebMCP | R skipped | 24.0% |
| IndexNow | D skipped, S → 75% | 25.7% |
| IndexNow + WebMCP | D skipped, S → 75%, R skipped | 36.8% |
| Feed (Merchant Center, Product Feed) | D, S, C, R skipped | 70.0% |
| MCP (direct agent data) | D, S, C, R, I skipped | 100% |
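The percentages above fall out of straightforward multiplication: each non-skipped gate attenuates the signal, and a skipped gate simply drops out of the product. A minimal sketch in Python, using the same invented 70%/75% figures (these are the article's illustrative assumptions, not measurements):

```python
# Surviving confidence entering ARGDW: multiply the confidence of every
# DSCRI gate that is NOT skipped. Gates: Discovery, Selection, Crawling,
# Rendering, Indexing. All figures are illustrative, per the article.

def surviving_signal(gates):
    """gates maps gate name -> confidence; None means the gate is skipped."""
    signal = 1.0
    for conf in gates.values():
        if conf is not None:  # a skipped gate contributes no attenuation
            signal *= conf
    return signal

base = {"D": 0.70, "S": 0.70, "C": 0.70, "R": 0.70, "I": 0.70}

pull = surviving_signal(base)                                        # ~16.8%
schema = surviving_signal({**base, "I": 0.75})                       # ~18.0%
webmcp = surviving_signal({**base, "R": None})                       # ~24.0%
indexnow = surviving_signal({**base, "D": None, "S": 0.75})          # ~25.7%
combo = surviving_signal({**base, "D": None, "S": 0.75, "R": None})  # ~36.8%
feed = surviving_signal({**base, "D": None, "S": None, "C": None, "R": None})  # 70.0%

for name, value in [("Pull", pull), ("Schema", schema), ("WebMCP", webmcp),
                    ("IndexNow", indexnow), ("IndexNow + WebMCP", combo),
                    ("Feed", feed)]:
    print(f"{name}: {value:.1%}")
```

The multiplicative structure is the whole point: improving one gate from 70% to 75% nudges the product, while removing a gate from the product entirely is a much larger jump.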
The infrastructure phase is pre-competitive. The annotation, recruited, grounded, displayed, and won (ARGDW) gates are where your content competes against every alternative the system has indexed. Competition is multiplicative too, so what you carry into annotation is what gets multiplied.
A brand that navigated all five DSCRI gates with 70% enters the competitive phase with 16.8% confidence intact. A brand on a feed enters with 70%. A brand on MCP enters with 100%. The competitive phase hasn’t started yet, and the gap is already that wide.
There’s an asymmetry worth naming here. Getting through a DSCRI gate with a strong score is largely within your control: the thresholds are technical, the failure modes are known, and the fixes have playbooks.
Getting through an ARGDW gate with a strong score depends on how you compare to all the alternatives in the system. The playbooks are less well developed, some don’t exist at all (annotation, for example), and you can’t control the comparison directly — you can only influence it.
Which means the confidence you carry into annotation is the only part of the competitive phase you can fully engineer in advance.
Optimizing your crawl path with schema, WebMCP, IndexNow, or combinations of all three will move the needle, and the table above shows by how much. But a feed or MCP connection changes what game you’re playing.
Every content type benefits from skipping gates, but the benefit scales with the business stakes at the end of the pipeline, and nothing has more at stake than content where the end goal is a commercial transaction.
The MCP figure represents the best case for the DSCRI phase: direct data availability bypasses all five infrastructure gates. In practice, the number of gates skipped depends on what the MCP connection provides and how the specific platform processes it. The principle holds: every gate skipped is an exclusion risk avoided and potential attenuation removed before competition starts.
A product feed is only the first rung. Andrea Volpini walked me through the full capability ladder for agent readiness:
A feed gives the system inventory presence (it knows what exists).
A search tool gives the agent catalog operability (it can search and filter without visiting the website).
An action endpoint tips the model from assistive to agentic — the agent doesn’t just recommend the transaction, it closes it.
That distinction is what I built AI assistive agent optimization (AAO) around: engineering the conditions for an agent to act on your behalf, not just mention you.
Volpini’s ladder makes the mechanic concrete: each rung skips more gates, removes more exclusion risk, and eliminates more potential attenuation before competition starts. A brand with all three is playing a different game from a brand that’s still waiting for a bot to crawl its product pages.
Note: Always keep this in mind when optimizing your site and content — make your content friction-free for bots and tasty for algorithms.
DSCRI gates are absolute tests; ARGDW gates are competitive tests. The pivot is annotation.
Five gates. Five absolute tests. Pass or fail (and a degrading signal even on pass).
The solutions are well documented:
Discovery failures fixed with sitemaps and IndexNow.
Selection failures with pruning and entity signal clarity.
Crawling failures with server configuration.
Rendering failures with server-side rendering or the new pathways that bypass the problem entirely.
Indexing failures with semantic HTML, canonical management, and structured data.
The infrastructure phase is the only phase with a playbook, and opportunity cost is the cheapest failure pattern to fix.
But DSCRI is only half the pipeline, and it’s the easiest to deal with.
After indexing, the scoreboard turns on. The five competitive gates (ARGDW) are competitive tests: your content doesn’t just need to pass, it needs to beat the competition. What your content carries into the kickoff stage of those competitive gates is what survived DSCRI. And the entry gate to ARGDW is annotation.
The next piece opens annotation: the gate the industry has barely begun to address. It’s where the system attaches sticky notes to your indexed content across 24+ dimensions, and every algorithm in the ARGDW phase uses those notes to decide what your content means, who it’s for, and whether it deserves to be recruited, grounded, displayed, and recommended.
Those sticky notes are the be-all and end-all of your competitive position, and almost nobody knows they exist.
In “How the Bing Q&A / Featured Snippet Algorithm Works,” in a section I titled “Annotations are key,” I explained what Ali Alvi told me on my podcast, “Fabrice and his team do some really amazing work that we actually absolutely rely on.”
He went further: without Canel’s annotations, Bing couldn’t build the algos to generate Q&A at all. A senior Microsoft engineer, on the record, in plain language.
The evidence trail has been there for six years. That, for me, makes annotation the biggest untapped opportunity in search, assistive, and agential optimization right now.
This is the third piece in my AI authority series.
When people speak naturally, their language flows. It’s often messy, incomplete, and not especially coherent. The Google search bar, however, required something different. Users had to compress their needs into short phrases or slightly longer queries — what’s traditionally classified as short-tail or long-tail.
To make that work, users stacked queries across a journey, moving through a funnel from A to B and refining as they went. In the process, users often stripped out personalized nuance to match what they believed the search engine could understand. In response, SEO professionals built systems around that constraint, grouping queries by search volume, categorizing them by a limited set of intents, and measuring competitiveness.
That dynamic is changing. SEOs need to understand the behavioral change that’s emerging. Google is promoting Gemini, and phone manufacturers like Samsung are marketing AI-enabled features as product USPs. Alongside this product marketing, there’s also a level of education happening. Users are being encouraged to be more expressive with their queries, personalize their searches, and describe what they’re looking for in greater depth.
Moving from keyword research to prompt research
This is where we need to move away from the notion of keyword research to prompt research. Keyword research traditionally assumes that demand can be quantified, that variations can be listed and grouped, and that optimization happens at a phrase level or a cluster level. In the new hybrid AI and organic search world, demand is much more of a generative concept. Prompts can be written in countless ways while preserving the same underlying need.
This doesn’t make keyword research obsolete, but it does change its focus. Instead of extracting keywords from tools as we’ve done, we also need to start understanding and modeling journeys. Instead of grouping by volume alone, we need to group by decision stage and the type and level of uncertainty the user has.
The output of this process isn’t simply a keyword map, but a task map that accurately reflects the real pressures and constraints experienced by the audience. This is an evolution from short-tail and long-tail keyword research to an infinite tail of prompt research.
You can describe the infinite tail as an expansion of the long tail. But that underestimates what’s actually changing. It’s not just about more niche phrases or longer query strings. It’s about the level of personalization that’s been layered into each request.
As users add context, constraints, and preferences, prompts become unique combinations of a multitude of factors. The number of possible combinations effectively becomes infinite, even if the underlying tasks remain finite. AI systems respond by evaluating the given prompts and probabilistically predicting the next tokens rather than using exact-match strings.
It’s less about how you rank for a specific keyword or whether you’re visible in AI for a specific phrase. It becomes whether your content has the highest probability of satisfying the situation being described. That’s a different optimization problem altogether. You’re not competing on phrasing. You’re competing on task completion.
This part of the journey is where “fuzzy searches” happen, meaning the path isn’t a straight line. Success isn’t just about finishing a task. It’s about making sure the user actually found what they were looking for. Since every user moves differently, the process is flexible rather than a set of rigid steps.
One of the most important mechanics in AI search is query fan-out. When a complex prompt is submitted, the system doesn’t treat it as a single string. Instead, it decomposes a request into a network of subquestions, classifications, and checks that together form a broader evaluation framework.
From an SEO perspective, this means your content moves beyond evaluation against a single phrase or specific document matches. Instead, it’s assessed across a network of related questions, with a collective determination of whether it can satisfy a broader task.
In a fan-out world, you win by supporting the entire decision cluster that surrounds that term. If your content addresses only one narrow dimension of the task, it becomes fragile. If it supports multiple layers of the decision, it becomes resilient. Fan-out rewards structural coverage and contextual relevance rather than repetition of specific phrases.
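To make the fan-out mechanic concrete, here's a toy sketch: one prompt decomposed into subquestions, and pages scored by how much of the decision cluster they cover. Every prompt, subquery, topic list, and the crude keyword-overlap check below are invented for illustration; real systems use learned decomposition and retrieval, not hand-written lists.

```python
# Hypothetical illustration of query fan-out scoring. All names and the
# relevance check are invented; this only demonstrates the mechanic.

prompt = "best lightweight trail running shoes for wide feet under $150"

# The system decomposes one prompt into a network of subquestions.
fan_out = [
    "trail running shoes with wide toe box",
    "lightweight trail running shoes",
    "trail running shoes under $150",
    "trail vs road running shoe differences",
    "durability of budget trail shoes",
]

def coverage_score(page_topics, subqueries, covers):
    """Fraction of the fan-out network a page can answer.
    `covers(subquery, topic)` is any relevance check returning True/False."""
    hits = sum(any(covers(q, t) for t in page_topics) for q in subqueries)
    return hits / len(subqueries)

# Toy relevance check: at least three shared words.
def covers(query, topic):
    return len(set(query.split()) & set(topic.split())) >= 3

narrow_page = ["lightweight trail running shoes"]
broad_page = [
    "lightweight trail running shoes",
    "wide toe box trail shoes guide",
    "budget trail shoes under $150 durability",
    "trail vs road running shoe differences",
]

print(coverage_score(narrow_page, fan_out, covers))  # 0.6
print(coverage_score(broad_page, fan_out, covers))   # 1.0
```

The narrow page answers only part of the cluster and is fragile; the broad page supports multiple layers of the decision and scores higher against the same fan-out, which is the resilience the paragraph above describes.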
Grounding queries give the LLM a level of confidence in the answers assembled from its fan-out queries. AI systems generate answers and then attempt to validate them.
They're used to check whether a proposed answer is supported elsewhere, whether claims are consistent across sources, and whether the entity behind the information is reputable. If an AI system includes your brand in a summarized response, it needs enough confidence to defend that inclusion if challenged by alternative information.
This changes the meaning of authority. In traditional SEO, ranking could be achieved through technical content, links, and other forms of manipulation. In AI search, selection also depends on how easily your content can be corroborated against a broader consensus within the cohort. This can involve factors tied to entity clarity, including structure, data consistency, consistent messaging, and external validation. These signals reduce uncertainty for the system. You’re not just trying to appear. You’re trying to be selected and defended.
Organic search isn’t disappearing. Ranking still influences discovery, technical SEO still shapes crawlability, and architecture still determines how well a site and its content are understood.
But now, AI layers sit on top, synthesizing information and influencing which brands are surfaced within conversational responses. In this hybrid environment, organic visibility feeds AI selection. The two aren't mutually exclusive, but they aren't fully codependent either.
AI selection can reinforce brand perception, and fan-out rewards depth of current coverage. Grounding then rewards trust and consistency. This is where the infinite tail rewards genuine audience understanding and the creation of websites and content systems that support it.
This is a shift from keyword research to prompt research, and not just a cosmetic renaming of the process. Success will depend on understanding why people search, the decisions they’re making, the uncertainties they face, and the evidence they need before committing. Search increasingly revolves around satisfying situations rather than matching strings. Designing for the infinite tail means designing for people and the tasks they’re trying to complete.
“Content is king” remains one of the most widely accepted ideas in SEO. Not everyone has agreed. Different schools of thought have always existed, with some practitioners prioritizing backlinks and others focusing on technical SEO.
Content is often treated as the primary driver of search visibility. I’m not arguing that.
My point is simpler: if you’ve relied on content to drive results — and earn a living — you should start doubling down on distribution.
With AI search changing the game, creating great content (and, yes, building some backlinks) is no longer enough to get it seen. The more important question may no longer be “What should I write next?” but “Where should I push this next?”
AI tools are further fragmenting search
Content distribution has become far more important in recent years, especially as audiences spread across more online spaces. In many teams, this job was usually outsourced to someone other than SEOs:
Social media managers.
Community managers.
PR specialists.
Various assistants and interns.
Sure, distribution held some value to SEO, but it was generally considered more beneficial to other functions.
Thanks to AI search, it’s finally landed squarely on our plate. Since AI models have fragmented search to an unprecedented level, distribution is now key to meaningful SEO outcomes.
There are three key drivers behind this change:
Different tools have different sourcing logic.
AI tools source differently from traditional search.
Their logic is changeable.
If this all sounds a bit abstract, let’s briefly dig into the evidence and explain what’s really going on.
Different tools have different sourcing logic
Search is fragmenting as people use a wider range of tools. Ideally, one strategy would work everywhere, but research shows that’s not the case.
AI search tools cite different sources, a 2025 Search Atlas study found. Some show significantly more overlap with the SERPs than others. This indicates that different tools follow different sourcing logic. And as long as that’s true, optimizing for one won’t necessarily boost visibility on another.
The whole thing is even trickier because users seem more open to switching tools than before. Gemini may soon surpass the formerly unrivaled ChatGPT in traffic share, according to Similarweb. That could change again quickly.
Thinking there’s a single clear winner, like Google used to be, would be wrong. Focusing on the most popular tool at the moment isn’t a guaranteed strategy.
To maximize visibility, we need to consider how multiple AI tools source their information, which implies our distribution strategy needs to be broad.
AI search uses different logic from traditional search
The Search Atlas study showed that some AI search tools overlap with Google more than others — but in all cases, the overlap is pretty low. Perplexity ranked the highest at 43%, while ChatGPT barely hit 21%.
“Characterizing Web Search in the Age of Generative AI” (PDF) explicitly finds that AI search tools draw from a much wider pool of sources and are more likely than traditional search engines to cite sites with fewer visits.
This shows us that fragmentation is compounding. The pool of potential sources is wider, with little overlap among AI tools or between AI and traditional search.
The sourcing logic is changeable
The most problematic factor of all, though, is that the sourcing logic of any one tool can, and often does, change. This leads to different domains getting cited for the same prompts at different points in time — a phenomenon called citation drift.
Citation drift is more frequent than we might assume. Over the course of just a month, for instance, AI tools change approximately 40-60% of the domains they cite for the same prompt, according to Profound.
In other words, one domain could appear several times in a single response, then disappear completely the following month. This flip-flopping gets even worse over longer periods. For example, Profound’s study also showed that, from January to July, as many as 70% to 90% of the domains cited for the same prompt had changed.
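If you want to watch drift on your own prompt set, the underlying measurement is simple set turnover between snapshots. A minimal sketch (the domain names are invented; a real setup would log the domains cited in each AI response for the same prompt over time):

```python
# Hypothetical sketch: quantifying citation drift for one prompt by
# comparing the domains cited in two monthly snapshots.

def citation_drift(month_a, month_b):
    """Share of month A's cited domains that no longer appear in month B."""
    a, b = set(month_a), set(month_b)
    if not a:
        return 0.0
    return len(a - b) / len(a)

january = ["siteA.com", "siteB.com", "siteC.com", "siteD.com", "siteE.com"]
february = ["siteA.com", "siteB.com", "siteF.com", "siteG.com", "siteH.com"]

drift = citation_drift(january, february)
print(f"{drift:.0%} of January's cited domains were gone by February")  # 60%
```

In this invented example, three of January's five domains have dropped out by February — a 60% turnover, squarely in the 40-60% monthly range Profound reports.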
Search is fragmented across tools and time. As cited domains change more frequently, users see more sources, making it even harder for you to push your brand to the front.
So, what can we do about it? How should we approach this increasing fragmentation of search?
While this might change as new tools and strategies emerge, the best answer we have so far is this: focus on broad, multi-channel distribution.
When you can’t reliably predict which sources will be used, the best strategy is to widen your footprint. This creates more potential entry points into AI systems’ training and retrieval layers.
This will require some serious shifts in how many SEOs approach their work. Here are a few you can implement right away.
1. Get good at collaborating
You’re unlikely to win fragmented AI search on your own. Optimizing for it now takes a much broader approach than before, pulling in digital PR, social media, community management, and other functions.
Those areas require skills many SEOs don’t have. Those who do still have only 24 hours in a day, so spreading that work across multiple disciplines isn’t realistic.
This only works with a team. You might hate that idea, especially because it means giving up full control of your projects and results. I get it, but that’s the reality right now. You’ll have to let some things go, trust others to handle them, and divide responsibilities. In other words, you’ll need to collaborate efficiently.
2. Develop new skills
Even if you let experts handle certain tasks, you'll still need at least a surface-level understanding of the other disciplines becoming central to search.
SEOs will still own at least parts of distribution, whether that means setting the high-level strategy or executing it directly on specific channels.
In either case, doing this well requires skills you may not have used much before. So now’s the time to develop them.
That could mean learning more about digital PR, partnerships, thought leadership, syndication, community presence, or something else. With so many possibilities, it helps to start with the area you feel most comfortable with or most drawn to at the moment.
3. Shift your mindset from ranking to presence
You also need to change how you think about SEO, and then translate that shift into actual workflows. Google is still a major traffic driver, and rankings still matter. But for a fragmented, AI-driven search, obsessing over rank won’t cut it.
Instead of asking, “How do I get this content to rank?” you now need to ask, “How do I get this content into as many places as possible?”
Again, the goal is to create multiple entry points across AI systems, platforms, and audiences, increasing the chances of your content getting discovered, cited, and surfaced.
That’s why it’s important to start thinking more about overall presence across ecosystems rather than just positions in specific search engines.
4. Redesign your workflow
If you’ve successfully shifted your mindset from ranking to presence, it’s time to build a workflow that reflects that change.
I know firsthand how easy it is to forget about distribution, especially if it wasn’t part of your process before. To make it stick, you need to redesign your workflow to position distribution at the core.
A good place to start is by adding a launch phase, where content is distributed immediately upon publishing. After that, you could include a recurring phase every few months to ensure you regularly refresh and redistribute content.
Define reusable details upfront, like which channels you’ll consistently target and who owns each one. That way, you’ll minimize planning from scratch and make sure nothing falls through the cracks.
5. Start with these easy-to-implement best practices
Finally, if you want some easy tactics to immediately add to your to-do list, consider these:
Pilot content partnerships, starting where it’s easiest. Usually, that implies reaching out to existing business partners first.
Proactively distribute your content on third-party sites, whether that means syndicating it or repurposing it for Quora and LinkedIn.
Pay attention to where AI tools already pull from. While sourcing logic changes constantly, you may still notice recurring patterns worth leveraging.
Give a special push to your existing, older content to counteract the pitfalls of citation drift. Reintroduce it on new channels, or work to get it referenced in new places.
The shifts are large enough that you’ll need to rethink how you do SEO. As search fragments, the work itself will have to evolve.
The approaches and workflows you relied on in the past won’t translate cleanly into a landscape shaped by multiple AI tools, changing sourcing logic, and constantly shifting citations.
These processes will also become more complex because they require closer collaboration with other teams. Distribution now intersects with digital PR, social media, partnerships, and community management, making cross-team coordination more important than before.
There’s a long road ahead. The best way to keep your sanity is to start small: focus on manageable steps, take them one at a time, and build from there.
If you’ve been in marketing long enough, you’ve probably lived through a few identity crises. First, we were channel experts. Then, we became integrated marketers, growth marketers, and performance marketers. Somewhere along the way, someone added “AI” to everyone’s job description and called it a day.
Now, we’re entering the era of the full-stack marketer. From where I sit — particularly as a media leader — the role is starting to look a lot like product management.
This doesn’t mean you need to start writing Jira tickets for fun (though some of you already do). It means that tomorrow’s most effective media leaders won’t just optimize campaigns. They’ll own outcomes, connect dots across teams, and think holistically about the entire user experience, from first impression to final conversion (and beyond).
I’ve seen this shift most clearly in industries with long consideration cycles, multiple stakeholders, and rising acquisition costs — where marketing performance is inseparable from the experience itself.
Let’s break down what’s driving the rise of the full-stack marketer, what it really means to “think like a product manager,” and why this mindset is becoming non-negotiable for media leaders.
What is a full-stack marketer, anyway?
A full-stack marketer isn’t someone who does everything (burnout isn’t a job requirement). Instead, it’s someone who understands how everything works together.
Over the course of my career, I’ve learned that the most impactful media decisions rarely come from being the deepest expert in one area. They come from having working fluency across many:
Media and channels: Paid search, paid social, programmatic, CTV, SEO, email, SMS, and whatever new acronym launches next quarter.
Creative and messaging: Knowing what resonates, where, and why.
Data and analytics: Not just reading dashboards, but asking better questions of the data.
UX and CRO: Understanding friction, intent, and user behavior.
Technology and platforms: CRMs, CMSs, marketing automation, and attribution tools.
The full-stack marketer doesn’t need to be the deepest expert in every area, but they do need to know enough to connect insights, spot gaps, and make informed trade-offs. In practice, this means constantly zooming out to see the system and zooming back in when something breaks.
Why media leaders are evolving into product thinkers
Earlier in my career, media leadership was often defined by questions like:
Are we hitting CPA targets?
Which channels are driving the most conversions?
How do we allocate budget more efficiently?
Those questions still matter. I ask them all the time. But over the years, I’ve learned they’re no longer sufficient on their own. Today’s environment forces media leaders to grapple with bigger, messier questions:
Why are conversion rates declining even when traffic is strong?
Where are prospects dropping out of the funnel, and why?
How does media performance change when the application experience changes?
What happens after the lead submits?
These are product questions. Product managers obsess over the end-to-end experience: the user journey, friction points, trade-offs, and outcomes. Media leaders who adopt this mindset stop seeing campaigns as isolated efforts and start seeing them as inputs into a broader system.
In many of the industries I’ve worked in, that system is anything but simple.
Marketing performance rarely exists in isolation. In many industries (especially those with longer decision cycles), a click is just the beginning, not the win.
Whether you’re selling financial services, healthcare, or education, prospects move through nonlinear journeys influenced by multiple touchpoints, stakeholders, and moments of friction. This is where full-stack thinking becomes critical.
Example 1: When media isn’t the problem, the experience is
I’ve lost count of how many times I’ve heard this reaction when performance starts slipping: “The platform is getting more expensive.”
Sometimes that’s true. But a product-minded media leader asks deeper questions:
Has the conversion experience changed recently?
Did we add steps, fields, or requirements?
Are we driving mobile traffic to a hostile desktop experience?
Across industries, I’ve repeatedly seen strong intent at the keyword or audience level, healthy CTRs, and solid landing-page engagement followed by a steep drop-off at the point of conversion. It’s a product experience problem.
In higher ed, this often shows up when high-intent program traffic is routed to lengthy or confusing application flows, generic inquiry forms, or experiences that don’t match the promise of the ad, especially on mobile. Prospective students signal strong intent, only to hit friction that has nothing to do with media and everything to do with the experience they’re asked to navigate.
A full-stack marketer doesn’t just flag this: they bring data, partner cross-functionally, and help prioritize fixes based on impact.
Example 2: Different audiences, different ‘products’
One of the most important product principles is that not all users are the same, and they shouldn’t be treated that way.
Many organizations market to multiple audiences at once, each with different motivations, risk tolerance, and timelines. Treating them as if they’re buying the same “thing” is a fast track to average results.
A product-minded media leader understands that:
The value proposition changes by audience.
The conversion event may be different.
The decision timeline is almost certainly different.
I’ve seen this clearly in healthcare, where patients, caregivers, and referring providers evaluate the same organization through entirely different lenses. Financial services presents a similar challenge, with banking, investment, and insurance decisions varying dramatically by life stage and goals.
Full-stack marketers adapt media strategy accordingly, from channel mix to messaging to measurement. This is because they understand product-market fit, not just audience targeting.
Example 3: What happens after the conversion
One of the biggest blind spots in media strategy is what happens after someone converts. Product thinkers ask:
How quickly does someone follow up?
Is the first touch personalized or generic?
Does the message align with the promise of the ad?
I’ve seen performance improve without changing media at all, simply by improving speed-to-lead or aligning follow-up messaging with campaign intent.
Healthcare offers especially clear examples of this dynamic due to intake workflows, appointment scheduling, and care coordination, but the principle is universal: media doesn’t end at the form fill. The full-stack marketer is accountable for conversions and outcomes.
Another hallmark of product management is roadmap thinking: prioritizing initiatives based on impact, effort, and sequencing. Full-stack media leaders bring this same approach to marketing:
Short-term wins versus long-term bets.
Testing frameworks instead of one-off experiments.
Phased rollouts, such as layering in audience-based creative and messaging over time.
Instead of chasing the “next shiny channel,” full-stack marketers focus on compounding gains.
Data fluency: Asking better questions
Product managers don’t just look at metrics. They interrogate them. The same should be true for media leaders. Instead of asking, “What’s the CPA?” I’ve learned to ask:
“Which segments are converting efficiently, and which aren’t?”
“How does performance differ by device, geography, or life stage?”
“What signals indicate readiness vs. research?”
In higher ed, this might mean:
Separating brand vs. non-brand intent.
Looking at assisted conversions.
Evaluating performance by program.
Data becomes a tool for decision-making.
Collaboration is the new superpower
Full-stack marketers are inherently collaborative because they have to be. In higher ed, success often requires alignment across:
Admissions.
Enrollment marketing.
IT and web teams.
Academic leadership.
External partners.
Media leaders who think like product managers don’t just execute requests. They help stakeholders understand trade-offs, prioritize initiatives, and rally around shared goals. They also translate data into stories people can act on.
So, what does this mean for tomorrow’s media leaders?
The rise of the full-stack marketer doesn’t mean specialization is dead. It means seeing the entire system matters more than optimizing any single piece of it.
From my perspective, tomorrow’s strongest media leaders will:
Understand the business behind the campaign.
Think beyond their channel.
Advocate for the user experience.
Use data to inform and influence.
Embrace ambiguity (and occasionally chaos).
In categories where trust, timing, and transformation are at the core of the “product,” this mindset is no longer optional.
At its heart, marketing here is more than campaigns. It’s guiding life-changing choices. If you’re a media leader feeling like your role is expanding faster than your job description — congratulations! You’re not losing focus. You’re evolving.
Buying AI capabilities to drive marketing is easy. Enabling marketing teams to actually use it independently, decisively, and at scale is far harder.
The main culprit? Humans.
Marketing teams have always had the same elusive goal: to move at the pace of the consumer. Responding to each customer’s needs in real time, delivering the relevant message at the right moment, and optimizing customer lifetime value to drive loyalty and ROI. The goal is not new.
What is perpetually new are the AI technologies available to analyze consumer data and generate instant, personalized messaging at scale. But while technology evolves rapidly, the ability of marketing teams to harness it independently and decisively has not kept pace. The main obstacle is organizational: most marketing teams have not structured themselves to extract full value from the technology they already have.
This is not to say that there is no progress. There is. Marketing teams that have crossed that chasm are seeing extraordinary results.
One case in point is Caesars Entertainment, which reduced campaign execution time from five days to five minutes. Asadul Shah, vice president of player revenue strategy, called it “a massive game changer.”
Before that transformation, Caesars marketers manually built targeting lists across disconnected systems, coordinated across multiple tools and waited on engineers, analysts and creative teams before anything could go out. The result was an operation too slow to target players with the precision and timing the market demanded.
Caesars worked with Optimove to consolidate data, orchestration and execution in one platform. Shah noted the transformation made marketing “not just more efficient; it is more responsive to what our players actually need in the moment.”
What made it work was not technology alone. Caesars implemented Positionless Marketing, a framework that frees marketing teams from fixed roles, giving every marketer the power to execute any task instantly and independently. Optimove provided the platform. Caesars built the team structure to make it real. Technology and human ingenuity working together making Positionless Marketing possible.
Any organization achieving this kind of transformation is doing what McKinsey calls “organizing to value,” a fundamental rethink of structure, decision-making and accountability that turns a marketing team into an operation built to drive value continuously. For marketing, that means becoming a Positionless team that optimizes customer lifetime value, drives loyalty and delivers measurable ROI.

Below, we use McKinsey’s Organize to Value framework to outline the pitfalls that block Positionless Marketing and the blueprint to build teams that can execute any marketing task, instantly and independently.
The six pitfalls inhibiting the transition to Positionless Marketing
McKinsey has identified six core problems preventing marketing teams from successfully evolving into the Positionless model. Of these, only one is about technology. All the others are about how leaders and teams are getting in their own way.
Unclear objectives push teams toward activity metrics instead of outcomes. When marketing goals are vague, execution defaults to roles and handoffs rather than impact.
Misaligned governance creates approval layers that add days to decisions that should be faster. In marketing, excessive controls directly conflict with the speed required to deliver customer value.
Uncommitted leaders manage through silos rather than enabling autonomy, preventing marketing teams from evolving past role-based dependency.
Stagnant marketing culture resists experimentation even when the right tools are in place, slowing execution regardless of technology investment.
Muddled marketing execution, with unclear process ownership, leaves no single person accountable for results, and performance erodes accordingly.
Disconnected technology reinforces data compartmentalization and separation of tasks among sub-teams, making strategic alignment and agile responses virtually impossible.
These are the realities of assembly-line marketing operations — not Positionless ones. Insights live with analysts. Creativity lives with designers. Activation lives with engineers. Value disappears in the spaces between them.

The assembly line was built for control. It was never built to deliver value.
How McKinsey’s blueprint helps build Positionless Marketing teams (and why the effort pays off)
McKinsey’s “Organize to Value” blueprint proposes a fundamental shift: design organizations around value creation, clear outcomes, impact over job titles, and minimal-friction execution. It provides the foundation to become Positionless and build the conditions for marketing teams to keep customers for life.
To make Positionless Marketing a reality, marketing leaders should focus on pragmatic application and the aspects that most influence marketing execution.
Start with purpose and behavior. Make explicit why actions are taken, alongside what is delivered. A shared sense of purpose allows teams to make fast decisions without waiting for approval on each one.
Restructure work around outcomes and accountability. Map current processes and identify where approvals slow execution without adding value. Build cross-functional flexibility over time rather than reorganizing overnight.
Leadership and processes. Establish a clear decision-to-execution flow and set explicit expectations for how fast each part of the marketing process should move. Processes should enable flow, not control.
Governance, technology and talent. Effective governance ensures consistency without slowing execution. Technology and AI should unlock new value, not just automate existing processes. And talent should be deployed based on what the work requires, not what a title suggests.
Empower marketers to act beyond their role. Once purpose, accountability, process and technology are aligned, marketers should be free to step across traditional job functions and execute independently as Positionless Marketers. The measure of success is not role compliance; it is value delivery.
These changes require sustained commitment. But the alternative (an assembly-line structure that was never built to deliver customer value) is far costlier than the transformation itself.
The results speak for themselves. In addition to Caesars:
FDJ United implemented Positionless Marketing to eliminate overlapping platforms, remove reliance on other teams wherever possible and enable continuous improvement through real-time measurement. Campaign time was slashed from six weeks to hours, with end-to-end campaigns now executed by one marketer from ideation to analysis.
A major retailer achieved a 16.1x increase in purchase rates while saving 300 working hours per year with the same team size. The shift to Positionless Marketing allowed the team to scale personalization and impact without adding headcount, demonstrating that the framework’s value is not just speed of execution but the ability to do fundamentally more with what you already have.
The window to act is narrowing
The technology and AI tools are here and ever evolving. Today, AI generates infinite creative variants. Data platforms surface real-time behavioral signals. Decisioning engines coordinate across channels instantly.
But technology layered on top of an assembly-line structure creates the illusion of progress. The same handoffs happen. The same approvals add the same delays. Speed arrives at the edge; the bottleneck stays in the middle.
External pressures are accelerating. Customers expect personalization and the best experience across all channels. Competition is rising and growing more complex.
Marketing leaders who wait for transformation will find their competitors have already made it. The ones moving first are pulling ahead.
McKinsey confirms what the best marketing teams already know: the right structure and technology unleash human potential — and vice versa. Smart people trapped in the wrong system will still underperform. The best AI tools in the world won’t deliver results when constrained by the wrong organization.
McKinsey’s blueprint points the way. Positionless Marketing is the destination.
Google Ads is set to enhance the viewer experience of Performance Max video ads with an innovative asset optimization feature. Leveraging advanced AI voice models, this update aims to infuse video ads with realistic voice-overs, ultimately enhancing user engagement and ad performance.
Why we care. Advertisers who don’t actively opt out by March 20 will have their video ads automatically enhanced with Google’s AI voice models, changing how their ads sound to viewers without requiring any creative production work.
How it works.
The feature only activates on videos that don’t already contain a voice track
Google’s AI selects text from advertiser-provided headlines and descriptions, then generates a realistic voice-over from that copy
The voice-over is layered onto the existing base video and saved as a new video asset
The catch. This is opt-out, not opt-in. The default setting means ads will be automatically eligible for voice enhancement unless advertisers proactively disable it.
Key dates. Advertisers can choose to exclude their ads from this feature until March 20. To do so, they must opt out of the video enhancement control. After the opt-out period, all ads with video enhancement control enabled will automatically be eligible for voice-enhanced versions.
Action steps for advertisers. Advertisers can adjust their video settings by visiting their ads in Google Ads.
First seen. Paid search expert Arpan Banerjee shared the update on LinkedIn.
OpenAI is updating its privacy policy with new details on ads, data usage and upcoming features across its products, including ChatGPT.
The update was shared with ChatGPT users and outlines how advertising will work inside ChatGPT — and what data advertisers can and cannot access.
Why we care. OpenAI’s update makes it clear that user privacy is a top priority: personal chats, histories, and details are never shared with advertisers. Ads can still be personalized using anonymized engagement signals, meaning brands can reach relevant audiences without compromising sensitive data.
This approach lets advertisers measure performance safely while building trust with users in a privacy-conscious environment.
Ads in ChatGPT. Ads may appear for users on Free and Go plans, while paid tiers — Plus, Pro, Enterprise, Business and Education — will remain ad-free. OpenAI says ads will always be clearly labeled as sponsored and visually separated from chatbot responses.
The company also stresses that advertising will not influence answers generated by ChatGPT.
How ad targeting works. OpenAI says ads may be personalized using signals that stay within ChatGPT, such as ad interactions or the context of a user’s chat. However, the company says advertisers will not have access to conversations, chat history, personal details or user memories.
Instead, advertisers will only receive aggregated performance metrics such as total views or clicks.
Other privacy updates. The revised policy also introduces optional contact syncing to help users find friends who use OpenAI services. Users can choose whether or not to enable this feature.
OpenAI also added new transparency around how long data is stored, how it is processed and what controls users have over it.
Safety and product changes. The policy update also references new tools and safeguards, including age prediction systems designed to create safer experiences for teens. OpenAI also added documentation for newer features and projects such as Atlas, Sora 2 and parental controls for teen accounts.
Bottom line. As OpenAI expands advertising in ChatGPT, the company is emphasizing strict boundaries around user privacy — promising advertisers performance insights without access to personal conversations or user data.
First seen. Paid media expert Arpan Banerjee first shared this update, along with tips, on LinkedIn.
Google has confirmed that Google Marketing Live 2026 will take place on May 20, when the company is expected to unveil its latest updates across advertising, AI, measurement and campaign automation.
The date surfaced in an email received by PPC News Feed owner Hana Kobzová from the Accelerate with Google program, which invited participants to submit entries for the Google Ads Impact Awards.
According to the message, winners of the awards will be announced during Google Marketing Live 2026.
Why we care. The annual event has become one of the biggest announcement days for advertisers using Google Ads. Google Marketing Live is where Google typically announces its biggest changes to Google Ads — including new AI features, campaign types and measurement tools that can directly impact how campaigns are built and optimized.
Many of Google’s most significant advertising updates each year are first revealed at this event, meaning it often shows where the platform — and advertisers’ strategies — are heading next.
The bigger picture. The event will land during the same window as Google I/O 2026, scheduled for May 19–20. While I/O focuses on Google’s broader ecosystem — including AI, Search and developer technologies — announcements there often influence the direction of advertising products.
What to watch. Expect updates tied to AI-driven advertising, automation and new ways to measure performance across Google’s platforms. For marketers, the event often sets the tone for where Google’s ad strategy is heading for the rest of the year.
First spotted. Kobzová shared the update on PPC News Feed.
AI tools now generate 45 billion monthly sessions worldwide — about 56% of search engine volume, according to a study by Graphite.io CEO Ethan Smith.
The analysis combines web traffic and mobile app usage across major AI tools and estimates AI activity equals 56% of global search usage and 34% in the U.S.
Much of this growth is occurring in mobile apps such as ChatGPT, Gemini, Perplexity, Grok, and Claude.
Why we care. AI is expanding discovery, not shrinking search demand. Total usage across search engines and AI assistants has grown 26% globally since 2023. In other words, it’s not SEO vs. GEO — you need both LLM visibility and traditional rankings.
The details. The report analyzed usage across the five largest LLM products — ChatGPT, Gemini, Perplexity, Grok, and Claude — and compared them with the six largest search engines. Key findings:
AI platforms generate 45 billion monthly sessions worldwide.
In the U.S., AI accounts for 5.4 billion monthly sessions.
83% of global AI usage occurs inside mobile apps (75% in the U.S.).
ChatGPT dominates AI usage, representing 89% of global AI sessions.
When isolating search-like prompts (“asking”), AI usage equals 28% of search worldwide and 17% in the U.S.
The report excludes prompts categorized as “doing” or “expressing.” According to OpenAI research, about 52% of prompts are information-seeking, the closest equivalent to traditional search queries.
Between the lines. Most projections comparing AI to search use web traffic alone, typically comparing Google.com visits with ChatGPT website traffic. That misses most AI usage.
The analysis argues these comparisons underestimate AI activity by 4–5x because most usage occurs in mobile apps.
It also includes multiple LLMs and multiple search engines rather than comparing only Google and ChatGPT.
What to watch. Google still dominates discovery, but its share of search-related activity fell from 89% in 2023 to 71% in Q4 2025, the report estimates.
Global AI usage appears to have plateaued since July 2025, while U.S. usage continues to grow rapidly — up roughly 300% year over year by December 2025.
Like many people, you’re worried about losing your job to AI.
Where do your “old school” PPC skills fit as AI agents take over more of the work?
Relax. It’s not that binary. The focus is shifting toward data and strategy.
From the outside, it looks like media buying is being automated away. But let’s set the record straight: it isn’t. The role is shifting (again).
I’ve been working in PPC for over 15 years, and there’s nothing to be afraid of. The real question is: are you riding the wave or being left behind?
Let’s map the current PPC landscape: ad network automation and, most importantly, where PPC teams create value today — the critical skill sets and team structure required to compete.
The return of the technical PPC team
A decade ago, technical PPC agencies differentiated through developing scripts, handling data at scale, and managing complex structures. Then automation matured. Everybody started leveraging Performance Max or Advantage+ campaigns because they’re much easier to set up and run.
As a result, many teams shifted toward strategy and creative.
With AI, though, it’s easier than ever to produce good-enough creatives or analyze massive datasets and output what looks like a good strategy. Now don’t get me wrong, those outputs won’t be perfect but:
It’s free (sort of) and fast.
The quality level isn’t bad at all (not great either).
From a client perspective, this means the average creative-focused or strategy-nerd agency is out of the game. Those teams need skills AI can’t replace.
So rejoice, PPC people: the technical edge is back. It has morphed into something different for sure. But it’s time to bring back the spreadsheet junkies from the 2010s. They’re the right ones to drive PPC again.
Doubting that? Let’s rewind a little bit and look at the necessary skill set.
The PPC edge: From spreadsheet skills to data nerds
What successful PPC agencies now sell is dramatically different than a decade ago. But the same core mindset resurfaced.
Why?
Let’s look at the core performance drivers these days:
Integrating down-funnel data into strategy.
Building a data infrastructure to support said strategy.
Feeding the right signals to ad algorithms.
Building systems to operate at scale, including creatives.
See the pattern? You can’t prompt your way out of a broken data model. This is where your edge remains and what clients value.
The good news is that automation increases the value of technical literacy. It doesn’t reduce it.
Who do you call to handle technical literacy? The old PPC marketers. The ones who loved manipulating paid search ads using custom Excel macros they built, or managing hundreds of thousands of product feed items. They have the right mindset: they love automation, data, and math — and they love PPC.
So who should be on your team, whether in-house or agency-side? Here are four essential roles. No single person can cover the entire scope — you need a team.
1. Data engineer
This role basically builds and maintains the infrastructure. Although located after the tracking specialist in the data supply chain, it’s the most central role. That’s why it comes first.
We operate in a complex, multi-platform world: think CRM integration with Google Ads. Or merging online and offline datasets to map the customer journey and drive strategy.
Without a complete data model, your strategy becomes a vague gut feeling that often needs a reality check. The role of the data engineer is to lay the foundation to avoid this situation whenever possible.
Conversely, without this role on your team, you’ll perform repetitive manual exports, get inconsistent numbers across teams, and end up with slow decision cycles.
What is the data engineer’s scope?
Building a data infrastructure basically follows an ETL process: extract data, manipulate it, and make it usable in a reporting tool (think Looker Studio, Power BI, or Tableau).
Here are a few tasks that illustrate that overarching goal:
Build data pipelines from ad platforms, analytics or CRM tools to the data warehouse (to get spend, revenue and other data into the warehouse).
Structure tables for those sources and “join” (merge) them to answer specific use cases.
Maintain those datasets and create automated QAs, including refresh schedules.
What skill sets and tools does the data engineer use?
Generally speaking, since we live in a Google-first world, we hear a lot about BigQuery, Google’s data warehousing solution. There are other solutions, such as Microsoft Azure. However, the main skill set you’re looking for is coding — more specifically, SQL and Python.
The goal here is to use those languages to structure tables within the data warehouse (using SQL) and create data pipelines (using Python).
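As a concrete sketch of that ETL pattern, the toy example below uses Python’s built-in sqlite3 as a stand-in for a warehouse like BigQuery. The table names, columns, and join key are illustrative assumptions, not a real schema; in production, the pipelines would pull from ad platform and CRM APIs instead of inline inserts.

```python
import sqlite3

# In-memory database stands in for the warehouse (e.g., BigQuery).
conn = sqlite3.connect(":memory:")

# Extract/load: two hypothetical source tables -- ad spend from the
# ad platform, revenue from the CRM -- keyed by campaign_id.
conn.executescript("""
CREATE TABLE ad_spend (campaign_id TEXT, spend REAL);
CREATE TABLE crm_revenue (campaign_id TEXT, revenue REAL);
INSERT INTO ad_spend VALUES ('brand', 1000.0), ('generic', 5000.0);
INSERT INTO crm_revenue VALUES ('brand', 4000.0), ('generic', 9000.0);
""")

# Transform: join the sources and compute ROAS per campaign --
# the kind of table a reporting tool like Looker Studio would read.
rows = conn.execute("""
SELECT s.campaign_id,
       s.spend,
       r.revenue,
       ROUND(r.revenue / s.spend, 2) AS roas
FROM ad_spend s
JOIN crm_revenue r ON r.campaign_id = s.campaign_id
ORDER BY s.campaign_id
""").fetchall()

for row in rows:
    print(row)
```

The point of the sketch is the shape of the work, not the tooling: structure tables per source, join them on a shared key, and expose one consistent answer to every team.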
2. Tracking and measurement architect
Some people consider this to be the same role as data engineers. I strongly disagree.
To me, this role’s sole focus is to protect signal quality. It’s the one person who faces very tight deadlines when things go wrong: you can’t afford to lose conversion data for more than a couple of days. And it’s not retroactive: when tracking is down, conversions are lost forever.
Ad platforms’ performance stands on the shoulders of conversion data. If you don’t get enough of those quality events, you’ll be at a serious competitive disadvantage.
You typically notice this when CPAs fluctuate without explanation or when your in-platform data varies drastically from your “source of truth” (GA, CRM and other systems). Tracking and measurement architects stabilize bidding, increase event match quality and get more data into Google Ads.
What is the tracking architect’s scope?
They design data collection mechanisms that are both complete and regulation-compliant (hello, GDPR):
Align tracking with privacy compliance.
Design client- and server-side tracking.
Implement GTM and server containers.
Co-manage Conversions API integrations with the data engineer.
Co-ensure deduplication logic with the media buyer.
What skill sets and tools does the tracking architect use?
Although most PPCs have dabbled with Google Tag Manager, very few have actually set up server-side tagging infrastructure. That’s an easy way to distinguish “regular” PPCs from tracking specialists. However, they should also be comfortable with Consent Mode frameworks, CAPI, and related tools.
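The deduplication logic the tracking architect co-owns can be sketched in a few lines. This hypothetical example assumes the shared event-ID convention commonly used in Conversions API setups, where the browser pixel and the server both report the same conversion; the event structure here is illustrative.

```python
def deduplicate(events):
    """Keep one copy of each conversion reported by both the browser
    pixel and the server (CAPI), matching on a shared event_id.
    The server-side copy wins because it is less prone to blocking."""
    best = {}
    for e in events:
        key = e["event_id"]
        # Prefer the server-side copy of a duplicated event.
        if key not in best or e["source"] == "server":
            best[key] = e
    return list(best.values())

events = [
    {"event_id": "ord-1001", "source": "browser", "value": 59.0},
    {"event_id": "ord-1001", "source": "server", "value": 59.0},   # duplicate
    {"event_id": "ord-1002", "source": "server", "value": 120.0},  # server only
]

deduped = deduplicate(events)
print(len(deduped))  # 2 unique conversions
```

Without this kind of matching, the same purchase gets counted twice, which inflates in-platform conversions and skews smart bidding.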
3. Data analyst
If the data engineer builds the pipes and the tracking architect protects the signal, the data analyst decides what the data means.
It’s the role most impacted by AI. Granted, you can do a lot with AI, but don’t underestimate how impactful a great data analyst is.
The wrong interpretation can waste millions of dollars in the blink of an eye. Fully replacing data analysts with AI would be a gross mistake.
For example, ROAS in Google Ads doesn’t equal contribution margin. Meta Ads CPA doesn’t equal customer lifetime value.
Without a strong data analyst, you risk misinterpreting data and going down the wrong rabbit hole. Think cutting campaigns that look inefficient short-term but drive long-term value. Or reporting different “truths” to marketing and finance — you don’t want that.
What is the data analyst’s scope?
People outside the field think they build Power BI or Looker Studio dashboards. That’s just the tip of the iceberg. Data analysts also:
Design data models aligned with business KPIs (this step kind of overlaps with data engineers at times).
Run analysis — think cohort performance, churn rates, profitability, and diminishing returns.
Challenge platform narratives.
What skill sets and tools does the data analyst use?
I tend to think of data analysts like translators: you can speak another language somewhat fluently, but that doesn’t make you qualified to interpret at scale. Same with data analysts: you may understand numbers to an extent, but you probably still need an analyst.
SQL literacy is often required to query the warehouse directly. Spreadsheet modeling also remains critical for scenario planning. The key skill is statistical reasoning. Understanding sample size, variance, and bias prevents false conclusions.
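To make “statistical reasoning” concrete, here is a minimal sketch using a standard pooled two-proportion z-test; the conversion counts are illustrative. It shows why sample size matters: the same apparent lift that is noise at one scale is significant at another.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# A "winning" variant on a small sample: 12/400 vs. 18/400 looks like
# a 50% relative lift, but the z-score falls short of 1.96, the usual
# threshold for 95% confidence.
z_small = two_proportion_z(12, 400, 18, 400)

# The same rates at 10x the sample size clear the threshold easily.
z_large = two_proportion_z(120, 4000, 180, 4000)

print(round(z_small, 2), round(z_large, 2))
```

An analyst who runs this check before declaring a winner avoids shipping changes based on noise, which is exactly the “false conclusions” risk named above.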
4. CRO and experimentation lead
Once all that data is clean, available, and analyzed, CROs leverage it to improve the economics of every visitor. Improving conversion rate, lead quality, and the overall customer journey creates a compound effect.
The simple way to prove a CRO’s worth: a landing page that converts at 1.5% instead of 3% means your CPA has doubled. Nobody wants that. You want to scale efficiently, not push more money into a leaky bucket, and that’s where CROs come in.
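The arithmetic behind that claim is worth making explicit. The spend and traffic figures below are illustrative:

```python
# CPA = spend / conversions, where conversions = visitors * conversion rate.
spend, visitors = 6_000.0, 10_000

cpa_at_3pct = spend / (visitors * 0.03)     # 300 conversions
cpa_at_1_5pct = spend / (visitors * 0.015)  # 150 conversions

print(cpa_at_3pct, cpa_at_1_5pct)  # 20.0 40.0
```

Spend and traffic are unchanged; halving the conversion rate alone doubles the cost per acquisition.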
From a PPC standpoint, CROs strengthen both performance (better conversion rate) and signal quality (more conversions), which helps smart bidding.
What is the CRO’s scope?
Contrary to common belief, CRO doesn’t (solely) mean landing page. This role operates across the full funnel:
Mapping the journey from impression to revenue.
Identifying online friction points using heat maps and session recordings.
Structuring testing roadmaps instead of random experiments.
Collaborating with creative and product teams on offer positioning.
What skill sets and tools does the CRO lead use?
The entry stack I see most often is GA4 and a heatmap tool such as Hotjar. However, it can get much pricier with tools such as ContentSquare. The stack scales depending on the client’s needs and budget.
The skills that matter most are:
Just like data analysts, a deep understanding of math and statistical reasoning (think pre-calculated sample sizes).
A structured mindset, clear hypotheses, and business-level success metrics.
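As a sketch of what “pre-calculated sample sizes” means in practice, the snippet below uses the standard two-proportion approximation at 95% confidence and 80% power; the baseline rate and target lift are illustrative assumptions.

```python
import math

def sample_size_per_variant(baseline_cvr, relative_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative
    lift over a baseline conversion rate (95% confidence, 80% power),
    using the standard two-proportion approximation."""
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    delta = p2 - p1
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# Detecting a 10% relative lift on a 3% baseline takes tens of
# thousands of visitors per variant -- far more than many teams assume.
print(sample_size_per_variant(0.03, 0.10))
```

Running this before launching a test tells you whether your traffic can ever reach significance; if it can’t, the experiment shouldn’t be on the roadmap in that form.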
The modern PPC team looks less like media buyers and more like a hybrid between marketing, data, and product. The advantage goes to teams that structure these capabilities deliberately.
Winning PPC teams are the ones who understand algorithms, but more importantly, the data and economics behind them. If your team masters infrastructure, signal design, analysis, and experimentation, AI becomes leverage. If not, it becomes a liability.
OpenAI has begun testing ads in ChatGPT for a limited set of U.S. users, with placements clearly labeled as sponsored. The platform’s internal economics suggest it’ll be available to everyone sooner rather than later.
When it does, advertisers will have access to a rare new channel for demand capture. But advertisers should enter this space with their eyes wide open.
For ChatGPT advertising to be successful, consumer behavior will need to change. And even if it does, ChatGPT won’t expand the advertising market. It’ll redistribute it.
Why ChatGPT is moving into ads
The fact that ads have arrived on ChatGPT should come as no surprise. By some estimates, a large language model (LLM) query costs 10 times as much as a traditional search query. With 2.5 billion prompts every day, ChatGPT’s expenses add up quickly.
What’s different isn’t the business model shift itself. It’s the data environment. Users have spent years feeding personal information, questions, and ideas into ChatGPT. In many ways, the platform knows more about its users than any comparable advertising tool. The big question now is how ChatGPT will harness this data to target users.
Advertising historically relied on generating demand: repeating a message enough times that buyers eventually acted. Search changed that by meeting buyers at the moment of intent.
ChatGPT has the potential to follow the search model, but with more context. It’s easy to envision a scenario where someone asks which security camera will work with their existing system. The platform already knows everything about the user’s security system, so it delivers the correct answer and a link to purchase.
When this happens, ChatGPT will be the first new demand-capture channel to emerge since Google launched pay-per-click ads roughly a quarter century ago.
But right now, there are a few significant barriers preventing this from happening.
For starters, most current AI queries lack purchase intent. Instead, they’re mostly informational: lists of Super Bowl halftime performers, storm-preparation tips, and workout routines. Compare that with existing platforms like Amazon and Google, which have spent decades training users to search with intent.
Even when users do shop through AI, there’s an attribution problem: consumers often use ChatGPT for research, then complete the purchase on Amazon, Google, or directly on a brand site. That breaks clean conversion tracking and makes “proof” harder than “impact.”
These challenges aren’t impossible to overcome. Google went through the same process early on as it transitioned from a homework tool to a shopping platform. But it took time.
ChatGPT will also need time to train consumers to use AI for shopping. So expect to see ChatGPT begin running commercials designed to train consumers to move from research queries to purchase-oriented ones.
While the possibility of a genuinely new demand-capture advertising platform is undeniably exciting, be realistic about its true potential.
AI can do many things exceptionally well, but it won’t expand the advertising pie. ChatGPT ads won’t suddenly introduce a surge of new consumers into the market. Ecommerce purchases will continue to grow at the same rate regardless of which new advertising platforms come online.
Instead, ChatGPT will capture a portion of the existing advertising share from Google, Meta, and Amazon. Consequently, advertiser budgets will likely shift rather than grow significantly.
ChatGPT’s largest competitors won’t give up market share without a fight. Google, in particular, has its own AI platform, Gemini, and an existing group of active advertisers it can draw from. These are powerful competitive headwinds for ChatGPT, which is recruiting its first group of advertisers from scratch.
Competition will be fierce among AI platforms as they race to reach profitability, and market consolidation seems inevitable. But even in that environment, ChatGPT has an opportunity to do something other platforms can’t.
The differentiator: Hyper-personalization
AI queries already lean heavily toward information gathering. Users employ these tools to help them plan everything from vacations to workout routines to tough conversations with their bosses. Taken together, AI platforms can learn more about individual users’ tastes and preferences than any other tool.
This capability unlocks hyper-personalization at scale.
Knowing everything that it does, AI can return perfectly tailored results with a one-click purchase option. Google and Amazon can’t match this capability because they still rely on users searching for particular specs, product names, or model numbers to deliver results.
There’s risk here. Hyper-personalization can feel invasive.
Some users will opt out entirely, just as some consumers avoid always-on devices in their homes. Meta ran into this dynamic years ago as public backlash forced changes in targeting and data practices.
This is where the distinction between demand capture and demand generation matters. Demand capture advertising generally feels less intrusive because it’s tied to a user’s explicit request. Most consumers will appreciate getting exactly what they ask for when they want it. But they’ll likely revolt if highly personalized and unsolicited ads start following them around the web.
If AI platforms can maintain that boundary, the convenience of hyper-personalization will ultimately win out for most users.
While OpenAI has already begun reaching out to select advertisers, it could be a year before we begin seeing widespread advertising on ChatGPT or other AI platforms. However, you should be prepared to move whenever that moment arrives.
So watch for official communications from OpenAI about ChatGPT advertising and, when possible, sign up for platform notifications.
In the meantime, you can make these few practical moves:
Align internally on measurement expectations: If the channel starts as research-heavy, last-click ROAS may understate performance. Build room for assisted conversions and incrementality.
Pressure-test mobile UX and checkout friction: Demand capture punishes slow experiences. If AI shortens the path to purchase, your site has to close quickly.
Plan conservative early tests: Being an early adopter carries risk (immature controls, evolving placements), but it also creates an edge: faster learning on a genuinely new demand-capture surface.
New demand-capture channels don’t come along often. ChatGPT advertising could become one of them, but the winners won’t be the brands that rush in blindly. They’ll be the ones who enter with a clear thesis, realistic measurement, and a strategy built around trust.
SEO professionals don’t agree on much. But over the past decade, we’ve come together around the conviction that Google has abused its dominant position, that it systematically favors its own products over better alternatives, and that something must be done to create fairer competition in search.
In 2022, the European Union passed the Digital Markets Act (DMA), a sweeping regulation designed to curb the power of tech giants. It came into force in March 2024.
Industry groups celebrated. Trade publications ran optimistic headlines about a new era of digital fairness.
In 2024, I wrote that it was “a much-needed piece of legislation.” Two years in, the evidence is clear: The DMA will do more harm than good.
Well-documented abuses
The Digital Markets Act arose from understandable frustrations with well-documented abuses.
Google spent years ranking its own shopping service at the top of search results while systematically burying competitors like Foundem and Kelkoo on page four, where nobody would ever find them.
The company’s internal documents, uncovered by EU investigators, revealed that Google Shopping “simply doesn’t work” on its merits, so Google gave it an algorithmic boost unavailable to anyone else.
The travel industry watched as Google Flights consumed the market share of innovative startups like Hipmunk, which had offered genuinely better user experiences by showing total trip costs, including baggage fees and connections.
Hoteliers saw Google Hotels siphon away direct bookings. Local businesses watched as Google prioritized its local pack over organic results.
The pattern was unmistakable: Google identified lucrative verticals, launched competing products, then used its search monopoly to guarantee their success.
These weren’t competitive advantages but unfair tactics, and the EU was right to identify them as such. It took over 10 years to fine Google €2.4 billion for the shopping search abuse alone. The DMA was supposed to fix this by setting clear rules upfront, forcing gatekeepers to treat all services equally before abuses could take root.
For those of us who had watched clients lose traffic to Google’s vertical search engines despite having superior content, the promise was intoxicating: Finally, algorithmic neutrality. Finally, fair competition based on content quality rather than corporate ownership. Finally, a chance for the next generation of search-dependent businesses to compete.
Yet, two years into implementation, the reality looks nothing like the promise. The most comprehensive assessment comes from Nextrade Group, which surveyed 5,000 European consumers across twenty member states in mid-2025.
The findings?
Two-thirds of respondents reported needing more clicks or more complex search queries to find what they need online. Among frequent searchers, precisely the users most valuable to our clients, 61% said searches now take up to 50% longer than before the DMA.
Forty-two percent of frequent travelers reported that flight and hotel searches had worsened significantly. More than 40% said they would actually pay to restore the functionality they had before March 2024.
When users are willing to pay for something they previously received for free, regulation has failed catastrophically.
Eighty percent had never heard of the DMA; it solved problems they didn’t know existed. Yet 39% reported that routine online tasks had become more cumbersome since early 2024.
Why does it matter?
As SEO professionals, we must confront this truth: Users preferred the integrated Google experience we spent years complaining about.
Before the DMA, searching for “hotels in Paris” displayed an interactive map with photos, ratings, real-time availability, and prices — all accessible without leaving the search results page.
That integration has been dismantled because Google Search and Google Maps are designated as separate core platform services, and their seamless cooperation constitutes prohibited self-preferencing.
Users must now click through to separate services, repeat their searches, and lose context. Regulators call this fair competition. Users call it a worse internet.
The business impact: Worse metrics across the board
Google still processes over 90% of European search queries. The difference is that the search experience now delivers measurably worse results for users and measurably worse outcomes for the businesses paying for visibility.
The enforcement problem: Fines don’t work
The DMA requires Google to treat competing vertical search services (flight comparison sites, hotel booking engines, shopping aggregators) with the same prominence as its own offerings.
In response, Google tested a version of its hotel search that removed maps, removed structured listings with photos and availability, and displayed only 10 blue links. Users hated it.
Hotel traffic cratered. Google documented the catastrophic user satisfaction scores and presented them to the Commission as evidence that integration serves user needs, not just Google’s interests.
The Commission found itself in an impossible position: Force Google to maintain the worst experience in the name of fairness, or acknowledge that some integrations genuinely benefit users even when they advantage Google’s products.
Google responded to preliminary findings of non-compliance by making incremental adjustments that preserve the substance of its advantage, while creating just enough ambiguity about whether it’s following the rules.
When the Commission objects to one implementation, Google proposes another that differs in form but not effect. This process can continue indefinitely because the underlying problem, Google’s monopoly in search, remains untouched.
For a company with annual revenues exceeding $300 billion, regulatory fines are simply a cost of doing business. The Commission fined Google €2.4 billion for abusing its dominance in shopping search. The company paid and continued operating largely as before. It will do the same with DMA fines.
The uncomfortable reality is that you can’t regulate a monopoly into behaving competitively. You can only break the monopoly itself.
The European Commission must monitor 23 core platform services across seven gatekeepers, while each company releases updates continuously:
Algorithms change daily
Features launch weekly
Product roadmaps evolve quarterly
By the time the Commission identifies a potential violation, conducts workshops with stakeholders, issues preliminary findings, allows the company to respond, and publishes a final decision (a process taking 12-18 months), the underlying technology and business models have moved on.
Google launched AI Overviews in Europe one week after receiving preliminary findings of non-compliance for self-preferencing in traditional search. The company essentially announced that, while regulators debate whether Google Flights should rank above Kayak, Google is moving to a fundamentally different search results page where AI-generated summaries replace links entirely.
The DMA contemplated regulating 2024’s search landscape. Google is already building 2027’s.
What should regulators do instead?
While I’m not a regulator, I have been doing SEO for 15 years. In my opinion, regulators should redouble efforts to address actual structural monopolies rather than impose rules on how platforms must operate.
The DMA tries to regulate platform behavior while leaving monopoly power intact. This is like trying to stop water from flowing downhill by prescribing which route it must take. The water will find another path, and everyone gets wet in the process.
If Google’s dominance in search truly stifles competition, perhaps the solution isn’t to regulate how it displays results but to break its monopoly altogether. The United States has considered requiring Google to divest Chrome; such structural remedies might succeed where behavioral rules have failed.
If the concern is that Google leverages search dominance to advantage its advertising business, separate the two.
If the worry is that controlling both the search algorithm and the content (YouTube, Google News, Google Shopping) creates irresolvable conflicts of interest, then require structural separation.
These actions would be slower, more legally complex, and more politically difficult than passing the DMA. They would also actually work.
In short, regulators should focus on creating conditions for competition rather than micromanaging every product decision. That means enabling genuine data portability so users can switch services easily, taking their search history and preferences with them.
This also means using traditional antitrust enforcement aggressively for the largest abuses, like Google systematically burying competitors on page four, exclusive deals that lock out rivals, and acquisitions designed to eliminate nascent threats.
The geopolitical reality
The DMA’s first two years have demonstrated that ex-ante rules are no faster — investigations still take 12-18 months — and far less effective than traditional enforcement.
The geopolitical consequences threaten to undermine European interests far beyond digital markets. In December 2025, the Trump administration threatened retaliation against the EU for what it characterized as discriminatory targeting of American technology companies. The Office of the United States Trade Representative explicitly named European companies, including Spotify, Siemens, SAP and DHL, as potential targets for new restrictions.
From Washington’s perspective, the DMA looks less like competition policy and more like industrial policy disguised as regulation.
Whether that characterization is fair matters less than the political reality: Brussels finds itself caught between domestic pressure to demonstrate tough enforcement and external pressure that threatens broader trade relationships.
The DMA promised to enable the next generation of search-dependent businesses. It promised to stop Google from using its search monopoly to advantage its vertical products. It promised fairer competition for hotels, airlines, ecommerce sites, and the entire ecosystem of businesses that depend on organic search traffic.
Two years in, Google’s monopoly remains intact, user experience has measurably degraded, business metrics have worsened, and no meaningful new competition has emerged.
For those of us who spent years documenting Google’s abuses and advocating for intervention, this failure is spectacular.
If regulators can’t find ways to break up long-standing monopolies (now over two decades old for some platforms), what hope is there to address emerging challenges in AI search, voice search, or whatever comes next?
Young companies have a right to compete in digital markets. Regulators must create conditions where genuine competition is possible, not regulate away the symptoms of monopoly while leaving its foundations untouched.
We were right about the problem. The DMA is simply the wrong solution.
If your organic traffic is down but impressions are up, AI is likely citing your content without sending clicks. If both are down, you’re being ignored. Either way, the search behavior your marketing strategy was built on has changed, and waiting for traffic to rebound isn’t a strategy.
This is the reality you’re facing in 2026. According to KEO Marketing:
73% of B2B websites saw significant traffic losses between 2024 and 2025, with an average 34% year-over-year decline.
The impact isn’t evenly distributed. If your content is primarily informational, you’ve likely been hit harder, with some sectors seeing organic traffic drop 15% to 64% since AI Overviews launched.
News publishers are especially exposed, with Google referrals down 33% globally in the 12 months ending November 2025.
These aren’t normal fluctuations. They reflect a structural shift in how people find information online, one that undermines the foundation of business models built on website traffic.
What is driving the shift in organic discovery?
Organic clicks are declining for two overlapping reasons. You need to understand both because each requires a different response:
Google has engineered zero-click behavior for years through featured snippets and knowledge panels. These SERP features answer queries directly on the results page, so you don’t need to click through to get an answer. Ten years ago, about 25% of searches ended without a click. Today, it’s more than 65%. AI Overviews — now appearing in ~16% of desktop searches and ~41% of mobile searches — have dramatically accelerated this trend.
A growing share of users is bypassing traditional search entirely. Nearly 52% of U.S. adults now use AI tools regularly, and about 28% of employed Americans use AI at work. When someone asks ChatGPT or another LLM a question, they usually get an answer without visiting any website. Your content may inform that answer, but you get no traffic and no attribution.
What metrics should I consider when measuring AEO?
Traditional content marketing KPIs (impressions, clicks, CTR, sessions, bounce rate, and page views) no longer show you how discoverable your brand is. They measure behavior on your site, not how you perform in AI answers that now intercept much of your traffic upstream.
Five metrics matter most for AI visibility:
Citations in AI responses measure how often your owned content is directly cited when an LLM answers a query. A citation signals three things: your content is relevant, it’s structured so LLMs can parse and retrieve it efficiently, and your domain has enough authority to be trusted.
Brand mentions are different from citations. LLMs often mention brands without citing owned content, pulling from review sites, forums, third-party articles, and competitor content. A mention without a citation means the broader web is talking about you, but your content isn’t the source. That distinction helps you decide where to invest.
Share of voice compares your citation and mention frequency against competitors across a defined set of category-relevant prompts.
Brand sentiment tracks whether AI responses frame you favorably, neutrally, or negatively.
AI-influenced traffic measures how much of your traffic comes from LLM referrals. Early data suggests this traffic converts at three to five times the rate of other sources, making it worth tracking even at low volume.
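As a rough illustration of measuring AI-influenced traffic, the sketch below classifies session referrers against a list of domains associated with LLM chat products. The domain list and the sample sessions are assumptions for illustration, not an exhaustive or authoritative registry — maintain your own list against your analytics data.

```python
from urllib.parse import urlparse

# Referrer domains commonly associated with LLM chat products.
# Illustrative only -- verify and extend against your own logs.
LLM_REFERRERS = {
    "chat.openai.com",
    "chatgpt.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def classify_referrer(referrer_url: str) -> str:
    """Label a session's referrer as 'llm', 'other', or 'direct'."""
    if not referrer_url:
        return "direct"
    host = urlparse(referrer_url).netloc.lower()
    return "llm" if host in LLM_REFERRERS else "other"

def llm_traffic_share(referrers: list[str]) -> float:
    """Fraction of sessions referred by an LLM product."""
    if not referrers:
        return 0.0
    hits = sum(1 for r in referrers if classify_referrer(r) == "llm")
    return hits / len(referrers)

# Hypothetical referrer strings pulled from session data.
sessions = [
    "https://chatgpt.com/",
    "https://www.google.com/search?q=example",
    "",
    "https://perplexity.ai/search/abc",
]
print(f"LLM share: {llm_traffic_share(sessions):.0%}")
```

Even a crude classifier like this gives you a baseline trend line for LLM referrals, which matters more than precision given the volumes involved today.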
Several tools now let you track these metrics at scale without manually prompting LLMs. They’re worth exploring.
But even a simple benchmark — prompting major LLMs with your target queries and tracking where and how you appear — is better than not measuring at all.
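A minimal version of that benchmark can be scripted. The sketch below takes LLM answer texts you have already collected for your target prompts and tallies citations (your domain appears), bare mentions (your brand is named without your domain), and share of voice against competitors, as defined above. The brand names, domain, and sample answers are hypothetical.

```python
def score_answers(answers, brand, domain, competitors):
    """Tally citations (our domain linked), bare mentions (brand named
    without our domain), and share of voice vs. competitor mentions."""
    citations = mentions = rival = 0
    for text in answers:
        t = text.lower()
        cited = domain in t
        named = brand.lower() in t
        citations += cited
        mentions += named and not cited
        rival += any(c.lower() in t for c in competitors)
    ours = citations + mentions
    total = ours + rival
    share = ours / total if total else 0.0
    return {"citations": citations, "mentions": mentions,
            "share_of_voice": round(share, 2)}

# Hypothetical answers captured from LLMs for category prompts.
answers = [
    "Acme (see acme.example/guide) is a popular option for this.",
    "Many teams use Acme or RivalCo for this workflow.",
    "RivalCo is often recommended here.",
]
print(score_answers(answers, "Acme", "acme.example", ["RivalCo"]))
```

Simple substring matching will miscount edge cases, but run weekly against a fixed prompt set, it is enough to show whether your visibility is trending up or down.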
How should I optimize my content for AEO?
Winning visibility in AI search doesn’t require an entirely new content playbook. But it requires retiring practices that no longer work and doubling down on principles that matter more than ever.
E-E-A-T remains the foundation
Experience, Expertise, Authoritativeness, and Trustworthiness were dominant signals in Google SEO before AI Overviews, and they remain dominant in AEO. LLMs prioritize sources that show real expertise and are trusted by other authoritative sources.
If you earn citations from credible sites, publish content written by clear subject matter experts, and cover topics with depth and specificity, you’ll consistently outperform content that doesn’t — regardless of how well it’s optimized for other factors.
Structure and clarity have become non-negotiable
LLMs retrieve content by identifying passages that directly answer questions. If you organize content around clear questions and direct answers, use structured bullet summaries, and avoid dense paragraphs, you’re more retrievable than if you bury answers in narrative prose.
This means making your information architecture legible to both human readers and LLM retrieval systems. Adding a Q&A section to existing content — or restructuring posts around clear question-and-answer pairs — is one of the highest-leverage updates you can make right now.
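One concrete way to expose those question-and-answer pairs to machines as well as readers is schema.org FAQPage markup. The sketch below builds that JSON-LD in Python; the questions and answers are placeholders to swap for your own content.

```python
import json

# Placeholder Q&A pairs -- swap in the real questions your page answers.
qa_pairs = [
    ("What is answer engine optimization (AEO)?",
     "AEO is the practice of structuring content so AI answer engines "
     "can retrieve and cite it."),
    ("How is AEO different from SEO?",
     "SEO targets ranked links; AEO targets being quoted or cited "
     "inside an AI-generated answer."),
]

# schema.org FAQPage structure: each pair becomes a Question entity
# with an acceptedAnswer of type Answer.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in qa_pairs
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

Structured markup is no guarantee of retrieval, but it makes the question-answer pairing explicit for any system that parses the page.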
Human-written, human-led content has a measurable advantage
After Google’s latest core update, mass-produced AI content saw an 87% drop in rankings and citation frequency, and keyword-optimized content fell 63%. LLMs are getting better at detecting AI writing patterns and deprioritizing that content.
The pressure you felt in 2025 to produce volume with AI created a quality problem that’s now visible in performance data. The strongest strategy is quality over quantity. If you use AI, use it to draft and edit—not to generate final content. Add a review step to flag generic phrasing or a synthetic tone, whether through AI-detection tools or human editors.
Recency matters for AI citation
Answer engines look at publication and update dates when choosing sources. A well-structured, authoritative piece from 2022 can be overlooked in favor of an updated version from 2025.
Audit your high-traffic pages and hero assets for outdated content, and refresh them with current data and examples. It’s a quick win many teams miss.
Pitchy language will not get cited
If your content reads as promotional — leading with product claims and brand-forward language — answer engines will often deprioritize it in favor of more objective sources.
That doesn’t mean you can’t mention your product or brand. It means you should write about it the way a neutral third party would: acknowledge tradeoffs, provide context, and let the facts make the case. Listicles and comparison articles work especially well here.
AI systems respond to structured, objective comparisons—even when one option is clearly favored.
Outside of my owned channels, what content performs well in AEO?
One clear pattern in how LLMs decide which brands to mention: they look for consensus across multiple sources, not just your content. If you appear only on your own blog, you’ll lose to a brand with fewer owned assets but stronger third-party coverage.
That makes your external content ecosystem a strategic priority. Reviews on G2, Capterra, Google, and similar platforms are often used in AI training. User-generated content on Reddit and other forums is heavily indexed. Third-party articles, tutorials, YouTube videos, and newsletter mentions all build the multi-source consensus that gets you cited in AI answers.
Content partnerships deserve focused attention. When you sponsor articles or newsletter placements with relevant publications, you do two things: drive referral traffic outside search and earn trusted external citations that boost AI visibility. Newsletter readership is growing as audiences seek curated, human-authored content. YouTube citations are especially strong and increasing, and ChatGPT shows a documented preference for citing authoritative video creators.
The goal isn’t to manufacture mentions. It’s to tell a consistent story about your brand across credible external sources so LLMs encounter that story repeatedly. Consistency across partners, review platforms, and third-party content compounds your AI share of voice.
How do I build landing pages that convert better?
With organic traffic down 30% or more, the visitors who reach your site are more valuable and more intentional than in past years. That makes conversion optimization on key landing pages more important than ever.
The principle is simple: one offer, one message, minimal copy.
Each landing page should have a single call to action and a single argument. If you have multiple conversion goals, create multiple landing pages — not one page trying to do everything.
Your header should capture the full value proposition. Supporting points should be brief. A visitor should understand the offer and act without scrolling.
This differs from blog and thought leadership content, which should be detailed, well sourced, and structured for LLM retrieval. The two serve different purposes and require different standards. Conversion-focused landing pages aren’t the place for nuance or extended prose.
The takeaway
The traffic decline isn’t a temporary setback that will correct itself. Users are getting answers from AI instead of clicking through to websites, and that behavior will intensify. A content strategy built only around ranking for clicks is no longer enough.
What replaces it is a dual mandate: optimize to be cited by answer engines and build the external brand presence that gives LLMs reason to mention you consistently. These goals align with what you should’ve been doing all along — publishing clear, authoritative, well-structured content grounded in real expertise.
The brands that will win in AI-driven discovery are the ones doing the fundamentals well: building real credibility, earning trusted external mentions, and writing for readers instead of algorithms.
That was always the right approach. AI search has simply made it mandatory.
John Mueller from Google said you can block a complete TLD, top-level-domain, using the link disavow tool. He said it is not something Google documents because “Given how big of a hammer it is, I don’t know if it’s something we should really suggest in the docs.”
How it works. Add the syntax “domain:abc” to your disavow file, where “abc” is the TLD you want to block. John posted this on Bluesky, saying:
“If you’re sure that it’s what you want to do, you can use “domain:abc” in the disavow file. Keep in mind that you can’t carve out specific domains if you like some, but if you find the TLD is almost only annoying spammers, it’ll save you time.”
He later added:
“Given how big of a hammer it is, I don’t know if it’s something we should really suggest in the docs. I’m sure all TLDs have some good sites.”
Why we care. If one TLD is a consistent source of spam links for you, sure, you can go ahead and disavow the whole TLD. But it is usually better to be selective in how you use the disavow file rather than blocking entire TLDs wholesale.
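For illustration, a disavow file mixing the usual per-domain entries with this TLD-level syntax might look like the sketch below. The domains and the “.xyz” TLD are hypothetical examples, not a recommendation to disavow anything:

```
# Example disavow.txt -- lines starting with # are comments
# Individual spammy domains:
domain:spam-example-1.com
domain:spam-example-2.net
# Block an entire TLD (hypothetical .xyz example) -- use with caution,
# as this also disavows every legitimate site on that TLD:
domain:xyz
```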
Most people fail on Reddit because they write comments like ads. Reddit eats ads for breakfast. It’s better to follow a Reddit comment framework that has been proven thousands of times to get engagement and increase visibility and awareness.
The winning move? Be useful first, be human, then casually exist as a company.
Below are 10 proven comment frameworks we see working every single day for our clients. These aren’t scripts: they’re thinking patterns. Follow the structure, swap in your context, and your comments will feel native instead of needy.
1. The ‘been there done that’ comment
When to use it: Someone is struggling or asking how to do something you already solved.
Framework:
Start with personal experience.
Share the mistake you made.
Share what finally worked.
Optional soft mention at the end.
Example:
“I ran into this exact issue last year. We tried brute forcing it at first and wasted a ton of time. What finally worked was narrowing the scope and fixing one variable at a time. Once that clicked, everything sped up. We ended up building a small internal tool for it, but honestly the mindset shift mattered more than the tool itself.”
Why it works: You’re relatable first, helpful second, promotional last. Reddit rewards vulnerability over authority.
2. The counterintuitive insight
When to use it: A thread where everyone is repeating the same advice.
Framework:
Acknowledge the common advice
Gently challenge it
Explain why it fails
Offer a smarter alternative
Example:
“A lot of people say to just throw more money at ads here, but that actually made things worse for us early on. The real unlock was fixing the messaging before scaling anything. Once we did that, even small campaigns started working. That lesson ended up shaping how we approach this for clients now.”
Why it works: Reddit loves contrarian thinking when it’s earned through experience, not just hot takes.
When to use it: Someone is about to make a common and expensive mistake.
Framework:
Validate their plan first.
Warn them about one specific pitfall.
Explain exactly how to avoid it.
Light credibility hint without bragging.
Example:
“This can work, but one thing to watch out for is scaling too early. We made that mistake and burned a few months before realizing it. If I were doing it again, I would test manually first before automating anything. That lesson came from doing this across a lot of campaigns.”
Why it works: You sound like a guide who’s walked the path, not a salesman with an agenda.
5. The data point drop
When to use it: A discussion that’s heavy on opinions and light on facts.
Framework:
Drop one real, specific data point.
Explain what it changed for you.
No links unless someone asks.
Keep the number believable, not boastful.
Example:
“One interesting data point from our side: When we switched from generic responses to context-specific replies, engagement nearly doubled. Same audience, same platform, different framing. That small change ended up influencing how we now coach others to comment.”
Why it works: Reddit respects numbers when they’re not flexy. Specific beats vague every time.
When to use it: You want to add value without preaching or taking over the conversation.
Framework:
Answer their question briefly.
Ask a smarter follow-up question.
Let the thread continue naturally.
Don’t hijack the conversation.
Example:
“This usually comes down to timing more than tools. Out of curiosity, are you trying to solve this for growth or retention? The advice changes a lot depending on that.”
Why it works: You move the conversation forward instead of hijacking it. Shows you’re thinking strategically.
7. The ‘I disagree but respectfully’ comment
When to use it: You genuinely disagree with the top comment or popular opinion.
Framework:
Acknowledge their point has merit.
Explain your different experience.
Offer an alternative perspective.
Stay humble and curious.
Example:
“I get why this approach works for some teams. We actually saw the opposite result when we tried it. In our case, simplifying the workflow beat adding more features. Might depend on team size, but worth testing both approaches.”
Why it works: You avoid Reddit flame wars while still standing out from the echo chamber.
When to use it: Someone asks what tools or services to use.
Framework:
Mention multiple options first.
Explain when each makes sense.
Include yours as one of many choices.
Focus on fit, not superiority.
Example:
“There are a few ways to do this depending on your budget. Some people go fully manual, others use spreadsheets, and some use dedicated platforms. We landed on building our own because of volume, but for most people starting out, simplicity wins over features.”
Why it works: You don’t look biased even when you are involved. Builds trust through honesty.
When to use it: Someone asks if something is worth trying or worth the investment.
Framework:
List two to three things that worked
List one to two things that didn’t work
End with a grounded, practical takeaway
Keep it balanced and realistic
Example:
“What worked for us was consistency and context-awareness. What didn’t work was blasting the same message everywhere. The biggest lesson was that Reddit rewards effort more than polish. Once we leaned into that philosophy, results followed naturally.”
Why it works: Balanced honesty builds trust fast. Shows you’ve done the work and learned from failures.
10. The quiet authority comment
When to use it: You want to establish credibility without saying exactly who you are.
Framework:
Speak calmly and confidently
Avoid hype words and superlatives
Reference patterns, not individual wins
Let experience speak through your perspective
Example:
“We see this question come up a lot in our work. Usually, the issue isn’t the platform but how people enter the conversation. Threads that already have momentum respond very differently than empty ones. Adjusting for that context alone fixes most engagement issues.”
Why it works: You sound like someone who has seen this movie before. Authority through pattern recognition, not bragging.
How to subtly recommend your company without getting banned
Here’s the golden rule: Your company is context, not the point.
Good: “We ended up building this internally, which changed how we approach it now”
Bad: “Check out our product it does exactly this”
The magic happens in your profile. When your comment gets upvoted, people click through to see who you are. That’s where the real conversion happens: not in the comment itself.
The frameworks above work. But implementing them consistently is what turns Reddit from an occasional visibility win into a real growth channel.
Google’s AI Mode is increasingly citing Google itself — and often sending users back to another Google search, according to new SE Ranking research.
Why we care. AI search is meant to surface the best sources on the web. If Google increasingly cites itself, you may see fewer direct links and less traffic as more users stay inside Google.
The details. Google.com was the most cited source in AI Mode answers, accounting for 17.42% of all citations, SE Ranking found.
That makes Google.com the most referenced domain — more than the next six domains combined: YouTube, Facebook, Reddit, Amazon, Indeed, and Zillow.
Accelerating trend. In June 2025, Google cited itself in just 5.7% of AI Mode answers. That share has since more than tripled.
Nearly one in five AI citations now comes from Google. Including YouTube, Google-controlled properties account for roughly 20% of sources.
Self-preferencing on steroids. AI Overviews already link heavily to Google properties like Maps, Images, and YouTube. AI Mode appears to extend that approach by pushing users deeper into Google’s ecosystem, often through additional search results rather than external sites.
This keeps users interacting with Google surfaces where ads, reviews, and other monetized content appear.
What changed. Earlier AI Mode research showed Google mainly citing Google Business Profiles. That’s no longer the case:
59% of Google citations now point to traditional Google search results.
36.1% still reference Google Business Profiles.
Smaller shares link to Google Support (1.7%), Google Flights (0.1%), and other Google properties.
In many cases, AI Mode citations now show a mini search results panel beside the answer — effectively turning the citation into another search experience.
Industry differences. Google dominates citations across most topics. Some niches rely on Google even more:
Travel: 53.18% of citations
Entertainment & hobbies: 48.74% of citations
Real estate: 30.54% of citations
The only category where Google wasn’t the top source was Careers and Jobs, where Indeed appeared 3.1x more often than Google.
About the data. SE Ranking analyzed 68,313 keywords across 20 industries and more than 1.3 million AI Mode citations to measure how often Google.com appears as a cited source.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
Job Description Crusoe is on a mission to accelerate the abundance of energy and intelligence. As the only vertically integrated AI infrastructure company built from the ground up, we own and operate each layer of the stack — from electrons to tokens — to power the world’s most ambitious AI workloads. When you join Crusoe, […]
Job Description Direct Agents is on the search for an SEO Analyst in our NYC office, who will assess and develop SEO strategy for a variety of mid to large-sized clients, and act as SEO expert for both internal and external teams. Who We Are Direct Agents is not a traditional agency. We are an […]
OpenAI is backing away from putting checkout directly inside ChatGPT. Instead, purchases will shift to retailer apps that connect to ChatGPT, The Information reported.
Why we care. ChatGPT aims to be more than a discovery engine. Right now, though, product discovery inside ChatGPT is gaining traction faster than purchases. That suggests AI-powered shopping is only influencing the consideration stage (at least for now), not driving conversions.
What happened. OpenAI had planned to let shoppers buy products directly from listings in ChatGPT search results. Instead, an OpenAI spokesperson said that Instant Checkout is moving to Apps, where purchases happen inside connected services rather than natively in ChatGPT.
The company will now prioritize product search and discovery inside ChatGPT.
It will also keep working with Stripe on the Agentic Commerce Protocol to support app-based transactions.
What changed. OpenAI found that users research products in ChatGPT but don’t complete purchases there. Only a small number of merchants were actively using native ChatGPT checkout, according to the report.
In September, OpenAI positioned Instant Checkout as a big commerce opportunity. At the time, it said U.S. users could buy from Etsy sellers inside ChatGPT, with plans to expand to Shopify merchants, add multi-item carts, and roll out beyond the U.S.
Meanwhile. Shopify president Harley Finkelstein said this week that only about a dozen Shopify merchants were using AI tools, despite Shopify supporting integrations with ChatGPT, Gemini, and Copilot. That’s tiny relative to Shopify’s overall merchant base.
What to watch. Can OpenAI make ChatGPT more valuable as a shopping discovery engine without owning the final transaction? Also, how does OpenAI’s commerce strategy intersect with its advertising ambitions? If transactions stay outside ChatGPT, monetizing product discovery through ads could become even more important.
Why this is happening. Two forces are slowing agentic commerce, according to Leigh McKenzie, director of online visibility at Semrush: infrastructure and trust. Real-time catalog normalization across tens of millions of SKUs is a decade-scale problem Google already solved with Merchant Center, and consumers still default to checkout flows they trust — Apple Pay, Google Wallet, and Amazon one-click.
Google’s Liz Reid, VP and head of Search, drew a clearer line between Google Search and Gemini but said it’s still unclear whether the products will converge, diverge further, or be superseded.
The big picture. Reid said Search is an information product focused on helping people connect with the web, while Gemini is centered more on assisting with productivity and creation. She added that the boundaries are fluid, especially as AI products evolve quickly and agentic experiences reshape how people use the internet.
What she’s saying. In short, Reid said Search and Gemini share technology but have different product “north stars.” They could overlap more over time, but the long-term direction is still open. Here’s what she said in an interview on Access Podcast:
“I don’t know the answer is the short answer.”
“I think what we see is some areas they’re converging more and some areas they’re diverging more, right? And like and so what are they going to net out? Like do the areas that diverged eventually all come or do the areas that diverge become even bigger over time? I think we’ll see.”
“So I don’t know in in all honesty, but I think we are right now at a point where depending on what angle you look at, you’d think they’re getting closer or they’re getting further apart.”
“Who knows, maybe agents will mean like the right product is neither of the two of them is a third product altogether that they merge into. I don’t know yet.”
Gemini vs. Search. Here’s the distinction Reid made:
On Gemini: “Gemini’s focus is on sort of being this assistant and so it tends to lean in more heavily on things like productivity or creation, right?”
On Search: “Search is more information based and it believes that often in those information use cases you also want to connect and hear from other people. And so how do you bring out the web?”
Agents and the web’s future. Reid also said Google expects a future with more agent-to-agent internet activity, not just humans browsing directly.
“I certainly think the there will be a world in which sort of agents are doing a lot of interaction on the internet, not just people.”
“I do think probably means there’s a world in which a lot of agents are talking with each other, and not just with humans going forward as we evolve.”
Google vs. ChatGPT. Reid pushed back on the idea that AI is a simple winner-take-all battle between Google and ChatGPT.
“I don’t know, by the way, that we’re going to end up in a world where there’s only one product, right?”
“I think what we’re seeing is like simultaneously people are adopting more tools and search is growing, right? because the the possibility of the tech is just allowing many more questions.”
Trusted sources. Reid also said Google wants to do more to surface sources users trust or pay for.
“I think one thing Google is trying to do a lot more of and we’ve taken small steps so far but want to do more. How do you help when there is that relationship?”
“If you love this source and you do have a relationship with it then that content should surface more easily for you on Google.”
“We should surface the the one that they’re paying for and not the six that they can’t get access to more.”
Why we care. Reid’s comments suggest Google hasn’t settled on Search’s long-term role in an AI-first ecosystem. So keep watching closely as AI assistants, agents, and search results evolve.
Google is reaching out directly to advertisers via email, requiring them to confirm whether their campaigns contain EU political ads — with a hard deadline of March 31st.
Why we care. This isn’t optional. EU regulation now requires Google to verify political ad status across all active campaigns, and advertisers who don’t act before the deadline could face compliance issues.
What’s happening. Google is asking every advertiser to declare whether their existing campaigns include EU political ads. The requirement applies to all current campaigns and must be completed by March 31, 2026.
How to comply: Google has outlined three ways to submit the confirmation:
Campaign level — Go to Campaign Settings and select “EU political ads” to confirm individual campaigns.
Multiple campaigns — Go to the Campaigns tab and use the “EU political ads” option to confirm several at once.
Account level — Confirm for all new and existing campaigns in one go. Selecting “No” at account level automatically applies that answer to every campaign, including future ones. You can still override this for individual campaigns at any time.
Between the lines. The account-level option is the most efficient route for most advertisers who are confident none of their campaigns fall under the EU political ads definition. Google has made it straightforward to reverse or adjust the selection at any point, so there’s no risk in acting early.
The bottom line. Check your inbox — Google is contacting advertisers directly. If you run campaigns targeting EU audiences, log in and complete the confirmation before March 31st to stay compliant.
First seen. This update was spotted by paid search expert Arpan Banerjee, who shared details of the communication on LinkedIn.
Until a few years ago, schema helped search engines extract basic facts and display visual enhancements like star ratings and sitelinks.
However, in the AI-driven search world, schema plays a different and fundamental role for local SEO, helping Google and other AI systems understand who you are, what you do, where you operate, and how confidently your information can be reused.
Improving rankings is no longer the main point. Now, schema reduces ambiguity for Google and reinforces your business as a stable, trustworthy local entity across traditional search, local packs, AI Overviews, rich results, and external AI platforms.
Let’s dig into how schema helps local SEO in the AI search world.
How Google handles conflicting structured data
Google triangulates across multiple data points to understand a business and pull information into a search result:
On-page content.
Internal linking and site structure.
Google Business Profiles.
Citations and directories.
Reviews and reputation signals.
Schema markup.
When these signals align, Google’s confidence in your information increases. When they contradict each other, your correct information might not be pulled into search.
When structured data contradicts on-page content, Google Business Profile data, citations, or reviews, Google doesn’t attempt to reconcile the difference — it discounts the markup and often ignores the information altogether.
For example, consider a law firm that marks up:
Operating hours that differ from their GBP.
“Free consultation” in their schema, but not on the landing page.
Attorneys who are no longer listed on the “Our Team” page.
Each of these creates friction, leading to mixed signals for AI systems and search engines. One conflict may be ignored, but multiple conflicts can compound and result in lost search visibility for the whole site.
False positives occur when schema asserts something that isn’t fully supported by other signals.
Common examples include:
Marking a business as a medical provider without appropriate credentials.
Applying Person schema to non-professionals.
Using Product schema for services.
False positives are particularly damaging in AI-driven systems. AI models are conservative when confidence is low — if information appears inconsistent or exaggerated, it’s less likely to be reused or cited.
Review and rating schema
When review markup contradicts visible content, Google doesn’t “average” the signals; it ignores the schema altogether.
If you mark up “5 stars” but your Google Business Profile shows “4.2 stars,” or if you mark up reviews that aren’t visible on the page, the signal gets confused.
Note: Google strictly prohibits marking up third-party reviews, such as those from Yelp, Google Maps, or Avvo, as your own Review schema. You can only mark up first-party reviews, collected directly by your site and clearly visible to the user. For details, refer to Google’s guidelines on self-serving reviews.
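For contrast, here’s a sketch of compliant first-party review markup, where the aggregate rating mirrors what’s visibly shown on the page. All names and values are hypothetical placeholders.

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example Injury Law, PLLC",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.2",
    "reviewCount": "87"
  },
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "A. Client" },
    "reviewRating": { "@type": "Rating", "ratingValue": "5" },
    "reviewBody": "They handled my case quickly and kept me informed."
  }
}
```

The key detail: the 4.2 aggregate and the individual 5-star review both correspond to reviews the firm collected itself and displays on the page, so the markup asserts nothing the visible content doesn’t support.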
How other AI platforms use schema
Google is the most prominent platform, but AI is also integrated into assistants like Siri and Alexa, retrieval-based platforms like ChatGPT search, and more.
To pull information, they need to determine if:
Two references describe the same business.
Information is current.
A source is authoritative.
While external AI platforms do not necessarily parse schema the same way Google does, structured data contributes to clearer entity representation across the web.
Importantly, these systems tend to be less forgiving than Google when data is inconsistent: if confidence in the entity is low, the business may be excluded from results entirely.
What is the search environment for local businesses now?
To understand why schema matters more now than it did five years ago, it’s important to understand how fragmented search has become.
Local businesses no longer only surface in a single list of 10 blue links (the SERP). They appear across multiple interfaces, often simultaneously:
Traditional organic search results.
Local packs and Maps results.
Knowledge panels.
Rich results and enhanced listings.
AI Overviews.
Conversational and agent-based AI platforms.
Schema doesn’t guarantee visibility on any platform — it helps AI systems decide if your business information is reliable enough to reuse.
For example, when Google generates an AI Overview, it synthesizes information from multiple sources. Schema helps ensure Google understands exactly who you are and how your business information connects to your services, locations, and employees, so that your target audience can find you.
New SEO metrics for local businesses
Site performance is still often measured using metrics like keyword rankings, organic traffic, and conversions. These metrics aren’t wrong, but they are incomplete.
Local businesses now need to think about:
Visibility in AI Overviews and AI-generated answers.
Stability in the local pack over time.
Accuracy and persistence in knowledge panels.
Correct attribution when AI systems summarize local providers.
Reduced volatility during core and local algorithm updates.
If a local service business appears more frequently in AI-generated answers for informational and service-related queries, its brand visibility will improve, but organic clicks may stagnate or decline.
But there’s no need for panic.
In reality, what is happening is a shift in how demand is being fulfilled. In these scenarios, schema doesn’t create visibility. What it does is help ensure the business is represented accurately when it’s surfaced.
For local service-based businesses, a limited set of schema types is all you need to give your business visibility. Implementing too many types can lead to a bloated, templated markup that introduces contradictions.
Let’s look at an example law firm and how they might implement different types of schema.
Subtype schema
Subtypes help Google and AI systems categorize businesses correctly and align them with the right expectations. A personal injury firm, a corporate law practice, and a family law mediator should not all be described the same way.
Effective LegalService schema should clearly answer four questions:
Who the firm is.
What type of law they practice.
Where they operate.
How they can be contacted.
This markup aligns directly with what users see on the page, what exists in Google Business Profiles, and what appears in legal directories like Avvo or Martindale-Hubbell.
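A minimal sketch of markup answering those four questions. The firm name, URL, phone number, and address are placeholder values, and knowsAbout is one of several ways to express practice areas.

```json
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "name": "Example Injury Law, PLLC",
  "url": "https://www.example-law.com/",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 Main St",
    "addressLocality": "Dallas",
    "addressRegion": "TX",
    "postalCode": "75201",
    "addressCountry": "US"
  },
  "areaServed": "Dallas-Fort Worth",
  "knowsAbout": ["Personal injury law", "Elder abuse litigation"]
}
```

Who (name, url), what (knowsAbout), where (address, areaServed), and how to make contact (telephone) are each stated once, in properties that should match the page, the Google Business Profile, and directory listings.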
Organization schema defines the parent entity behind locations, practitioners, and services. LocalBusiness (or LegalService) defines the physical location. This distinction becomes critical as companies scale, rebrand, or operate across multiple markets.
Without a clear Organization layer, Google may treat each location as a standalone entity. That can lead to fragmented knowledge panels, inconsistent brand attribution, and inaccurate AI citations.
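One way to express that parent layer, sketched with hypothetical @id values: the Organization node acts as an anchor that each location points back to via parentOrganization.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.example-law.com/#org",
      "name": "Example Injury Law, PLLC",
      "url": "https://www.example-law.com/"
    },
    {
      "@type": "LegalService",
      "@id": "https://www.example-law.com/locations/dallas/#location",
      "name": "Example Injury Law – Dallas",
      "parentOrganization": { "@id": "https://www.example-law.com/#org" }
    }
  ]
}
```

Because every location references the same stable #org identifier, Google can treat them as branches of one entity rather than unrelated businesses that happen to share a name.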
For legal and professional service businesses, Person schema reinforces expertise and real-world credibility (E-E-A-T). Used incorrectly, it creates false authority signals that Google will ignore.
Person schema should only be applied when:
The professional has a visible bio on the site
Bar admissions and credentials are clearly displayed
Their relationship to the firm is real and current
This helps Google and AI systems associate legal expertise with the firm rather than just its content. It also reduces the risk of misattribution when AI systems summarize legal advice.
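A sketch of Person markup meeting those conditions. The attorney, URLs, and identifiers are hypothetical, and worksFor assumes an Organization node with that @id is defined elsewhere on the site.

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example-law.com/attorneys/jane-doe/#person",
  "name": "Jane Doe",
  "jobTitle": "Partner",
  "url": "https://www.example-law.com/attorneys/jane-doe/",
  "worksFor": { "@id": "https://www.example-law.com/#org" },
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "Bar admission",
    "recognizedBy": { "@type": "Organization", "name": "State Bar of Texas" }
  }
}
```

Each property here corresponds to something verifiable on the bio page: the job title, the bar admission, and the link back to the firm. If any of those disappear from the visible page, the markup should come down with them.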
For law firms, consultants, and agencies, Service schema, particularly the OfferCatalog structure, is more appropriate and accurate than Product.
Using OfferCatalog allows you to create a “menu” of services that AI systems can parse to understand the breadth of your expertise. This helps AI systems understand what the business actually offers without overreaching.
Example: OfferCatalog for legal services
{
  "@context": "https://schema.org",
  "@type": "LegalService",
  "@id": "https://www.example-law.com/locations/dallas/#location",
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "Legal Services",
    "itemListElement": [
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Personal Injury Consultation",
          "description": "Free case evaluation for auto accidents and workplace injuries."
        }
      },
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Medical Malpractice Litigation",
          "description": "Representation for victims of surgical errors and misdiagnosis."
        }
      }
    ]
  }
}
FAQPage schema
Originally, FAQPage schema helped search engines understand common questions and answers on a page. In an AI-driven search environment, well-written FAQs help define what a business does, what it doesn’t do, and what a user should expect. It helps AI systems as they look for boundaries, clarification, and intent resolution.
Example: AI-aligned FAQ schema
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do I have to pay a retainer for a personal injury case?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "No. We operate on a contingency fee basis, meaning you only pay legal fees if we win a settlement or verdict for you."
      }
    }
  ]
}
In AI Overviews, these answers may be paraphrased or summarized, but schema helps ensure the underlying meaning remains intact.
Schema maintenance: Why ‘set it and forget it’ fails
Schema is often implemented during a site launch or redesign, only to be ignored afterward.
But businesses change constantly. Hours shift, locations open or close, staff turnover occurs, and services evolve. When schema isn’t updated to reflect these changes, inconsistencies are introduced that can erode information signals over time.
A sustainable schema strategy involves two steps:
Quarterly audit: Set a recurring calendar reminder to audit your schema code against your live site. Check for syntax errors, broken @id references, and deprecated properties.
Trigger-based updates: Establish a rule that whenever a “fact” changes in your business (e.g., you update your holiday hours on your Google Business Profile, or a partner leaves the firm), the schema should be updated immediately.
Structured data now acts as a trust signal, helping search engines and AI systems determine whether business information is accurate, consistent, and reliable enough to reuse at scale.
Schema that reinforces your correct information supports visibility across traditional search, local results, and AI-driven experiences. Inaccurate or outdated schema can hurt your company’s visibility.
Can you answer the question every marketing leader dreads hearing from leadership: “Why isn’t our marketing effort doing more?”
How do you even go about answering that?
Let’s look at what I mean using a fictional location analytics company we’ll call Acme Area Analytics.
The Acme team reviews its reports. Nothing appears broken. Campaigns are running, leads are still coming in, and performance metrics are mostly stable. Yet sales momentum isn’t clearly accelerating, and it’s hard to pinpoint why.
Insights are scattered across site analytics, brand monitoring and SEO tools, CRM systems, and paid media dashboards. Each platform reflects part of the story, but none shows the full picture.
That fragmentation is exactly how well-intentioned “data-driven decisions” can go wrong. Let’s look at how that happens and how Acme, and you, can fix it.
When the data points in the wrong direction
In global, multi-channel campaigns like Acme Area Analytics’, the hardest moments are when nothing is obviously underperforming. Digital channels are running. Leads are coming in, and metrics are mostly stable, yet sales momentum is stalled and it’s unclear which lever to pull next.
At the same time, subtle signals raise concerns. Non-brand CPCs are creeping upward, and a competitor — Spotter Intelligence — is suddenly appearing more frequently in branded search.
Let’s say you’re part of the Acme marketing team. You go back to your reports and ask the question most marketers ask in this situation: Which tactic is underperforming?
When diving into the platform data, you uncover what looks like a clear answer: remarketing performance for your API has softened, conversion rates have dipped slightly, and efficiency has begun to decline.
On the surface, you have your answer. Spend should be pulled back to match demand because audiences have likely seen the creative too many times.
That decision could certainly make sense, and it’s what many teams actually end up doing. But it’s also often wrong. Why? Because you haven’t yet asked the right question.
The more useful question is harder to answer: “Is demand actually declining, or are we failing to create new interest upstream?”
The real issue became clear when you looked beyond a single channel: the location analytics market still had strong growth potential, but your product was encountering a shortage of engaged audiences receptive to its message.
Site engagement trends in analytics and brand search behavior in Search Console suggested interest in your type of location AI wasn’t disappearing. It just wasn’t converting yet.
The focus had shifted from reach to engaged awareness, with a priority on attention and engagement, not just exposure. So your Acme team decided to introduce additional campaign layers, including new content designed to build relevance and trust.
Crucially, you didn’t see any improvement right away. Cost-per-lead efficiency continued to decline, and it looked worse after increased upper-funnel investment. From a platform-only view, this looked like the time to pull back.
But looking across systems changed how performance was interpreted. Engagement from awareness activity began feeding remarketing pools, but the impact wouldn’t surface immediately for a product with long sales cycles like your API.
During that gap, the Acme team maintained confidence in its strategy by sharing early signs of upstream momentum. Only later did results begin to show up. Remarketing efficiency improved and higher sales volumes of the API were confirmed from integrated CRM data.
The takeaway for the Acme Area Analytics marketing team wasn’t just that “remarketing worked again,” or that upper funnel activity drives demand. It’s that the hardest marketing decisions are the ones you have to make — and hold — before success shows up in the metrics leadership typically trusts.
In our Acme example, each dashboard told a technically accurate story, but no single dashboard could fully articulate the whole picture.
Paid media dashboards reflected efficiency trends.
Analytics and Search Console showed shifts in engagement and demand.
CRM data lagged behind decisions by weeks or months.
Looking at any of those in a silo wouldn’t have allowed Acme’s marketing team to fully understand what was happening.
But we know that the insight didn’t live in any single view. When the question the team asked itself shifted to whether demand was moving effectively through the funnel, and dashboards were evaluated together in context, the decision changed.
This is what unsiloed analytics looks like in practice. It’s not about teams fighting over which touch led to the result, but recognizing that each part of a marketing plan plays a distinct and important role in creating momentum that grows demand and lifts sales.
Leadership wants proof. Pipeline and revenue might feel like the safest validation. But in complex, multi-channel programs, those are often lagging indicators of solid performance.
By the time pipeline clearly reflects demand creation, teams have often already pulled back awareness investment, cut channels that looked inefficient in isolation, and shifted budget toward short-term demand capture.
In the example above, waiting for proof would have meant that Acme reduced awareness and remarketing spend and possibly exited a market that would later show great promise.
Integrated data didn’t eliminate the risk of shifting investment from lead generation to awareness-building in a market that had declining metrics. Instead, it added credibility to the case for doing so.
This dynamic isn’t limited to complex, multi-channel programs. You can see it even within a single platform when multiple tactics work together.
Let’s look at a scenario where Acme’s brand search impression volume increased by roughly 50% year over year while Share of Voice remained flat. That means more people have been searching for Acme as the company has invested across out-of-home and other digital campaigns. Acme’s Google campaign then harvested the demand created by other channels.
If Acme’s brand search had been evaluated only in terms of its media plan efficiency, this signal of growing demand would have been easy to miss. In context, it confirmed that Acme’s awareness efforts were working, even though attribution couldn’t perfectly assign credit to individual channels.
What changes when data is integrated
In these examples, integrated data — unsiloed data — shifted the conversation.
Instead of Acme’s marketing teams debating budget cuts, they could monitor signs of early momentum, including longer time on site and rising brand search volume. Over time, that interest could be seen in the CRM as higher-quality leads that converted more frequently into closed deals.
The good news is that this doesn’t require new tools or perfectly stitched together data. It simply requires stepping back during planning and asking better questions about how potential customers signal interest as they consider your product.
In my experience, the most valuable marketing insights come from understanding how different data points relate.
Unsiloing your data isn’t about proving causality or winning attribution debates. Instead, it’s about recognizing opportunity early enough to act on it and identifying which metrics suggest that demand is quietly being built in the background.
The teams that win aren’t only better at reporting results. They’re better at seeing momentum while it’s still forming and acting on it early.
If I hear “always be testing” one more time, I might scream. It was great advice in 2016. In 2026, it’s a great way to light your budget on fire.
That mantra made sense when budgets were loose and platforms forgave a lot of chaos. Launch five audience tests simultaneously? Sure, why not! Swap out three creative variables at once? Go for it!
But the rules have changed. Our new reality has tighter budgets, longer learning phases, and signal fragmentation everywhere. One poorly structured test can distort your performance for weeks, not days. That performance hit compounds fast.
Modern experimentation is expensive and risky. Why pay that price when we have the power of agentic AI to help? And by help, I don’t mean slapping AI onto our existing process and asking it to generate more ad variants. That would just be an expedient way to light our budgets on fire.
Instead, it’s time to use agentic AI to design smarter experimentation systems.
The real cost of unstructured testing
In an “always be testing” era, it was all too easy to hand out tests the way Oprah gives out cars or Taylor Swift fills stadiums. The result was unstructured testing: launch ideas on a Monday, check results on Friday, and hope for a lift. There was nary a risk model, overlap detection, or strategic sequencing in sight.
The costs of that approach are now exponentially higher. Take platform disruption. Algorithms crave stability. Industry benchmarks show ad sets stuck in learning phases often see CPAs 20-40% higher than stable sets.
Every time you significantly change creative, audience, or budget, you risk resetting that learning. If you’re running three overlapping tests that each trigger resets, you’re voluntarily paying a volatility tax on your entire media spend.
Then there’s waste. The majority of A/B tests deliver no statistically significant lift. If you aren’t ruthless about what deserves to run, you’re burning budget to prove most ideas don’t matter. “Always be testing” without guardrails turns into “always be destabilizing.”
From random tests to a real experimentation engine
The shift looks like this. Old approach: “AI, write me 10 new headlines.” New approach: “AI, design the smartest next experiment within our budget, risk tolerance, and current learning state.”
The reframe from creative generation to experimentation architecture is where real leverage lives.
Here’s a practical seven-step framework to turn testing from a tactical habit into strategic infrastructure.
Step 1: Set hard guardrails (humans draw the lines)
Before you let any AI near your experiments, lock in constraints. Without them, AI lacks proper context. With them, AI becomes a disciplined strategic partner.
Define and document five hard boundaries.
Budget allocation: Reserve a fixed percentage (e.g., 10%) explicitly for testing.
Maximum volatility: “No test can increase CPA by more than 15% for more than 5 days.”
Learning phase sensitivity: Document reset thresholds per platform.
Leading indicators: Use early signals (CTR, engagement drop-offs) to kill bad tests before they damage pipeline.
Brand risk: Define off-limits positioning (e.g., no discount-heavy testing in enterprise segments).
Document this in a single file (e.g., experimentation-guardrails.md) to teach AI the constraints that make ideas viable. Your AI agent must reference this before proposing any test.
Step 2: Let AI audit your experiment history
Most teams have the data sitting in spreadsheets, but never extract the lessons. Feed your last six months of test results into an AI agent and have it analyze variables changed, duration, performance delta, statistical confidence, and platform resets.
Ask it to find patterns, such as:
Over-tested variables: CTA buttons tested eight times with zero meaningful lift? That’s not a lever.
False failures: Many tests are declared losers simply because they never reached statistical significance. An AI agent can quickly assess statistical power and flag inconclusive results.
Volatility patterns: Often, your worst CPA weeks weren’t market shifts or a single bad creative, but rather the weeks where you launched three overlapping tests.
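The "false failures" check in particular is easy to automate. A minimal sketch using a two-proportion z-test (normal approximation) to separate genuine losers from underpowered, inconclusive tests; the 0.05 cutoff and field names are illustrative:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def classify(test):
    """Label a past test: winner, loser, or inconclusive (not a real failure)."""
    z, p = two_proportion_ztest(test["conv_a"], test["n_a"],
                                test["conv_b"], test["n_b"])
    if p < 0.05:
        return "winner" if z > 0 else "loser"
    # Underpowered tests rarely reach significance: flag, don't declare a loss.
    return "inconclusive"
```

Run over six months of results, this alone separates "ideas that failed" from "tests that never had enough traffic to tell."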
This is how AI becomes a true analytical partner.
Step 3: Write real hypotheses
Rather than jumping straight from idea to launch, use AI to help you enforce hypothesis discipline.
Weak: “Let’s test a new headline.”
Strong: “If we emphasize ‘faster time-to-value’ over ‘ease of use,’ we expect a 10-15% lift in demo requests from mid-market companies because win/loss analysis shows speed is their top decision criterion.”
Structured hypotheses create institutional memory. Six months later, when someone suggests testing “speed messaging” again, you’ll know exactly who it worked for and why. Yes, it feels like paperwork, but this discipline can protect your budget from algorithm chaos.
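To make that institutional memory machine-readable, each hypothesis can be captured as a small structured record. The fields below are one possible shape, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str         # what you will vary
    metric: str         # what you expect to move
    expected_lift: str  # how much
    audience: str       # who it applies to
    rationale: str      # the evidence behind the bet

    def statement(self):
        """Render the hypothesis in the If/expect/because template."""
        return (f"If we {self.change}, we expect a {self.expected_lift} lift in "
                f"{self.metric} from {self.audience} because {self.rationale}.")
```

Stored this way, "did speed messaging ever work, and for whom?" becomes a query instead of an archaeology project.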
Step 4: Risk-score every proposed test
Budget isn’t infinite and neither is algorithm stability. Your AI agent should evaluate each proposed test across five dimensions and assign a risk score.
Budget impact (e.g., <5% vs >15%).
Algorithm disruption level (minor refresh vs new campaign).
Audience overlap.
Brand sensitivity.
Learning value.
High risk + low learning = Kill it. Low risk + high insight = Green light.
Example: Testing a radical new enterprise positioning statement is high risk in a paid conversion campaign. Instead, your AI agent might suggest validating it first via organic LinkedIn content or low-budget audience polling. Low risk. High signal.
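The kill/green-light rule can be sketched as a simple scoring function. The 1-5 scales, the equal weighting, and the cutoffs are all assumptions to adapt to your own risk tolerance:

```python
def risk_score(test):
    """Score a proposed test (each dimension rated 1 = low to 5 = high)
    and apply the kill / green-light / review rule."""
    risk = (test["budget_impact"] + test["algo_disruption"]
            + test["audience_overlap"] + test["brand_sensitivity"]) / 4
    learning = test["learning_value"]
    if risk >= 4 and learning <= 2:
        return risk, learning, "kill"         # high risk + low learning
    if risk <= 2 and learning >= 4:
        return risk, learning, "green-light"  # low risk + high insight
    return risk, learning, "review"           # everything else needs judgment
```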
Step 5: Pre-test with synthetic audiences
Synthetic testing is one of the most underused applications of AI in experimentation. It means simulating how different personas may react to messaging before spending media dollars, and the data backs it up.
A study involving researchers from Stanford and Google DeepMind found that digital agents trained on interview data matched human survey responses with 85% accuracy and mimicked social behavior with 98% correlation.
This makes synthetic audiences surprisingly useful for early-stage signal gathering. While they don’t replace real-world data (at least not yet), they can act as creative QA.
Here’s how it works. Define psychographic archetypes.
The Skeptical CMO (burned by vendors, risk-sensitive).
The Growth VP (speed-obsessed).
The CFO (margin-focused).
Feed your proposed messaging into your AI system and ask, “How would the Skeptical CMO react to this?”
You might get feedback like: “The phrase ‘All-in-One’ triggers skepticism. It signals feature bloat. Consider reframing as ‘Integrated’ or ‘Modular.’”
That kind of signal costs pennies in API calls instead of thousands in paid testing.
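The mechanics are simple: persona definitions plus a prompt builder, with the resulting prompt handed to whichever LLM API you already use. The personas and wording below are illustrative, not a prescribed setup:

```python
# Hypothetical psychographic archetypes; replace with your own ICP research.
PERSONAS = {
    "skeptical_cmo": "a CMO who has been burned by vendors and is highly risk-sensitive",
    "growth_vp": "a VP of Growth obsessed with speed to results",
    "cfo": "a CFO focused on margins and payback periods",
}

def persona_prompt(persona_key, message):
    """Build the prompt you would send to an LLM to get in-character feedback."""
    traits = PERSONAS[persona_key]
    return (
        f"You are {traits}. React to this ad message in two sentences, "
        f"and flag any phrase that triggers doubt or skepticism:\n\n{message}"
    )
```

Sending a dozen of these per candidate message costs pennies; the output is directional signal, not ground truth, so treat it as creative QA rather than a verdict.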
Step 6: Sequence tests, don’t stack them
Changing audience, creative, and landing page in the same week teaches you almost nothing. Your AI agent should act like air traffic control: scan active campaigns, flag conflicts, and recommend sequencing.
A better flow:
Week 1-2: Audience test.
Week 3-4: Creative test on the winning audience.
If overlap is unavoidable, enforce clean holdout groups so you always have a source of truth.
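The air-traffic-control role can start as something very simple: a pass over the test calendar that flags tests running concurrently on the same campaign. Field names are illustrative:

```python
from datetime import date

def overlaps(a, b):
    """True if two date ranges intersect."""
    return a["start"] <= b["end"] and b["start"] <= a["end"]

def schedule_conflicts(tests):
    """Flag every pair of tests that run concurrently on the same campaign."""
    conflicts = []
    for i, a in enumerate(tests):
        for b in tests[i + 1:]:
            if a["campaign"] == b["campaign"] and overlaps(a, b):
                conflicts.append((a["name"], b["name"]))
    return conflicts
```

Anything this check flags either gets resequenced or gets an explicit holdout group.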
Step 7: Build a living knowledge base
Treat tests like disposable experiments and you lose the compounding value. Have your AI auto-summarize every completed test:
Why did it win?
Who did it win with?
How durable was the lift?
What variables interacted?
Over time, this database becomes your moat. Everyone can buy the same targeting. Few teams have 100+ validated customer truths at their fingertips.
“Always be testing” was a growth-era mindset. In 2026, the winning mindset is “always be compounding intelligence.”
Rather than more tests, build your competitive advantage through structured, risk-aware, insight-driven experimentation that protects algorithm stability and ties experimentation directly to revenue.
The next time your stakeholder asks why you aren’t testing more, show them your experimentation architecture and say, “We’re not just running experiments. We’re building an intelligence engine.”
Video advertising has never been easier to distribute. Platforms can deliver impressions and views at an enormous scale across YouTube, paid social, short-form video, and connected TV.
But distribution isn’t the same as effectiveness. Many campaigns generate impressive platform metrics while producing little measurable business impact.
The problem usually isn’t targeting, budget, or platform choice. It’s a deeper strategic issue: campaigns are optimized for outputs like views and impressions rather than outcomes like attention, persuasion, and action.
Most video ads fail because they misunderstand attention
The bigger issue is that many video ads are still produced as if they’re television commercials.
In the early days of online video, distribution was the challenge. Getting a video seen at all felt like a win. Today, distribution is abundant. Attention isn’t.
Every major platform — YouTube, paid social, short-form video, connected TV — competes for fragments of cognitive bandwidth. Users arrive with intent, habits, and expectations that have nothing to do with your campaign. We plan for reach, while viewers respond to relevance.
I’ve sat in many meetings where success was defined by impressions delivered or views accrued. But when you look downstream — search lift, site engagement, conversion — the connection often disappears.
Platforms will reliably deliver impressions. Turning those impressions into memory, persuasion, or action requires a fundamentally different mindset.
Skippable formats changed video advertising permanently, but many advertisers still haven’t adjusted creatively.
Early in my career, I believed strongly in branding up front. Logos, product shots, music cues — everything that signaled professionalism. Those ads looked great in presentations. They underperformed in market.
A clear pattern emerged over time. Ads that opened with a recognizable problem, a provocative statement, or an unexpected visual held attention longer — even when branding appeared later. Ads that opened with branding signals were skipped almost reflexively.
View-through rate isn’t persuasion. A “view” simply means the platform’s minimum threshold was met. It doesn’t mean the message landed, the brand registered, or the viewer cared.
In multiple brand lift analyses, most measurable impact occurred before the skip button appeared. If the opening didn’t earn attention, the rest of the ad didn’t matter.
What works: treat the opening frame like a headline, not a preamble. Lead with tension, a question, or a familiar problem. Design for sound-off environments. If the first frame wouldn’t stop a scroll, nothing that follows will matter.
Higher production value often correlates with lower performance
One of the most counterintuitive lessons in modern video advertising: polished ads frequently underperform scrappier ones.
I’ve seen simple, phone-shot videos outperform meticulously produced studio spots across YouTube, paid social, and short-form platforms. Not because quality doesn’t matter — but because perceived authenticity matters more.
Audiences are exceptionally good at identifying advertising. When something looks like an ad, they disengage. When it looks like content, they give it a chance.
Algorithms reinforce this: they reward watch time, retention, rewatches, and shares. They do not reward lighting setups or production budgets.
I’ve seen brands “upgrade” social video to look more premium, only to watch performance decline. The creative looked better. The results were worse.
The goal isn’t to look amateurish. It’s to look like you belong.
Match the platform’s visual grammar. Prioritize clarity over polish. Use real people and authentic voices whenever possible.
Ads that feel native get watched. Ads that feel inserted get skipped.
Length is a creative decision, not a media constraint
“Shorter is better” is one of the most persistent — and misleading — rules in video advertising.
Six-second ads can work. So can 60-second ads. I’ve seen both exceed expectations, and I’ve seen both fail badly. The difference was never duration — it was justification.
Some messages can be delivered instantly. Others require context, proof, or emotional buildup. Forcing every idea into the same runtime produces predictable results: safe, bland, forgettable ads.
I’ve reviewed retention graphs where a 45-second ad held viewers longer than a 15-second version, because the story justified its length. I’ve also seen six-second ads lose half their audience in the first two seconds because they wasted the opening.
Test multiple edits, not just multiple lengths. Watch retention curves, not averages. Build modular narratives: hook, then value, then proof, then action.
The “right” length is however long it takes to make the viewer feel their time was respected.
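Reading "retention curves, not averages" concretely: from raw per-viewer watch durations you can compute the share of the audience still present at each second. A minimal sketch:

```python
def retention_curve(watch_seconds, ad_length):
    """Fraction of viewers still watching at each second mark (0..ad_length)."""
    n = len(watch_seconds)
    return [sum(1 for w in watch_seconds if w >= t) / n
            for t in range(ad_length + 1)]

# A steep drop between t=0 and t=2 is the "wasted opening" pattern described
# above, and an average watch time can hide it completely.
```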
Metrics are signals, not outcomes
Platforms provide more data than ever. The problem isn’t a lack of metrics. It’s confusing metrics with outcomes.
I’ve seen campaigns praised for high completion rates that produced no measurable business impact. Strong engagement coexisting with low conversion. Impressive view counts that delivered zero lift.
This happens because platforms optimize for their success metrics, not yours. If your goal is to maximize views, the platform can do that easily. If your goal is to influence consideration, preference, or action, things get more complicated.
One uncomfortable question I’ve learned to ask early: what would failure look like here? If the answer is vague, the campaign is already at risk.
Define success in business terms before launch. Tie video metrics to downstream behavior wherever possible. Use lift studies, holdouts, or assisted conversions when they’re available. If you’re running a brand-building campaign, measure brand lift. If you’re running a performance campaign, measure conversions.
Creative is often blamed when video ads underperform. In reality, creative usually does exactly what it was asked to do. The problem is the brief.
Vague objectives produce generic ads. “Brand awareness” without context leads to unfocused messaging. “Make it engaging” isn’t a strategy.
Strong video ads almost always begin with clear answers to three questions:
Who is this really for?
What do they care about right now?
What should they think, feel, or do differently after watching?
When those answers are clear, creative decisions become easier. When they aren’t, the work is compromised before production begins.
The deeper diagnostic questions are worth keeping close:
Are viewers actually paying attention, or just passively present?
What are they feeling — and which specific creative choices are driving that response?
Will they remember the brand once the ad ends?
What will they do next — share it, recommend it, search for the product, or buy?
I’ve seen entire campaigns improve simply because the brief forced alignment around audience insight rather than assumptions.
Distribution strategy is part of the creative
Another common mistake is treating creative and distribution as separate decisions. They aren’t.
The way an ad is consumed — fullscreen versus feed, sound-on versus sound-off, lean-back versus lean-forward — should shape how it’s made.
A video designed for connected TV shouldn’t simply be resized for mobile. A short-form ad shouldn’t be a truncated long-form story without rethinking the hook entirely.
I’ve seen strong ideas underperform because the creative didn’t match the placement. The concept wasn’t wrong. The context was.
Design with placement in mind from the start. Create platform-specific versions, not one-size-fits-all assets.
Accept that “reuse” often means “rethink,” not “repurpose.” Distribution constraints aren’t limitations — they’re creative inputs.
Testing should answer questions, not just generate variants
Testing is indispensable. It’s also frequently misunderstood.
Running endless A/B tests without a hypothesis rarely produces insight. It produces noise.
The most effective testing focuses on variables that materially affect attention and comprehension: opening frames, narrative structure, on-screen text versus voiceover, proof points versus emotional appeals.
It’s also important to recognize what testing can’t do. Algorithms are excellent at optimizing toward measurable signals. They don’t understand brand equity, long-term memory, or cumulative effect. Testing should inform judgment — not replace it.
Ultimately, the only thing that matters for creative effectiveness tools is whether their predictions actually correlate with real media and sales outcomes — reliably enough to inform strategy and media decisions.
The question worth asking of any such tool is simple: How often does what it predicts will happen actually happen?
For example, I frequently cite data from DAIVID, an AI-driven creative effectiveness platform. Why? Because in independent testing, DAIVID’s predictions aligned with real-world outcomes more than 80% of the time — a meaningful foundation for making creative decisions with greater confidence before a campaign goes live.
Platforms will change. Formats will evolve. Algorithms will shift in opaque and sometimes frustrating ways. But attention, curiosity, and trust remain stubbornly human.
The best video ads I’ve worked on weren’t optimized for view counts or completion rates. They were optimized for relevance. They respected the viewer’s time. They said something worth hearing.
Video ads don’t succeed because they follow platform rules. They succeed because they understand people. And that principle outlasts every algorithm update.
Google AI Max drives revenue but at a higher cost, according to Smarter Ecommerce’s Mike Ryan, who analyzed 250+ campaigns. Outcomes vary, and much more testing is still needed.
Why we care. AI Max isn’t a minor update. It’s Google’s most significant reimagining of Search campaigns in years, shifting away from keyword syntax toward pure intent matching. For you, that’s both an opportunity (possible growth) and a risk (an efficiency tradeoff).
By the numbers. The result of the analysis:
Median revenue: +13%
Median CPA: +16%
ROAS range: +42% to -35%
Advertisers who activate AI Max typically see 14% more conversions or conversion value at a similar CPA or ROAS, rising to 27% for campaigns still relying on exact and phrase match keywords, Google says.
Turning on AI Max is essentially a coin toss: you may see a lift, but efficiency likely won’t follow, Ryan concluded.
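To see why a revenue lift without efficiency can still mean worse ROAS, here is illustrative arithmetic applying the median figures above, under the simplifying assumption that revenue per conversion stays constant. All dollar amounts are hypothetical:

```python
def aimax_scenario(base_spend, base_cpa, rev_per_conv,
                   rev_lift=0.13, cpa_lift=0.16):
    """Apply the median +13% revenue / +16% CPA figures to a hypothetical
    account, assuming constant revenue per conversion."""
    conversions = base_spend / base_cpa
    base_rev = conversions * rev_per_conv
    new_rev = base_rev * (1 + rev_lift)               # +13% revenue
    new_conversions = new_rev / rev_per_conv          # conversions scale with revenue
    new_spend = new_conversions * base_cpa * (1 + cpa_lift)  # +16% CPA
    return base_rev / base_spend, new_rev / new_spend

base_roas, new_roas = aimax_scenario(10_000, 50, 200)
# Under these assumptions ROAS falls by a factor of 1/1.16, i.e. roughly 14%,
# even though revenue is up 13%.
```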
What AI Max actually is. Rather than forcing Search campaigns into Performance Max, Google went the other direction — bringing PMax-style automation into classic Search. The result is three core features:
Search Term Matching (broad match expansion plus keywordless targeting),
Text Customization (dynamic ad copy), and
Final URL Expansion (automated landing page selection).
Four pitfalls Smarter Ecommerce identified:
Broad match cannibalization: Up to 63% of the time, AI Max recycled existing coverage rather than finding new queries.
Competitor hijacking: In one account, AI Max scaled so aggressively into competitor brand terms that it consumed 69% of total Search impressions.
Reporting overload: Search term and ad combination reports can run to tens of thousands of rows, making manual auditing nearly impossible without automation.
Search Partner Network blowouts: One campaign saw half a million monthly impressions land on SPN at a 0.07% conversion rate, versus 3.04% on standard Google Search.
Between the lines. Google’s 14% uplift stat conspicuously excludes retail — an omission Ryan flags as significant for ecommerce advertisers. There’s also a deeper irony: you’re most likely to adopt AI Max if you’re already running Broad Match, DSA, and PMax — yet Google says those accounts will see the lowest incremental benefit.
What’s next. In a conversation with Ryan, Google Ads Liaison Ginny Marvin confirmed that Google plans to deprecate Dynamic Search Ads and migrate the technology into AI Max for Search. No firm timeline was given, though past Google deprecations often run about a year from announcement.
Ryan recommends activating AI Max’s keywordless features in your existing Search campaigns now and beginning to wind down DSA — not migrating it to PMax.
Ryan’s verdict is cautious optimism. About 16% of advertisers are testing AI Max, and few have gone all in. Start small, audit aggressively, and don’t let FOMO around AI Overviews drive your decision.