The new ChatGPT ad format is standardizing, according to a new Adthena analysis of 40,000+ daily placements. What once felt experimental is becoming a disciplined, high-intent system for users already deep in decision mode.
The big picture: ChatGPT ads are converging on a short, structured, highly contextual style that favors precision over persuasion and utility over storytelling, marking a shift from creative-led advertising to real-time, intent-driven assistance.
By the numbers. Every word must carry weight and contribute directly to clarity or conversion:
The average headline clocks in at just 30 characters and around 5 words.
Body copy averages 116 characters and roughly 19 words.
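If you want to hold your own copy to those averages, the check is trivial to script. A minimal sketch in Python, treating the figures above as editorial guardrails rather than platform limits; the sample headline and body are hypothetical:

```python
# Sanity-check ad copy against the averages from the Adthena analysis.
# These are observed averages, not hard platform limits.

def check_chatgpt_ad_copy(headline: str, body: str) -> list[str]:
    warnings = []
    if len(headline) > 30 or len(headline.split()) > 5:
        warnings.append("Headline exceeds the ~30-character / ~5-word average.")
    if len(body) > 116 or len(body.split()) > 19:
        warnings.append("Body copy exceeds the ~116-character / ~19-word average.")
    return warnings

# Hypothetical "Brand: Benefit" ad, in the dominant pattern described below.
print(check_chatgpt_ad_copy(
    "Acme: Cut cloud costs 30%",
    "Teams save $40k a year on average. Start your free trial today.",
))
```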
What’s working. The dominant pattern is a “Brand: Benefit” headline, separating the name from a specific value. It works because users in conversational environments expect immediate clarity, not intrigue or ambiguity.
Almost every ad leads with the brand name. That builds easy recall in a setting where users are already evaluating options, not discovering them.
Headlines are compressed, often reading like functional labels rather than slogans. The brevity carries into the body copy, which typically uses two tight sentences: a proof point followed by an offer or nudge. You’re not trying to win an argument; you’re giving one compelling reason to act.
Context mirroring is a defining feature. The strongest ads directly reflect the user’s query or situation, signaling real-time tailoring. This marks a new level of AI-native targeting that goes beyond keyword matching into conversational relevance.
Concrete value signals carry outsized weight. Dollar signs and specific numbers — prices, savings, performance — consistently outperform vague claims. Numbers dominate body copy because they feel credible and native in a setting where you’re actively researching and comparing options.
Offers. Low-friction offers — especially “free” trials or demos — are the most common conversion lever, reducing commitment barriers while users are exploring.
Calls to action. These are explicit and action-oriented, favoring direct phrases like “Shop now,” “Compare,” or “Book” while abandoning generic prompts like “Learn more.”
The overall tone. Calm, confident, and measured, with minimal exclamation points or question marks. It aligns more with helpful guidance than ad hype, helping ads blend into the conversational flow rather than disrupt it.
Why we care. ChatGPT ads reach users at high intent, where clarity and relevance matter more than creativity or storytelling. In a conversational environment, ads compete with useful answers, so vague or overly branded messages get ignored while precise, value-driven copy performs better. This shift rewards short, structured messaging and gives early adopters an advantage as the format standardizes.
Between the lines. While ChatGPT ads share DNA with paid search — especially in their focus on intent and relevance — they differ by integrating into dialogue, responding to high-intent users, and delivering messaging that feels assistive rather than interruptive.
The takeaway. Success in ChatGPT advertising depends on precision, relevance, and credibility over creativity, emotional appeal, or brand-led storytelling. The winning strategy: fit in perfectly when a user needs a clear, trustworthy answer.
The analysis. Adthena CMO Alex Fletcher shared the data on LinkedIn.
There’s a flood coming. A downpour of noise — more content, more channels, more AI-generated everything, moving faster than most teams can keep up with. Somewhere in that volume, your customers are quietly drowning — overwhelmed, underserved, and one bad experience away from choosing someone else.
You’ve probably felt it on your team, too. Another tool. Another sprint. Another quarter of doing more with less. The productivity metrics look fine from the outside. But inside, people are running on empty.
There’s an old story about a man named Noah who, facing catastrophic disruption, didn’t freeze or panic. He didn’t look for shortcuts or try to outswim the storm. He built — with intention, with a clear design, and with people he trusted. When the waters rose, the ark held.
The brands that lead don’t adopt the most technology the fastest. They build with intention — designing systems and experiences that protect people.
What follows is the case for building your ark — and a practical framework to do it.
AI power users report that it makes their overwhelming workload more manageable (92%), boosts creativity (92%), and helps them focus on their most important work (93%), per Microsoft and LinkedIn’s Work Trend Index.
Yet, 60% of leaders say their company lacks a concrete AI vision or plan — meaning the very tool that could relieve team burnout is sitting underutilized.
That gap shows up in real ways.
For customers, it creates friction — too many choices, unclear navigation, and messaging that misses where they are. They arrive with a question and leave with more confusion. They don’t feel seen or helped.
For marketing teams, the impact is quieter but just as serious:
Decision fatigue disguised as strategy.
Tool overload framed as innovation.
Burnout that looks like productivity — until it doesn’t.
Fragmented workflows that drain energy faster than they produce results.
Brands that recognize these human issues move faster, retain stronger talent, build deeper customer loyalty, and drive better business outcomes. Enter what I call the wellness sweet spot.
The wellness sweet spot is the moment where AI, empathy, and human-first design converge — creating conditions where both your customers and your team can think clearly, act confidently, and trust the experience they’re in.
It’s an architectural decision about how your entire marketing ecosystem is designed to make people feel. When its three pillars are genuinely working together, four things become true simultaneously:
AI reduces waste and cognitive load in the experience — making things simpler.
Emotional friction is intentionally minimized at every touchpoint.
Marketing teams operate from a foundation of wellness (and well-being).
Systems and workflows support human thriving, not just throughput.
When these conditions are in place, something shifts. AI stops feeling like a disruption and starts working as a stabilizing layer — supporting, protecting, and quietly holding the system together. It manages the overwhelm. The ark keeps floating.
Most marketing leaders still think about AI in terms of what it does — automate, generate, optimize, analyze. Those outcomes matter, but they don’t tell the full story. The more consequential question is how AI makes people feel while it’s doing those things.
For customers, AI used well is a guide that:
Summarizes complexity without dumbing it down.
Narrows choices in ways that feel helpful rather than manipulative.
Anticipates what someone needs next and removes ambiguity from decision paths.
Saves time — which is, in a very real sense, saving emotional energy.
For teams, thoughtfully deployed AI absorbs the work that depletes people most: the repetitive, the reactive, and the administrative. It creates space for what human brains do best: strategy, creativity, relationship-building, and nuanced judgment.
When you build your marketing systems around it, the output quality goes up because the people producing it aren’t running on fumes.
This is empathy at scale. Not the kind that lives in a tagline, but the kind that’s baked into how your systems are structured and how your content is designed to reach people.
The new emotional metrics: What to measure when you start caring about feelings
This is where things get practical, and where you can move ahead of the curve. Most marketing dashboards show what happened — click-through rates, conversion rates, and time on page. Those metrics matter, but they don’t explain why someone left or how they felt along the way.
Emotional metrics help fill that gap by focusing on the conditions under which decisions are made. Research in psychology and neuroscience shows that people make better decisions, build stronger brand relationships, and become more loyal when they feel clear, confident, and calm.
Here’s how traditional metrics map to emotional KPIs:
Traditional metric → Emotional KPI: what it measures, reimagined.
Time on page → Clarity index: how quickly someone finds what they need, without confusion.
Conversion rate → Decision effort score: the cognitive load required to complete an action.
Engagement rate → Customer calm markers: behavioral signals of confidence rather than stress (qualified attention).
Team output volume → Wellness throughput: strategic output produced with reduced burnout.
These are upstream indicators that help explain downstream performance. A low clarity index often shows up as stalled conversion rates. A high decision effort score can lead to rising cart abandonment. Declining wellness throughput tends to result in average output from top strategists.
Brands that start tracking these now gain an advantage over those that wait to react.
5 steps to design toward your wellness sweet spot
A caution before the roadmap: more speed and scale applied to a broken system will not fix it. It will amplify everything that’s wrong with it. These five steps are meant to be done before you push harder on AI adoption.
Step 1: Run an empathy audit
Where are customers confused? Hesitating? Leaving? Map these moments using behavioral data combined with qualitative insight — customer interviews, session recordings, support tickets, search data. Focus less on what people clicked and more on where they felt lost.
Step 2: Simplify for cognitive ease
Fewer choices. Plain language. Cleaner navigation. Every step you remove from a decision path is a small act of respect for your customer’s mental energy. This is generous. It’s designing with intelligence.
Step 3: Use AI as a shepherd
Deploy AI to enhance orientation, clarity, and confidence. Don’t push aggressive automation or manufacture a sense of urgency. AI should make customers feel helped, not herded. There’s a difference, and your audience feels it.
Step 4: Rebuild team workflows around energy
Audit where your team’s cognitive energy actually goes each week. Identify the work that is routine, reactive, or repetitive — and build AI into those gaps first. Protect the hours that require human judgment, creativity, and relationship-building. Those are the hours that drive real growth.
Step 5: Measure the feels
Begin tracking emotional outcomes alongside performance metrics. Start simple: add a one-question post-interaction survey.
Review search data for confusion signals. For example, growing volume for “how do I” or “why can’t I” phrases on your own site may indicate your content isn’t answering questions before they’re asked.
Monitor support ticket themes for friction patterns. A perfect measurement system isn’t required to start. The intention to look is.
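One way to operationalize the search-data review from Step 5 is a short script over a site-search export. A minimal sketch, assuming a CSV with query and count columns (both hypothetical names; match them to your analytics export):

```python
import csv
from collections import Counter

CONFUSION_PREFIXES = (
    "how do i", "why can't i",       # from the examples above
    "where is", "what happened to",  # illustrative additions
)

def confusion_signals(path: str) -> Counter:
    """Tally site-search queries that read like unanswered questions."""
    signals = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumes 'query' and 'count' columns
            query = row["query"].strip().lower()
            if query.startswith(CONFUSION_PREFIXES):
                signals[query] += int(row.get("count") or 1)
    return signals

for query, count in confusion_signals("site_search_export.csv").most_common(10):
    print(f"{count:>6}  {query}")
```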
The future belongs to emotionally intelligent brands
In a market where nearly every brand claims to be customer-centric and frictionless, the real differentiator comes down to how people feel and whether systems consistently deliver on that promise.
Leading organizations don’t rely on bigger AI budgets. They align technology with clear intent, prioritize well-timed, empathy-led content over volume, treat customer well-being as part of the brand promise, and protect their teams’ energy as rigorously as performance.
Creating value starts with protecting the people who create it. Noah didn’t survive the flood by ignoring it or fearing it. He paid attention, took action, and built with intention — something designed to carry what mattered most: his people, his purpose, his peace, and his future. That’s the kind of leadership this moment calls for.
You don’t have to figure this out alone. The tools are here. The framework is yours. The decision is whether to build before the pressure hits or react once it’s already underway.
You’ve done everything right. You have a fast website with comprehensive content, pages ranking in the top 10, and a strong backlink profile. Yet when you search the query you rank for, your site doesn’t appear in Google’s corresponding AI Overview.
This is a retrieval problem, not a ranking issue. And the difference between the two is the most important shift SEOs need to understand right now.
AI Overviews don’t work like traditional organic rankings. Instead of considering which page has the most signals, AI Overviews look for the page that gives the cleanest, most usable answer.
If your content doesn’t meet that standard, your traditional search ranking is irrelevant. Here’s what’s going wrong, and how to fix it so your content appears in more AI Overviews.
The ranking-citation gap is real — and growing
The overlap between AI Overview citations and organic rankings grew from 32.3% to 54.5% between May 2024 and September 2025, according to a BrightEdge study.
This trend sounds encouraging. But it also means that even at peak convergence, nearly half of all AI Overview citations come from pages that don’t rank at the top of organic results. Google actively bypasses higher-ranking pages when it finds content that better serves the AI Overview format.
The pattern varies sharply by sector, though. BrightEdge data shows that in ecommerce, the overlap barely changed, remaining essentially flat over the entire 16-month period. And in your money or your life (YMYL) categories like healthcare, insurance, and education, the overlap between AI Overview citations and organic rankings ranges from 68% to 75%.
Ranking and visibility are no longer the same thing. You can rank second and be invisible. Or, you can rank on the second page and be the first thing a searcher reads.
1. Your content answers the wrong version of the question
Informational queries — specifically long-tail and conversational searches — typically trigger AI Overviews, accounting for 57% of them, while commercial queries trigger this AI feature far less frequently, according to Semrush research.
Google’s AI engine looks for content that matches what the user asks, not just the keyword you’ve targeted. So, an AI Overview answering the query “what’s the best way to manage a remote team’s workload?” probably won’t cite a page that ranks for the keyword “project management software” and leads with features and pricing.
2. You’ve buried the answer
If your introduction spends three paragraphs establishing context, warming up the reader, or restating the question before answering it, the retrieval system moves on. It seeks information it can extract cleanly. If that answer isn’t near the top of the page, the system skips that page.
3. Your structure is opaque to AI systems
Traditional SEO is built around comprehensive long-form content: 3,000-word guides covering every angle of a topic, written for readers who scroll and skim.
AI retrieval systems don’t work the same way. They need to identify discrete, self-contained answers within your content.
That requires clear heading hierarchies, short paragraphs, and content that AI systems can extract. A section under a specific heading should completely answer the question posed in that heading, without requiring the surrounding context to make sense.
Content written as one long, unbroken narrative is harder for AI systems to parse. Even if every word is accurate and authoritative, it may not earn a citation if the structure doesn’t help the retrieval system identify individual answer units.
4. Your E-E-A-T signals aren’t visible at the content level
Google has been clear that experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) signals are important for content quality in traditional search. They likely matter for AI Overviews, too. But these signals need to appear in the content itself, not just in your domain profile or link graph.
Strong domain authority counts for less than you’d think if the content itself carries no credibility signals.
Who wrote it?
Where did the data come from?
Is there anything here that couldn’t have been written by someone who’d never worked in this field?
A retrieval system evaluating an individual page doesn’t know your domain’s track record. The page must make the case for itself.
Content-level E-E-A-T signals are particularly important in YMYL categories, where AI Overviews are selective about sources because the risk of misinformation is higher.
5. You’re targeting queries that don’t trigger AI Overviews
Before optimizing your content for AI engines, it’s worth checking whether your target queries trigger AI Overviews at all. As of late 2025, AI Overviews appear in 16% of search results, though that figure isn’t evenly distributed across query types.
Transactional queries, navigational searches, branded queries, and highly local searches are far less likely to trigger an AI Overview. If most of your traffic comes from commercial or transactional keywords, the lack of AI Overview citation may not be a content problem. It may simply be that those query types are less likely to generate overviews in the first place.
What the data tells us about the impact of this shift
The stakes are significant. Research by Seer Interactive shows that organic click-through rates (CTRs) for informational queries that displayed AI Overviews dropped 61%, from 1.76% to 0.61%, between June 2024 and September 2025. Paid CTR fell even further, from 19.7% to 6.34%.
But the same research reveals a critical asymmetry: Brands cited in AI Overviews saw 35% higher organic CTR and 91% higher paid CTR than when they weren’t cited. A citation in an AI Overview doesn’t just protect you from a CTR decline. It actively amplifies your visibility.
The Pew Research Center’s study of searches by U.S. adults in March 2025 found that only 8% of users who encountered an AI Overview clicked a traditional search result, compared to 15% who clicked when no overview appeared. And 26% of searches with AI Overviews resulted in no clicks at all.
If AI Overviews appear for your most valuable queries and you aren’t cited, you aren’t just missing out on the overview. You’re losing clicks you previously received from the organic listing underneath it.
How to optimize for retrieval, not just rankings
These trends require you to adjust how you think about content structure and intent. Here’s where to focus:
Rewrite your introductions: Your first paragraph should directly and completely answer the primary question of the page. Save context and elaboration for later sections. Write as if the first 100 words of your page represent a standalone answer.
Restructure your headings: Each heading should be a question or a complete, specific claim. The following section should fully answer or support that heading without requiring the reader to review previous sections. Think of each section as a self-contained answer unit.
Add explicit expertise signals: Include author attribution with credentials, first-person experience language, original data, and links to primary sources and original research. These signals matter at the content level, not just at the domain level.
Audit your query triggers: Manually test your target queries in Google to see which ones actually generate AI Overviews. For those that do, study how the cited sources are structured, the length of the cited sections, and the format of the answer. Use that as your editorial brief.
Expand your topical coverage: AI Overviews favor sources that demonstrate breadth of knowledge across a topic, not just single-page depth. Focus on answering several related questions well instead of building one exceptional page surrounded by thin content.
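Some of these checks can be automated. Below is a rough structural audit in Python (using BeautifulSoup, so pip install beautifulsoup4); the thresholds are heuristics drawn from the list above, not retrieval-system rules:

```python
from bs4 import BeautifulSoup

def audit_answer_units(html: str) -> None:
    """Crude retrieval-readiness checks; flags structure, not answer quality."""
    soup = BeautifulSoup(html, "html.parser")

    # How much preamble sits before the first section heading?
    intro_words = 0
    for el in soup.find_all(["p", "h2"]):
        if el.name == "h2":
            break
        intro_words += len(el.get_text().split())
    print(f"Words before first h2: {intro_words} (aim to answer within ~100)")

    # Does each h2/h3 have a following paragraph sibling at all?
    for heading in soup.find_all(["h2", "h3"]):
        if heading.find_next_sibling("p") is None:
            print(f"No paragraph follows heading: {heading.get_text(strip=True)!r}")
```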
What AI Overviews represent is something that’s been discussed for years, but few have truly prepared for: the separation of content quality from ranking signals.
For two decades, we used rankings as a proxy for quality. High-ranking content was, by definition, good enough.
But that assumption no longer holds. Ranking in traditional search indicates that your brand has authority and that your page is relevant to the search query. It says nothing about whether your content is structured in a way that AI retrieval systems can use.
Visibility now goes to whoever understands how AI systems identify, extract, and surface answers. A strong backlink profile won’t help you if the answer is buried on page three of a 4,000-word guide.
Ranking in the top 10 is still worth pursuing. But it’s no longer the whole game.
Your paid social operation is on fire. You know how your audience thinks, the creative process is dialed in, and the results get better every year. Leadership greenlights an expansion to Google Ads — a new channel and, critically, a new source of revenue.
As it turns out, applying that same strategy really just buys you an express ticket to a very difficult conversation.
Google rewards a different kind of thinking. Intent signals and campaign logic are different, and the mistakes that eat at your budget don’t always make themselves clear. Brands that apply their existing Meta playbook often find themselves looking at shiny dashboards and dull balance sheets.
These six common mistakes tend to do the most damage before anyone realizes what’s happening. They’re what we see most often when ecommerce brands come to us after making the move to Google — and they can all be reversed.
Mistake 1: Treating Google like a retention channel
You can definitely use Google Ads to support retention and brand defense. The problem is when that becomes your whole strategy.
We see this regularly with brands new to the platform who launch directly into Performance Max. Early ROAS looks strong, and everyone’s happy. But a few months in, someone asks the right question: Are we actually growing, or paying to capture purchases that were going to happen anyway?
One client we worked with came to us with branded search and retargeting doing the heavy lifting inside PMax – essentially a tax on demand that had already been created elsewhere. Revenue flatlined because, while the ad spend was real, growth was not.
Net-new customer acquisition requires a different setup.
Shopping campaigns structured to surface products to people who have never heard of the brand.
Search campaigns built around non-branded, high-intent keywords.
Layered PMax configurations that keep the system from defaulting to the easiest conversions.
Google has enormous reach into new audiences; treating it purely as a closing channel leaves most of that opportunity untouched.
Mistake 2: Not knowing how to get the most out of Google’s core levers
Paid social experience transfers to Google in some ways, but there are four areas where we see the biggest knowledge gaps.
Search intent
Ads on social media interrupt a passive moment. Ads in search engines meet people as they’re actively looking for something you offer. This changes campaign structure, ad copy, and keyword targeting.
Upper-funnel terms and lower-funnel terms require different approaches, bids, and landing pages. Collapsing them into a single campaign structure is one of the fastest ways to dilute intent and waste budget on traffic that was never going to convert.
Data feed optimization
For ecommerce brands running Shopping and retail Performance Max, the product feed is the foundation everything else is built on. Weak titles, missing attributes, and poor categorization limit how often your products show up and who sees them.
Most brands (including Google-native ones) underinvest here because the work is unglamorous. But a well-optimized feed consistently outperforms one that’s neglected after setup.
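A starting point for that audit can be a few dozen lines of Python over a feed export. A minimal sketch; the column names (title, description, gtin, google_product_category) and the 30-character title floor are assumptions to adapt to your own feed:

```python
import csv

# Feed fields to require; adjust to your Merchant Center feed export.
REQUIRED_FIELDS = ("title", "description", "gtin", "google_product_category")

def audit_feed(path: str) -> None:
    """Flag the weak spots named above: missing attributes and thin titles."""
    with open(path, newline="", encoding="utf-8") as f:
        for line_no, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
            for field in REQUIRED_FIELDS:
                if not (row.get(field) or "").strip():
                    print(f"Row {line_no}: missing {field}")
            if len(row.get("title", "")) < 30:  # assumed floor for a descriptive title
                print(f"Row {line_no}: title likely too thin to match queries")

audit_feed("merchant_feed.csv")  # hypothetical export filename
```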
Keyword research
Paid search is a keyword-driven channel, which makes keyword strategy its own discipline. Understand match types, search volume, commercial intent, and the relationship between what people type and what they actually want. This takes time to develop, but brands that skip this step usually over-restrict their reach or bleed spend on irrelevant traffic.
Landing pages
Sending high-intent but unfamiliar visitors from Google straight to a product page often underperforms. A more engaging landing page format, like an advertorial, gives that traffic context and builds trust before asking for the sale.
Brands coming from paid social often overlook this because the funnel architecture they’re used to doesn’t require it.
Mistake 3: Letting account disruptions reset learning
Google’s algorithms need consistent data to make the best decisions for your account. But every time a campaign goes dark — for a day or a week — there’s a risk that the learning resets. What feels like a minor admin issue can mean weeks of degraded performance and wasted ad spend.
Two types of disruption come up more than any other.
Payments: Brands switching to invoice billing or changing card details mid-flight will sometimes see campaigns pause without realizing it until the damage is done. A lapsed payment that takes three days to resolve can cost far more than the bill itself once you factor in recovery time.
Tracking and feed integrity: A broken pixel means no conversion data and forces Smart Bidding to optimize blind. A feed error in Merchant Center means products disappear from Shopping and Performance Max. Neither failure is loud, and both tend to surface slowly as declining performance that gets misattributed.
They are both preventable with automated alerts, weekly feed audits, and a person or AI agent responsible for monitoring account health between reporting cycles. The cost of oversight is low compared to what happens if you only discover issues after the fact.
Mistake 4: Building a campaign structure that’s too granular
The instinct among detail-oriented advertisers is to segment everything because it feels like control on the surface.
One campaign per product category.
One ad group per keyword.
Separate budgets for every audience.
But Google’s automation needs data to make good decisions. When you spread your budget across too many campaigns, each one operates on thin resources and even thinner information. Smart Bidding can’t optimize effectively without sufficient conversion volume, so campaigns stuck below that threshold tend to underperform and stay there.
By over-segmenting, you’ve created the appearance of precision while actually limiting the system’s ability to learn.
The same logic applies to budget. Ten campaigns with a modest shared budget will almost always produce worse results than three well-funded ones. Google needs room to test, adjust, and find the traffic worth paying for. Fragmented budgets don’t allow it to do that.
Build a tighter structure with fewer campaigns, clearly defined goals, and enough budget to compete. This gives the algorithm what it needs while keeping the account manageable enough to oversee effectively.
Mistake 5: Leaving campaigns on Max Conversion Value with no ROAS targets
Max Conversion Value is a Smart Bidding strategy that tells Google to spend your budget in whatever way generates the highest total conversion amount – no ceiling, no floor, no efficiency guardrail. Left unsupervised, it will find conversions, but won’t care what it costs to get them.
For brands new to Google Ads, this setting can trick you into thinking you’re crushing it. Conversion value climbs, making the account appear healthy. The problem surfaces when you look at what you actually spent to generate that value.
Without a target ROAS, Google has no efficiency constraint and optimizes for volume, not profitability. But the fix is straightforward.
Once you have enough conversion data, set a realistic target.
A ROAS goal gives the algorithm a constraint, and shifts the objective from spending budget to spending it well.
Targets set too aggressively too early can starve campaigns of traffic before they’ve had a chance to learn.
Exercise patience, and a willingness to adjust gradually rather than chasing the ideal number from day one.
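One conservative way to set that first target: start slightly below the ROAS the campaign already achieves, then ratchet up. A sketch, where the 0.85 headroom factor is an assumption rather than a platform recommendation:

```python
def initial_troas_target(conv_value_30d: float, cost_30d: float,
                         headroom: float = 0.85) -> float:
    """Open tROAS a notch below trailing ROAS so learning isn't starved."""
    trailing_roas = conv_value_30d / cost_30d
    return round(trailing_roas * headroom, 2)

# e.g., $48,000 conversion value on $12,000 spend is a trailing ROAS of 4.0,
# so open with a target near 3.4 and tighten gradually as performance holds.
print(initial_troas_target(48_000, 12_000))  # 3.4
```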
Mistake 6: Underfunding campaigns and keeping them stuck in learning
When you launch a Google campaign or make a significant change (like doubling the budget), it enters a new learning period. This is the window for gathering data, testing different auctions, and calibrating toward the conversion patterns you’ve defined.
It’s a normal part of how the platform works, and every campaign goes through it.
But the learning period requires a minimum volume of conversions to complete. Google typically needs around 30-50 conversion events in a short window before bidding stabilizes. A campaign that’s underfunded for this milestone will stay in learning indefinitely.
It’s a common trap for brands that are cautious when testing Google.
You run your first campaign on a small budget.
CPAs look inflated and the data is inconclusive, so you either hold back further investment or cut the campaign entirely.
In reality, the campaign never had what it needed to graduate out of the learning phase.
You walk away from net new revenue before you’ve even scratched its surface.
Funding a new campaign adequately from the start — even if it means consolidating into fewer campaigns and chasing fewer goals — gives it the best chance of learning fast and delivering accurate results sooner.
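The budget math follows directly from those thresholds. A rough sizing helper, assuming the ~30-50 conversion guideline above and an expected CPA you’ll revisit once real data arrives:

```python
def min_learning_budget(expected_cpa: float, conversions_needed: int = 50,
                        window_days: int = 30) -> float:
    """Rough daily budget floor to exit the learning phase in one window."""
    return expected_cpa * conversions_needed / window_days

# e.g., at a $40 expected CPA you need roughly $67/day to reach 50
# conversions in 30 days; launching at $20/day keeps the campaign in learning.
print(round(min_learning_budget(40.0), 2))  # 66.67
```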
Adding Google to the mix is the right call: Here’s what to do next
Diversifying away from a single ad platform is one of the smartest moves an ecommerce brand can make once it’s mature enough to fight on two fronts. It decouples growth from any one platform’s algorithm changes, auction dynamics, seasonality, and terms of service.
Adding Google to Meta also gives you access to a different kind of demand that is actively expressed rather than passively targeted, which is a meaningful advantage worth building on.
These six mistakes are not reasons you should avoid Google, but a preventative guide to help you approach it with realistic expectations and enough patience to let the system learn. Treating it like a direct analog of what you’re already doing on Meta will make you leave before seeing what’s truly possible.
Google launched a channel performance timeline view in Performance Max. It gives you a clearer breakdown of how Search, YouTube, Display, and other channels contribute to campaign results over time.
What’s new. A timeline graph shows channel-level contributions over a selected period, paired with investment and performance filters. You can quickly see which channels are pulling their weight — and which aren’t.
In the screenshot, the yellow box highlights the channel performance evolution over time; the pink box (right) highlights the ad-type filters (All Ads, Ads Using Product Lists, Ads Using Video).
Why we care. Performance Max campaigns run across multiple channels at once, making it difficult to see where your budget is most effective. This gives you a timeline view of channel-level contributions — so if YouTube is underperforming while Search drives most conversions, you can see it without digging through exports or relying on guesswork. You can spot channel-level trends earlier and adjust your asset strategy or budget accordingly.
The big picture. This view gives you a more actionable way to evaluate PMax performance without relying solely on Google’s automated decisions.
Bottom line. It’s not full transparency, but it’s a meaningful step in the right direction. You get a cleaner way to spot PMax trend anomalies early and adjust accordingly.
First spotted. This update was first spotted by Axel Falck, Head of Search at Le Mage du SEA, who shared it on LinkedIn.
Tracking your brand’s visibility in AI-powered search is the new frontier of SEO. The tools built to do this are expensive, often starting at $300 to $500 per month and quickly rising from there. For many, that price is a nonstarter, especially when custom testing needs go beyond what off-the-shelf software can handle.
I faced this exact problem. I needed a specific tool, and it didn’t exist at a price I could afford, so I decided to build it myself. I’m not a developer. I spent a weekend talking to an AI agent in plain English, and the result was a working AI search visibility tracker that does exactly what I need.
Below is the guide I wish I’d had when I started: a step-by-step playbook for building your own custom tool, covering the technology, the process, what broke, and how to get it right faster.
The problem: A custom tool for a complex landscape
My goal was to automate an AI engine optimization (AEO) testing protocol. This wasn’t just about checking one or two models. To get a full picture of AI-driven brand visibility, I knew from the start that we had to track five distinct, critical surfaces:
ChatGPT (via API): The most well-known conversational AI.
Claude (via API): A major competitor with a different response style.
Gemini (via API): Google’s conversational model.
Google AI Overviews (via DataForSEO): The AI-generated summaries at the top of search results.
Google AI Mode (via DataForSEO): Google’s conversational search experience.
On top of that, I needed to score the results using a custom 5-point rubric: brand name inclusion, accuracy, correctness of pricing, actionability, and quality of citations. No existing SaaS tool offered this exact combination of surfaces and custom scoring. The only path forward was to build.
Here are a few screenshots of the internal tool as it stands. You can see some of my frustration in the agent chat window.
The method: Using vibe coding to build the tool
This project was built using vibe coding, a way of turning natural language instructions into a working application with an AI agent. You focus on the goal, the “vibe,” and the AI handles the complex code.
This isn’t a fringe concept. With 84% of developers now using AI coding tools and a quarter of Y Combinator’s Winter 2025 startups being built with 95% AI-generated code, this method has become a viable way for non-developers to create powerful internal tools.
You can replicate this entire project with just three things, keeping your monthly cost under $100.
Replit Agent
This is a development environment that lives entirely in your web browser. Its AI agent lets you build and deploy applications just by describing what you want. You don’t need to install anything on your computer. The plan I used costs $20/month.
DataForSEO APIs
This was the backbone of the project. Their APIs let you pull data from all the different AI surfaces through a single, unified system.
You can get responses from models like ChatGPT and Claude, and pull the specific results from Google’s AI Mode and AI Overviews. It has pay-as-you-go pricing, so you only pay for what you use.
Direct LLM APIs (optional but recommended)
I also set up direct connections to the APIs for OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini). This was useful for double-checking results and debugging when something seemed off.
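To make the direct-API leg concrete, here’s a minimal probe in Python using the official OpenAI and Anthropic SDKs. The prompt, brand, and model names are illustrative (model IDs change over time), and the check shown is just the first rubric item, brand-name inclusion:

```python
# pip install openai anthropic  (API keys read from environment variables)
from openai import OpenAI
import anthropic

PROMPT = "What are the best tools for tracking AI search visibility?"
BRAND = "ExampleBrand"  # hypothetical brand under test

gpt_answer = OpenAI().chat.completions.create(
    model="gpt-4o-mini",  # illustrative model ID
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

claude_answer = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

# Score the simplest rubric item: does the response mention the brand?
for surface, answer in (("ChatGPT", gpt_answer), ("Claude", claude_answer)):
    print(f"{surface}: brand mentioned = {BRAND.lower() in answer.lower()}")
```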
The playbook: A step-by-step guide to building your tool
Building with an AI agent is a partnership. The AI will only do what you ask, so your job is to be a clear and effective guide.
Here’s a repeatable framework that will help you avoid the biggest mistakes.
Step 1: Write a requirements document first
Before you even open Replit, create a simple text document that outlines exactly what you need. This is your blueprint. Include:
The core problem you’re solving.
Every feature you want (e.g., CSV upload, custom scoring, data export).
The data you’ll put in, and the reports you want out.
Any APIs you know you’ll need to connect to.
Start your conversation with the AI agent by uploading this document. It will serve as the foundation for the entire build.
Step 2: Ask the AI, ‘What am I missing?’
This is the most important step. After you provide your requirements, the AI has context. Now, ask it to find the blind spots. Use these exact questions:
“What am I not accounting for in this plan?”
“What technical issues should I know about?”
“How should data be stored so my results don’t disappear?”
That last question is critical. I didn’t ask it, and I lost a whole batch of test results because the agent hadn’t built a database to save them.
Step 3: Build one feature at a time and test it
Don’t ask the AI to build everything at once. Give it one small task, like “build a screen where I can upload a CSV file of prompts.”
Once the agent says it’s done, test that single feature. Does it work? Great. Now move to the next one.
This incremental approach makes it much easier to find and fix problems.
Step 4: Point the agent at the API documentation
When it’s time to connect to an API like DataForSEO, don’t assume the AI knows how it works. Find the API documentation page for what you’re trying to do, and give the URL directly to the agent.
A simple instruction like, “Read the documentation at this URL to implement the authentication,” will save you hours of frustration. My first attempt at connecting failed because the agent guessed the wrong method.
Step 5: Save working versions
Before you ask for a major new feature, save a copy of your project. In Replit, this is called “forking.” New features can sometimes break old ones.
I learned this when the agent was working on my results table, and it accidentally broke the CSV upload feature that had been working perfectly. Having a saved version makes it easy to go back and see what changed.
Nearly everything will break at some point. That’s part of the process. Here are the most common issues I ran into, and the lessons I learned, so you can be prepared.
1. API authentication fails
The agent will often try a generic method.
Fix: Give the agent the exact URL to the API’s authentication documentation.
2. Results disappear
The agent may not build a database by default, storing data in temporary memory instead.
Fix: In your first step, ask the agent to include a database for persistent storage.
3. API responses don’t show up
You might see data in your API provider’s dashboard, but it’s missing in your app. This is usually a parsing error.
Fix: Copy the raw JSON response from your API provider, and paste it into the chat. Say, “The app isn’t displaying this data. Find the error in the parsing logic.”
4. Model responses are cut short
An LLM like Claude might suddenly start giving one-word answers. This often means the token limit was accidentally changed.
Fix: After any update, run a quick test on all your connected AI surfaces to ensure the basic parameters haven’t changed.
5. API results don’t match the public version
ChatGPT’s public website provides web citations, but the API might not.
Fix: Realize that APIs often have different default settings. You may need to explicitly tell the agent to enable features like web search for the API call.
6. Citation URLs are unusable
Gemini’s API returned long, encoded redirect links instead of the final source URLs.
Fix: Inspect the raw data. You may need to ask the agent to build a post-processing step, like a redirect resolver, to clean up the data.
7. Your app isn’t updated
You build a great new feature, but it doesn’t seem to be working in the live app.
Fix: Understand the difference between your development environment and your production app. You need to explicitly “publish” or “deploy” your changes to make them live.
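For problem 6, the “post-processing step” can be small. A sketch of a redirect resolver with the requests library, assuming the citation links are ordinary HTTP redirects (some wrappers instead encode the target in the URL itself, so inspect the raw data first):

```python
import requests

def resolve_citation(url: str, timeout: float = 10.0) -> str:
    """Follow redirect-wrapped citation links to the final source URL."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.url
    except requests.RequestException:
        return url  # keep the original link rather than dropping the citation

print(resolve_citation("https://example.com/redirect?to=source"))  # hypothetical link
```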
The real costs: Is it worth it?
Building this tool saved me a significant amount of money. Here’s a simple cost comparison against a mid-tier SaaS tool.
DIY tool (my project) vs. SaaS alternative:
Software subscription: ~$20/month (Replit) vs. $500/month.
API usage: ~$60/month (variable) vs. included.
Total monthly cost: ~$80/month vs. $500/month.
The biggest cost is your time. I spent a weekend and several evenings building the first version. However, I now have an asset that I can modify and reuse for any client without my costs increasing.
The hidden costs are real: there’s no customer support, and you are responsible for maintenance. But for many, the savings and customization are worth it.
This approach isn’t for everyone. Here’s a simple guide to help you decide.
Build your own if:
You need a custom testing method that no SaaS tool offers.
You want a white-labeled tool for your agency.
Your budget is tight, but you have the time to invest in the process.
Stick with a SaaS tool if:
Your time is more valuable than the monthly subscription fee.
You need enterprise-level security and dedicated support.
Standard, off-the-shelf features are good enough for your needs.
For many SEOs, the answer is clear. The ability to build a tool that works exactly the way you do, for less than $100 a month, is a game-changer.
The process will be frustrating at times, but you will end up with something that gives you a unique advantage. The era of the practitioner-developer is here. It’s time to start building.
Google Ads added an auto-apply setting to experiments. It’s on by default, so winning variants can go live without review.
How it works. You choose directional results (default) or statistical significance at 80%, 85%, or 95% confidence. One safeguard: if your chosen success metric performs significantly worse in the test arm, the change won’t auto-apply.
Why we care. Experiments are one of the most powerful tools in your account. Automating the apply step can speed up testing, but it removes a checkpoint where you catch unintended consequences before they hit live campaigns.
The catch. Experiments allow only two success metrics. A third metric you care about — one you didn’t or couldn’t select — can decline unnoticed. Guardrails protect what you told Google to watch, not everything that matters.
Bottom line. Auto-apply is a reasonable shortcut for simple tests. For anything consequential, keep manual review. Run the experiment, reach significance, then review full data before you apply changes.
First seen. Google Ads specialist Bob Meijer shared this update on LinkedIn.
Bing appears to be testing an expanded sponsored products section in its shopping results, featuring a double-row carousel that takes up significantly more space than the current format.
The test. The format pairs a large, double-row sponsored carousel with organic cards from individual sites below.
Why we care. If this rolls out broadly, it means more screen space for sponsored products — typically leading to higher visibility and more clicks if you run Microsoft Shopping campaigns. The double-row carousel is also more visually competitive, bringing Bing’s shopping ads closer to Google Shopping’s prominence.
The catch. The test appears limited — not all users see it. Search industry veteran Mordy Oberstein reported a more compact layout, suggesting Bing is still in early testing.
Bottom line. Bing runs many SERP experiments that never fully launch, so watch this one for now. If you run Microsoft Shopping campaigns, monitor impressions for any lift if the format expands.
First spotted. Sachin Patel shared a screenshot of the test on X.
SEO tools were the most replaced martech application in 2025 — but not for the reason you might expect.
According to the 2025 MarTech Replacement Survey, SEO platforms topped the list of replaced tools for the first time, overtaking categories like marketing automation platforms (MAPs), which had led for the past five years.
At first glance, that might suggest instability in SEO. After all, the discipline is being reshaped by LLMs, AI-generated answers, and the rise of zero-click search experiences — all of which challenge traditional keyword tracking and ranking-based workflows.
But the data tells a more nuanced story.
SEO tools: most replaced, but stabilizing
Even though SEO tools were the most replaced category in 2025, they were replaced at a slower rate than in prior years.
In other words, they’re now the most commonly replaced — but also more stable than before.
That shift suggests a maturing category. Rather than widespread churn, teams appear to be consolidating, upgrading, or refining their SEO stacks as search evolves.
Meanwhile, several other major martech categories saw sharper year-over-year declines in replacements:
CRM replacements fell more than 12% from 2024 to 2025, reaching their lowest level in the survey’s history.
MAPs, email platforms, and CMS tools also declined compared to 2024.
Why SEO tools are being replaced
So if SEO tools aren’t being swapped out due to instability, what’s driving the changes?
The survey points to three primary factors:
1. AI capabilities
For the first time, the survey asked about AI’s role in replacement decisions — and the impact was significant.
37.1% cited AI capabilities as an important factor.
33.9% said they wanted AI capabilities when replacing a tool.
This reflects a broader shift in SEO tooling, with platforms rapidly integrating AI for:
Content generation and optimization.
SERP analysis and intent modeling.
Workflow automation.
In many cases, replacing your SEO tool isn’t about abandoning SEO — it’s about upgrading to AI-native capabilities.
2. Cost pressures
Cost has become a major driver of martech replacement decisions, including SEO tools:
43.8% of marketers cited cost reduction as a reason for replacing an application in 2025.
That’s up sharply from 23% in 2024 and 22% in 2023.
This suggests growing pressure to optimize and rationalize your SEO tech stack, especially as you evaluate overlapping functionality across tools.
3. Changing needs in a shifting search landscape
As search behavior changes, so do expectations for SEO platforms.
Traditional rank tracking and keyword monitoring are no longer sufficient on their own. Teams are increasingly looking for tools that can:
Surface insights across AI-driven SERPs
Track visibility beyond clicks
Integrate with broader marketing and data systems
That evolution is likely contributing to replacement activity — even as overall stability increases.
AI is reviving custom-built SEO tools
One of the more notable trends in the 2025 survey is the resurgence of homegrown solutions, including for SEO workflows.
Replacing commercial martech tools with homegrown applications accounted for:
8.1% of replacements in 2025
Up from 3.4% in 2024 and 5% in 2023
This marks a meaningful shift after years of near-total reliance on commercial platforms.
“AI-assisted coding is changing the calculus of build vs. buy,” said martech analyst Scott Brinker. “It’s easier and faster to build than ever before. Companies should still buy applications where they have no comparative advantage. But in cases where they can tailor capabilities to differentiate their operations or customer experience, custom-built software is an increasingly attractive option.”
For SEO teams, this could mean more organizations building:
Custom data pipelines.
Proprietary SERP tracking systems.
AI-driven analysis tools tailored to their specific needs.
Other martech categories show even greater stability
While SEO tools led in total replacements, the broader martech landscape is becoming more stable.
Several major categories saw declining replacement rates in 2025, including:
CRM platforms (down more than 12% year over year)
Marketing automation platforms
Email distribution tools
Content management systems
This suggests that many organizations are settling into core systems while selectively updating areas — like SEO — that are changing faster.
Methodology
Invitations to take the 2025 MarTech Replacement Survey were distributed via email, website, and social media in Q4 2025.
A total of 207 marketers responded. Findings are based on the 154 respondents (74%) who said they had replaced a martech application in the previous 12 months.
AI-powered ad bidding systems are highly sophisticated, but conversion tracking hasn’t kept pace. Ad platforms encourage advertisers to track more actions, while many experts argue for tracking only final outcomes.
Both are partly true. Neither is universally correct.
In practice, both over- and under-signaling can hurt PPC performance. Too many loosely defined micro-conversions introduce noise. Bidding shifts toward easy, low-value actions, inflating reported performance while eroding real results. Too few signals leave the system without enough data to learn.
This dynamic is most visible in Performance Max and Search plus PMax setups, where the system optimizes toward whatever signals it’s given — regardless of whether they reflect real business value.
Here’s what happens when micro-conversions outnumber real conversions, why bidding systems behave this way, and how to build a conversion framework that aligns signal volume with business impact.
The myth of the ‘data-hungry’ PPC algorithm
The idea that algorithms need as much data as possible has been repeated so often that it’s become an assumption. Platform documentation, automated recommendations, and many PPC blog posts reinforce the same message: more signals equal better learning.
Bidding systems require a minimum level of signal density to function, but they don’t benefit from indiscriminate micro-conversion signals. More data isn’t always better data.
Adding low-intent or loosely correlated actions often degrades performance by shifting optimization toward behaviors that don’t correlate with revenue.
Machine learning systems don’t evaluate the strategic relevance of a signal. They evaluate frequency, consistency, and predictability.
When an account includes a mix of high- and low-intent micro-conversions — purchases, add-to-carts, pageviews, video plays, and soft leads — the system doesn’t inherently understand which actions matter most to the business.
Without a clear value hierarchy, the bidding algorithm treats all signals as valid optimization targets. This creates a structural bias toward high-frequency, low-value actions because they’re easier and cheaper to achieve. The result is a bidding pattern that maximizes conversion volume while minimizing business impact.
Why value-based bidding helps, but can’t fix everything
Many practitioners advocate for value-based bidding, where each micro-conversion is assigned a relative financial or hierarchical value. In theory, this helps the system understand which signals matter most. You can also instruct the platform to maximize conversion value, which should push the algorithm toward higher-value purchases or sales-qualified leads (SQLs).
But value-based bidding isn’t a complete solution. When too many micro-conversions are included — even with assigned values — the system can still become overwhelmed. A high volume of low-intent signals can dilute intent and distort the value hierarchy.
The issue isn’t just a lack of context.
Every signal becomes part of the optimization math. If the model weighs signals by volume rather than business importance, low-intent micro-conversions will dominate. Assigning values helps clarify priorities, but it can’t override signal imbalance. At a certain point, the math wins.
How PPC bidding follows the path of least resistance
In practice, this shows up as a “path of least resistance” problem.
Even with values assigned, bidding algorithms still optimize toward the signals they’re given. When low-intent micro-conversions are included as Primary actions, the system treats them as efficient ways to increase conversion volume. This isn’t an error. It’s expected behavior for a model designed to maximize conversions within a set budget.
When those signals occur more frequently, the system gravitates toward them. A signal that fires hundreds of times a day will exert more influence than a high-value action that fires only a handful of times per week.
This dynamic is especially visible in PMax. The system evaluates signals across channels, audiences, and placements, and pursues the cheapest, most abundant path to conversion. If a contact page visit or key pageview is treated as a Primary signal, PMax may prioritize it over a purchase or SQL because it’s easier to achieve at scale.
That’s why PMax often reports strong conversion volume and low CPA while revenue remains flat or declines. The system is performing as instructed, but the inputs lack a disciplined signal hierarchy. Value-based bidding improves structure, but without restraint in the number and type of signals, it can’t fully prevent the problem.
When low-value actions are tracked as Primary conversions, platform-reported performance becomes disconnected from business outcomes. Metrics such as CPA, ROAS, and conversion rate may improve, but those gains are often illusory.
For example:
A campaign may show a 40% reduction in CPA because the system is optimizing toward pageviews rather than purchases.
ROAS may increase because the system attributes inflated value to actions that don’t correlate with revenue.
Conversion volume may spike due to high-frequency micro-conversions.
These patterns create a false sense of success, leading advertisers to scale budgets prematurely and erode contribution margin.
Diluted intent and double-counting
When multiple micro-conversions are tracked as Primary, a single user journey can generate multiple wins for the algorithm.
For example, a user who views a product page, signs up for a newsletter, and adds an item to cart may be counted as three conversions from a single click. If values are assigned to each step, conversion value and ROAS become inflated as well.
This inflates conversion volume, inflates conversion value, and distorts bidding behavior. The system interprets this as a high-value user and begins overbidding on similar traffic, even if the user never completes a purchase.
In many accounts, micro-conversions outnumber real conversions by a ratio of 500 to 1 or more. This imbalance has significant implications for bidding behavior.
When frequency overwhelms value
If an account records 500 pageviews, 200 add-to-carts, 50 lead form starts, 10 purchases, and all actions are treated as Primary, the system receives 760 signals for every 10 that actually matter.
Without distinct values, the algorithm can’t differentiate between a $0.05 action and a $500 action. It optimizes toward the most frequent signals because they provide the clearest path to increasing conversion volume.
Even when values are assigned, overvaluing micro-conversions teaches the algorithm to pursue easy wins. The result is a maximized conversion value metric that looks strong in the dashboard but isn’t reflected in actual sales.
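You can put a number on that imbalance straight from your conversion-action report. A tiny audit using the example counts above:

```python
# Quantify the signal imbalance using the example counts above.
primary_actions = {
    "pageview": 500,
    "add_to_cart": 200,
    "lead_form_start": 50,
    "purchase": 10,  # the only action tied directly to revenue here
}
REAL_ACTIONS = {"purchase"}

micro = sum(n for name, n in primary_actions.items() if name not in REAL_ACTIONS)
real = sum(n for name, n in primary_actions.items() if name in REAL_ACTIONS)
print(f"{micro} micro-signals per {real} real conversions ({micro / real:.0f}:1)")
# -> 750 micro-signals per 10 real conversions (75:1)
```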
The consequences of signal imbalance
When micro-conversions dominate the signal mix:
Bidding shifts toward low-intent traffic because it produces more conversions.
Budgets are allocated inefficiently as the system chases cheap signals.
Real ROAS declines, even as platform-reported ROAS appears strong.
Scaling becomes risky because the system is optimizing toward the wrong outcomes.
That’s why accounts with high micro-conversion volume often show strong platform metrics but weak financial performance.
When micro‑conversions stop helping
Micro-conversions are useful when an account lacks enough real conversion volume to support stable bidding. However, once a campaign consistently reaches 30 to 60 real conversions per month, they no longer provide meaningful benefit.
At that point, the system has enough high-quality data to optimize effectively. Continuing to rely on micro-conversions introduces unnecessary noise and increases the risk of misaligned bidding.
This is the point to transition from tCPA to tROAS and let real revenue guide optimization.
A four-part litmus test for Primary conversions
Primary actions influence bidding, while Secondary actions provide visibility without affecting optimization. This four-part litmus test helps determine which actions should be treated as Primary.
1. The volume threshold
Micro-conversions should be used only when real conversion volume isn’t sufficient to support stable bidding. As a general guideline:
Below 30 real conversions per month: A high-intent micro-conversion may be needed to give the system enough data.
30 to 60 real conversions per month: Begin reducing reliance on micro-conversions.
60 or more real conversions per month: Remove micro-conversions from Primary status and rely on revenue-based optimization.
This threshold ensures micro-conversions serve as a temporary bridge, not a permanent crutch.
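Expressed as a checkpoint you can run against each campaign’s trailing 30-day volume, the guideline looks like this (thresholds straight from the list above):

```python
def micro_conversion_stance(real_conversions_30d: int) -> str:
    """Map monthly real-conversion volume to the thresholds above."""
    if real_conversions_30d < 30:
        return "Bridge: one high-intent micro-conversion may be needed."
    if real_conversions_30d < 60:
        return "Wean: begin reducing reliance on micro-conversions."
    return "Graduate: drop micro-conversions from Primary; optimize on revenue."

for volume in (12, 45, 90):
    print(volume, "->", micro_conversion_stance(volume))
```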
2. The necessary step test
A Primary action should represent a required step in the conversion journey, such as:
Add to cart.
Begin checkout.
Start lead form.
Actions that aren’t required steps — such as contact page visits, whitepaper downloads, or time on site — shouldn’t be treated as Primary. These may indicate interest, but they don’t reliably predict revenue.
3. The valuation test
If an action can’t be assigned a realistic financial value, it shouldn’t be used as a Primary conversion. Assigning arbitrary values introduces risk and can distort bidding behavior.
Actions such as time on site or scroll depth fail this test because they don’t consistently correlate with revenue. However, if CRM data shows a reliable statistical correlation with revenue, that can justify including the action.
4. The simplicity test
Even if multiple actions pass the first three tests, only the strongest one or two should be designated as Primary. Including too many Primary actions increases the risk of double-counting and overbidding.
A streamlined Primary set ensures the system focuses on the most meaningful signals.
Use Secondary conversions as a diagnostic tool
Secondary conversions provide visibility into user behavior without influencing bidding. They’re a useful diagnostic tool for understanding funnel performance and evaluating new signals.
Visibility without optimization risk
Tracking actions such as newsletter signups, video views, or soft leads as Secondary lets you monitor engagement without shifting bidding toward low-value behaviors.
This approach preserves data integrity while maintaining control over optimization.
Funnel analysis and bottleneck identification
Secondary conversions reveal where users drop off in the funnel. For example:
High Add-to-Cart volume but low purchase volume indicates checkout friction.
High MQL volume but low SQL volume suggests targeting or qualification issues.
These insights support more informed optimization decisions.
Safe testing environment
New signals should be tracked as Secondary for several weeks before being considered for Primary status. This allows you to evaluate frequency, correlation with revenue, stability, and predictive value.
Only signals that demonstrate consistent value should be promoted to Primary.
Assign micro-conversion values using a safety discount
When micro-conversions are used, they must be assigned values that reflect their true contribution to revenue. Overvaluing micro-conversions is a common cause of inflated platform performance and misaligned bidding.
Calculating baseline value
The baseline value of a micro-conversion is determined by:
Baseline value = Conversion rate to sale × Average order value (AOV) or profit
For example:
Ecommerce: If 25% of add-to-carts convert and AOV is $1,600, the baseline value is $400.
Lead generation: If 10% of demo requests convert to $5,000 profit, the baseline value is $500.
Applying the 25% safety discount
The baseline value shouldn’t be used directly. Instead, apply a 25% reduction:
$400 becomes $300.
$500 becomes $375.
This discount helps prevent overbidding by ensuring the system doesn’t overvalue micro-conversions relative to actual revenue.
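As a sanity check, the math above fits in a few lines. Here's a minimal sketch, assuming Python and using the article's example numbers:

# Baseline value = conversion rate to sale x value per sale,
# then apply the 25% safety discount recommended above.
def discounted_micro_value(rate_to_sale, value_per_sale, safety_discount=0.25):
    baseline = rate_to_sale * value_per_sale
    return baseline * (1 - safety_discount)

# Ecommerce: 25% of add-to-carts convert at a $1,600 AOV -> $400 baseline, $300 discounted
print(discounted_micro_value(0.25, 1600))  # 300.0
# Lead gen: 10% of demo requests convert to $5,000 profit -> $500 baseline, $375 discounted
print(discounted_micro_value(0.10, 5000))  # 375.0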
Undervaluing is safer than overvaluing
Undervaluing micro-conversions may slightly slow learning, but it doesn’t distort bidding. Overvaluing them can push the system toward low-intent traffic, leading to rapid budget misallocation.
The safety discount provides a buffer that protects contribution margin while still supplying useful data.
Where PPC experts draw the line on micro-conversions
Practitioners consistently point to the same principle: signal discipline matters more than signal volume.
Julie Friedman Bacchini emphasizes that every conversion action becomes a signal the system optimizes toward. Using more than one Primary action introduces ambiguity — “it’s suddenly muddier” — and skipping values makes it easier for the system to latch onto lower-value signals. Values don’t need to be exact, but they must be relative.
She also notes that micro-conversions can help low-volume campaigns reach data thresholds, but they aren’t a substitute for real Primary conversions. Removing them later can mean “starting over to a large extent on system learning.”
Jordan Brunelle takes a similarly disciplined approach: “There can definitely be too many.” He recommends starting with one strong signal of intent and watching the ratio between micro-conversions and real outcomes. If volume is high but outcomes are low, it often signals a targeting or signal issue.
Signal discipline is the real competitive advantage
The debate around micro-conversions often focuses on quantity. But the real differentiator isn't volume; it's discipline.
Bidding systems optimize toward the signals they’re given. When the signal mix is cluttered, performance drifts. When it’s clear and intentional, the system aligns with real business outcomes.
Micro-conversions should be selectively used and continuously evaluated. Start with a simple audit:
Identify all Primary conversions. If more than two or three actions are Primary, the account is likely over-signaled.
Apply the litmus test. Remove any Primary actions that fail the volume, necessary step, valuation, or simplicity tests, and move nonessential actions to Secondary.
Assign conservative values to remaining micro-conversions, using the safety discount to avoid overbidding.
Monitor performance for 30 days, focusing on revenue, contribution margin, and signal distribution.
Micro-conversions should be a temporary bridge. Once real conversion volume is sufficient, optimization should be guided by revenue. A disciplined signal architecture gives automation what it needs to perform as intended: efficient, predictable, and aligned with real business outcomes.
If you’re a lawyer, college administrator, or financial services provider, you’ve likely seen the frustrating “Eligible (Limited)” status in your Google Ads account. It can feel like you’re fighting Google with one hand tied behind your back when your remarketing lists, exact match keywords, and more don’t work as intended.
While it might feel like Google Ads is out to get you when you operate in a so-called “sensitive interest category,” there are specific reasons for these rules. More importantly, there are specific ways to succeed despite them.
This article will cover what the personalized advertising policies are, what they mean for your account, and five specific tactics you can use to succeed with Google Ads.
Why does Google have personalized advertising policies?
Google provides detailed explanations in its official policy documentation, but it comes down to two things: legal requirements and ethical standards.
In the United States, for example, the Fair Housing Act and employment laws prevent discrimination based on age, gender, or location. If you’re advertising a job opening or a new apartment complex, Google can’t allow you to exclude people based on those demographics because doing so would be against the law.
Then there’s the ethical side. Imagine you’re running a rehab center. If someone visits your site, Google’s “sensitive interest” policy prevents you from following them around the internet with targeted banner ads like, “Still struggling with addiction? Come to our clinic.”
That kind of remarketing is intrusive and, frankly, predatory when it targets someone’s health and struggles. To protect the user experience and maintain a sense of privacy, Google limits how personal data can be used in these high-stakes industries.
What can’t you do in a sensitive interest category?
If you fall into one of these categories — housing, employment, credit, healthcare, or legal services — the biggest impact is usually on your audience targeting.
Here’s what you can’t use:
Website or App Remarketing Lists, including the Google-engaged audience: You can’t target people who have previously visited your website or used your app.
Customer Match: You can’t upload your own email lists or phone numbers to target existing clients.
YouTube Audiences: You can’t target people based on how they’ve interacted with your videos.
Custom Segments: You aren’t allowed to build specialized audiences based on specific search terms or types of websites people visit.
For certain categories in certain countries, like housing, credit, and employment in the United States, there’s further “demographic stripping” — you can’t target by age, gender, parental status, or ZIP code. Your Smart Bidding strategies won’t use these signals as inputs either.
The good news: What can you do in a sensitive interest category?
It’s easy to focus on what’s gone, but what still works is a much longer list. Even in a restricted industry, you still have access to the core engine of Google Ads. You can still use:
Keywords, feeds, and keywordless technology: These rely on intent (queries) rather than identity, so they are perfectly fine in Search, Shopping, and Performance Max.
Google’s audiences: Affinities, In-Market, Detailed demographics, and Life Events segments are still fully at your disposal, where eligible, in Demand Gen, Display, Video, Search, and Shopping.
Optimized targeting: Google’s AI can still find people likely to convert based on your historical converters, in Demand Gen, Display, and Performance Max.
Content Targeting: You can choose to show your ads on specific keywords, topics, and placements in Display and Video campaigns.
Conversion tracking: Yes, you can still track conversions and use features like Enhanced Conversions, Offline Conversion Import, and Consent Mode. While your internal legal team may have reservations or restrictions around your website tracking, Google’s Personalized advertising policy doesn’t restrict any conversion tracking.
5 strategies to win in sensitive categories
If you want to move the needle without relying on remarketing, you need to rethink your account structure and messaging. Here are five things you can do right now.
1. The “Separate Domain” strategy
If your business offers a mix of services — some sensitive, some not — don’t let the sensitive ones “poison” your whole account. Think of a spa that offers haircuts, pedicures, and Botox. Haircuts are fine; Botox is a medical procedure that triggers sensitive category restrictions.
If you put them all on one site, your entire remarketing capability might get shut down. Consider putting the sensitive service on a separate domain and a separate Google Ads account. This lets you use every available tool for your main business while the sensitive portion operates under the necessary restrictions.
2. Choose Demand Gen over Display
If you want to use image or video ads, use Demand Gen instead of the standard Display Network. In my experience, Demand Gen delivers higher-quality audiences and tends to perform better in restricted niches.
3. Lean into Phrase and Broad Match
You might be tempted to stick to Exact Match keywords to keep things tight. However, in sensitive categories, Google may restrict ads on very narrow, specific queries for privacy reasons. If your Exact Match keywords aren’t getting impressions, try Phrase or Broad Match. This gives the algorithm more room to find users searching for the same thing with slightly different phrasing that may be less restricted.
Think of it like fishing: if you can’t use a spear, use a net. You’ll catch some fish you don’t want, but that tradeoff helps you catch the ones you do want more easily.
4. Feed the AI with offline conversion tracking
Most businesses in these categories, such as law firms or banks, don’t make sales on their websites. The website generates a lead, and the sale happens over the phone or in an office.
If you want Google to find better users, you must feed that real-world data back into the system. Use Offline Conversion Tracking (OCT) to show Google which leads became customers. Even if you must navigate HIPAA or other privacy regulations, there are ways to do this safely.
Consult your legal team, but don’t skip this step. It’s the best way to train the algorithm when you can’t use your own audiences and to ensure Smart Bidding works at its full potential.
5. Creative-led targeting
When you can’t tell Google who to target with a list, you have to tell the user who the ad is for through your creative. Your headlines and images should qualify the lead.
Be specific in your copy. For example, instead of “Need a Lawyer?” try “Defense Attorney for Small Business.” This attracts your target audience and encourages people who aren’t a fit to scroll past, saving you money and improving your conversion rate.
Running Google Ads in a sensitive category is a challenge, but it’s far from impossible. By shifting your focus from who the person is to what they’re looking for and how you speak to them, you can still drive incredible results.
This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.
AI has changed how I work after nearly two decades in digital marketing. The shift has been meaningful, freeing up time, reducing the grinding parts of the job, and making some genuinely hard tasks faster.
That doesn’t mean it does the work for you, transforms everything overnight, or saves you 40 hours a week. In real-world SEO, with real clients and real deadlines, it’s a tool that makes parts of the job easier, not something that replaces the work itself.
Here are 20 ways I actually use it. Some are specific to SEO. Some are broader, but relevant to anyone working in the industry. All of them are practical, tested, and honest about their limitations.
Content creation and copywriting
1. Writing first drafts
The single best way to use AI for content is to stop expecting it to produce something publishable and start treating it as a very fast first-draft machine.
Feed it your brief, your target keyword, your audience, and your angle. Get a structure back.
Then rewrite it in your voice. Add in the expertise that only you know, not a vanilla version of what’s online.
The content AI produces out of the box is average. Your job is to make it good. Reference real-life stories, case studies, and statistics, and showcase your personal viewpoint and expertise.
The time savings are in not starting from a blank page.
2. Generating meta title and description variations
Give Claude or ChatGPT your target keyword, page topic, and character limits. Ask for 10 variations of your meta title and descriptions. You’ll use one, maybe combine two, but the process takes two minutes instead of 20. For large sites with hundreds of pages, this alone is worth the subscription.
Many tools allow you to upload CSV files, add AI’s suggested ideas, and download them for review. Don’t skip this step. A human eye is where the value sits.
3. Refreshing underperforming content
Paste an existing page or blog post that has dropped in rankings. Ask AI to identify what’s missing, what could be expanded, and what feels outdated.
It won’t always be right, but it gives you a starting point instead of reading the whole thing yourself with fresh eyes you don’t have at 4 p.m. on a Thursday.
Make sure to give context. Long prompts with lots of detail will produce much better results than pasting a page in cold.
4. Generating FAQ sections
Prompt AI to generate the 10 most common questions for your target keyword. Cross-reference with People Also Ask and your own research.
Answer them, and you now have an FAQ section, featured snippet opportunities, and a content gap analysis in about 10 minutes.
5. Writing alt text at scale
Nobody enjoys writing alt text for 200 product images. Describe the image, give it the context of the page it sits on, and include the target keyword. Then ask for alt text that’s descriptive and naturally includes the term where relevant. It’s not glamorous, but it’s necessary and faster.
You can also run a website through Screaming Frog, export it to a CSV file, upload it to your AI of choice, and ask it to write the alt text. This only works well if the file names are descriptive, and again, a human eye is key. This is about increasing speed, rather than handing it over to AI completely.
Technical SEO
6. Troubleshooting technical issues
Not everyone working in SEO has a developer background. AI is useful for:
Translating technical error messages.
Explaining what a server log is telling you.
Helping you understand why a page is excluded from indexing.
Paste in the output, ask it to explain it in plain English, and then ask what the fix should be. Verify the answer, but it gets you most of the way there.
7. Writing schema markup
Schema is one of those things everyone knows they should be doing more of, and nobody finds especially enjoyable.
Describe the content of your page to your AI of choice, tell it what schema type is relevant (FAQ, Article, LocalBusiness, Product, etc.), and ask it to generate the JSON-LD.
Check it in Google’s Rich Results Test before implementation. This used to take me 20 minutes per page type. Now it takes five.
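For reference, a minimal FAQPage JSON-LD block looks like the sketch below; the question and answer text are placeholders to swap for your own:

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is schema markup?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Structured data added to a page so search engines can understand its content."
    }
  }]
}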
8. Creating regex for Google Search Console
If you use regex in GSC filters and you’re not a developer, AI is your new best friend. Describe what you’re trying to filter, for example, all URLs containing a specific subfolder, or all queries including a particular term, and ask for the regex string.
It gets it right more often than not, and you can ask it to explain the logic so you actually understand what you’re implementing.
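A couple of illustrative patterns (GSC uses RE2 syntax, and its regex filter matches partially by default):

All URLs containing a /blog/ subfolder: .*/blog/.*
All queries containing "pricing" or "cost": pricing|cost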
9. Analyzing crawl data with prompts
If you export a crawl from Screaming Frog or Sitebulb and you’re not sure what to prioritize, paste the summary data into your AI tool and ask it to help you identify the highest-priority issues based on the site’s goals.
It won’t replace your expertise, but it’s a useful sounding board when you’re staring at a spreadsheet with 47 issues and a client call in an hour.
Reporting and analysis
10. Writing report narratives
This is one of the most underrated uses of AI in SEO work. You have the data. You have the graphs. What takes time is writing the commentary that explains what happened, why, and what comes next.
Feed AI your key metrics and the context of what was happening that month (algorithm updates, campaign launches, seasonality), and ask it to draft the narrative section of your report. Edit it, add your actual insight, but stop writing it from scratch every month.
You can even upload reports from various data sources and ask it to combine and summarize them. This saves me hours every month when I’m putting together reports.
11. Summarizing long reports for clients
Not every client wants to read a 12-page report. Ask AI to condense your report into a five-bullet executive summary, and place it at the top of the document.
The ones who want details will read on. The ones who don’t will feel informed without asking you to talk them through every chart on the next call.
Ask AI to create the executive summary for someone who doesn’t know anything about SEO, and it’ll give you something simple and easy to understand.
12. Identifying anomalies in data
Paste a table of your keyword rankings or traffic data, and ask AI to flag anything that looks unusual, including significant drops, unexpected gains, or patterns that don’t match the previous period.
It won’t replace proper analysis, but it’s a useful first pass when you’re managing a large amount of information and can’t give every dataset the attention it deserves.
Research and strategy
13. Brainstorming competitor content gaps
List your top three competitors and your own site. Ask AI to help you think through what content topics they’re likely covering that you’re not, based on their positioning and audience.
Then, validate that with actual keyword research tools. AI can’t see competitor data directly, but it’s useful for hypothesis generation before you do the manual work.
14. Understanding a new industry quickly
When you take on a client in an industry you don’t know well, you need to get up to speed fast. Ask your AI to give you a primer on the industry:
Key terminology.
The main players.
The buying cycle.
How people typically search for solutions in this space.
What the common pain points are.
It saves you an embarrassing amount of time in discovery calls.
15. Identifying search intent mismatches
Paste a list of your target keywords and ask AI to categorize them by search intent: informational, navigational, commercial, and transactional. Then compare that against the page type you’re targeting them with.
You’ll almost certainly find mismatches. This is a task that’s straightforward to describe, but tedious to do manually across hundreds of keywords.
Client communication
16. Drafting difficult client emails
Everyone has had to write a difficult email, whether it’s explaining why rankings have dropped, why a deadline was missed, or why the client needs to do something you know they don’t want to do.
These emails take a disproportionate amount of emotional energy to write. Give your AI the situation, the context, and what you need the client to understand or do, and ask for a draft that’s clear, professional, and honest.
Edit it. Send it. Move on.
17. Writing SOPs and process documentation
If you’ve been meaning to document your processes and just haven’t gotten around to it, AI removes the excuse.
Describe a process out loud (or in rough notes), paste it in, and ask for a structured SOP with numbered steps, decision points, and notes.
The first version will need editing, but having a framework to work from is the difference between getting it done and it sitting on the to-do list for another quarter.
18. Preparing for client calls
Before a client call, paste in your recent report data, any issues from the previous month, and what you need to cover.
Ask your AI to help you structure the agenda and anticipate questions the client might ask based on the data. You’ll go into the call more prepared and less likely to be caught off guard.
Productivity and admin
19. Processing your own thinking
This one sounds vague, but it’s one of the ways I use AI most.
When I have a problem I can’t get clear on, a strategy decision I’m going back and forth on, or a piece of work I can’t find the right angle for, I talk it through with Claude (my AI buddy of choice) to clarify my own thinking. It asks questions, reflects things back, and helps me arrive at a point of view faster than I would staring at a blank document.
Ask your AI to be brutally honest with you. Otherwise, it’ll just keep agreeing with you and telling you that you’re truly an expert on every topic.
20. Building prompts you actually reuse
The biggest productivity gain from AI isn’t any individual use. It’s building a library of prompts that work for your specific workflow and reusing them consistently.
Every time you get a good result from an AI tool, save the prompt. Over time, you build a system, rather than starting from scratch every time. This is the thing most people skip, and it’s the thing that compounds.
Top tip: In the paid version of many AI tools, you can create projects and have specific instructions for each one. This is invaluable for saving time by not having to include all of this information in every prompt you use.
None of these tips replace the expertise, judgment, and client relationships that make a good SEO professional.
AI doesn’t know the business the way you do. It doesn’t understand the nuance of an industry, the history of an account, or the particular quirks of a contact you deal with regularly.
AI reduces the time spent on tasks that don’t require that expertise, so you have more of it available for the work that does.
Use AI as a tool. Stay skeptical of the hype. And for the love of good search results, edit everything before it goes anywhere near a client.
Barry Adams recently published “Google Zero is a Lie” in his SEO for Google News newsletter, arguing that the narrative of Google traffic disappearing is false and dangerous.
His data backs it up. Similarweb and Graphite data show only a 2.5% decline in Google traffic to top websites globally. Google still accounts for nearly 20% of all web visits.
The widely cited Chartbeat figure showing a 33% decline? It’s skewed by a handful of large publishers hit by algorithm updates. Publishers who abandon SEO in the face of this panic are making a self-fulfilling prophecy, ceding traffic to competitors who keep optimizing.
He’s right. And he’s looking at the wrong problem.
Humans are still clicking Google results. What has changed is that a growing share of your visitors isn’t human at all.
That non-human share includes everything from scrapers to brute-force login bots. But the fastest-growing segment is AI crawlers.
AI crawlers now represent 51.69% of all crawler traffic, surpassing traditional search engine crawlers at 34.46%, Cloudflare’s 2025 Year in Review found. AI bot crawling grew more than 15x year over year. Cloudflare observed roughly 50 billion AI crawler requests per day by late 2025.
Akamai’s data tells a similar story: AI bot activity surged 300% over the past year, with OpenAI alone accounting for 42.4% of all AI bot requests.
So while Adams is correct that human Google traffic hasn’t collapsed, something else is happening on the other side of the server logs.
Anthropic’s ClaudeBot crawls 23,951 pages for every single referral it sends back to a website. OpenAI’s GPTBot: 1,276 to 1. Training now drives nearly 80% of all AI bot activity, up from 72% the year before.
Compare that to traditional Googlebot, which has always operated on a crawl-and-send-traffic-back model. Google crawls your site, indexes it, and sends 831x more visitors than AI systems. The deal was simple: let me read your content, and I’ll send you people who want it.
Google’s newer AI Mode is worse. Semrush data shows a 93% zero-click rate in those sessions. AI Overviews now trigger on roughly 25-48% of U.S. searches, depending on the dataset, and that number keeps climbing.
And when Google’s AI features do cite sources, they’re increasingly citing themselves. Google.com is the No. 1 cited source in 19 of 20 niches, accounting for 17.42% of all citations, an SE Ranking study of over 1.3 million AI Mode citations found. That tripled from 5.7% in June 2025. Add YouTube and other Google properties, and they make up roughly 20% of all AI Mode sources.
So the old deal is being rewritten even by Google. AI crawlers from other companies skip the pretense entirely: let me read your content so I can answer questions about it without ever sending anyone your way.
The agentic shift
The bot traffic numbers are already here. The next wave is bigger: AI agents acting on behalf of humans.
In 2024, Gartner predicted that traditional search engine traffic would drop 25% by 2026 as AI chatbots and agents handle queries. That prediction is tracking. Its October 2025 strategic predictions go further: 90% of B2B buying will be AI-agent intermediated by 2028, pushing over $15 trillion in B2B spend through AI agent exchanges.
This isn’t theoretical.
Salesforce reported that AI agents influenced 20% of all global orders during Cyber Week 2025, driving $67 billion in sales.
Retailers with AI agents saw 13% sales growth compared to 2% for those without.
Gartner says 40% of enterprise applications will have task-specific AI agents by the end of 2026, up from less than 5% in 2025. eMarketer projects AI platforms will drive $20.9 billion in retail spending in 2026, nearly 4x 2025 figures.
Think about what that looks like in practice. An AI agent researches vendors for a procurement team. It doesn’t see your hero banner. It doesn’t notice your trust badges. It reads your structured data, compares your specs to those of three competitors, and builds a shortlist.
That “visit” might show up in your analytics as a bot hit with a zero-second session duration. Or it might not show up at all.
So what do you optimize for when the visitor is a machine making decisions for a human?
It’s not the same as traditional SEO. And it’s not the same as the AI Overviews optimization most people are focused on right now. AI Overviews are still Google. Still one search engine, still largely the same ranking infrastructure, still (mostly) one answer format.
Agentic SEO is about being useful to software that’s pulling from search APIs, crawling directly, and using LLM reasoning to make recommendations. That software doesn’t care about your page layout. It cares about whether it can extract what it needs.
I think a few things start to matter a lot more.
Structured data becomes load-bearing
Schema markup has always been a “nice to have” for rich snippets. When an AI agent compares your product to three competitors, structured data lets it read your specs without having to guess. Think product schema, FAQ schema, and pricing tables in clean HTML. These go from SEO hygiene to core infrastructure.
Compound questions replace keyword queries
AI agents don’t search for “best CRM for small business.” They ask compound questions: “Which CRM under $50/user/month integrates with QuickBooks and has a mobile app with offline capability?” If your content only answers the first version, you’re invisible to the second.
Freshness and accuracy get audited differently
A human might not notice your pricing page is 8 months stale. An AI agent cross-referencing your pricing against competitors will flag the discrepancy. Or worse, use the outdated number in its recommendation and cost you the deal.
The crawler-blocking trade-off
Blocking AI crawlers feels protective, but it means AI agents can’t recommend you. Allowing them means your content trains models that may never send you traffic. There’s no clean answer.
But pretending it’s just a technical setting is a mistake. New IETF standards are emerging to give publishers more granular control, but they’re not widely adopted yet.
Measurement hasn’t caught up
Most analytics setups can’t tell the difference between a human visit, a bot crawl, and an AI agent evaluating your site on someone’s behalf. GA4 filters most bot traffic. Server logs show the raw picture but take work to parse. Even then, figuring out whether an AI agent’s visit led to an actual sale is basically impossible right now.
This is where the “Google Zero” framing does real damage.
If you’re only measuring organic sessions from Google, you’re blind to a channel that doesn’t show up in that number. Your traffic could look stable while an AI agent steers $50,000 in annual spend to your competitor because their product schema was more complete.
I don’t think we have good measurement for this yet. Nobody does. But ignoring the problem because Google sessions look fine is like checking your print ad response rate in 2005 and deciding the web wasn’t worth paying attention to.
What we’re doing about it
I don’t have a playbook for this. It’s too new. But I can tell you what we’re doing at our agency.
Audit your structured data like it’s your storefront: Evaluate whether your website’s schema is present and well-formed, along with content structure and technical health. Make sure product, service, FAQ, and organization markup is complete, accurate, and current. This is table stakes.
Answer compound questions: Look at your top landing pages. Do they answer the specific, multi-variable questions an AI agent would ask? Or just the broad keyword query a human would type?
Check your server logs: Look for GPTBot, ClaudeBot, PerplexityBot, and other AI user agents to understand how much of your traffic is already non-human (see the sketch after this list). If you’re on Cloudflare, their bot analytics dashboard makes this easy without parsing raw logs. You’ll probably be surprised either way.
Make a conscious robots.txt decision: Understand the trade-offs, and make it a business decision with your leadership team.
Start tracking AI citations: Tools like Semrush, Scrunch, DataForSEO, and others can show when AI platforms mention your brand. The data is directional, not precise. But it’s better than nothing.
Don’t abandon Google SEO: Adams is right that Google traffic is still massive and still valuable. The agentic web doesn’t replace Google. It adds a new layer. You need both.
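For the server log check above, here’s a minimal sketch, assuming Python, a standard access log at access.log, and the user-agent substrings listed in that bullet:

from collections import Counter

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot"]

# Tally hits per AI crawler by scanning each log line for its user-agent substring
counts = Counter()
with open("access.log") as log:
    for line in log:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1

print(counts)  # e.g., Counter({'GPTBot': 1200, 'ClaudeBot': 800})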
The real question
The “Google Zero” argument pits one extreme against another, even as the actual shift is quieter and more important.
The web is becoming a place where the majority of visitors are machines. Some send traffic back. Most don’t. Some of them make purchasing decisions on behalf of humans. That number is growing fast.
The SEOs who do well here won’t be the ones arguing about whether Google traffic moved 2.5%. They’ll be the ones who figured out how to be useful to both human visitors and the AI agents acting on their behalf.
We’ve spent 25 years optimizing for how humans find things. Now we need to figure out how machines find things for humans.
That’s not Google Zero. We don’t have a name for it yet. But it’s already here.
If you want to go deeper on GEO and agentic SEO, I’m teaching an SMX Master Class on Generative Engine Optimization on April 14. It covers structured data implementation, AI visibility measurement, content optimization for AI systems, and the practical side of everything in this article.
LinkedIn is one of the most powerful platforms for recruiting top-tier talent. It’s also one of the easiest places to waste budget if campaigns aren’t structured correctly.
Many recruitment campaigns fail because they prioritize visibility over intent. More impressions don’t equal better hires. Broad targeting and generic messaging often lead to an influx of unqualified applicants, driving up cost-per-hire and slowing down hiring timelines.
The most effective LinkedIn recruitment strategies focus on one thing: attracting and converting high-intent candidates while filtering out poor-fit applicants before they ever click. Let’s break down exactly how to do that.
Shift your strategy: Optimize for intent vs. reach
The biggest mistake advertisers make on LinkedIn is targeting based solely on job titles, industries, and years of experience.
While this may generate volume, it rarely produces efficiency. Instead, high-performing campaigns are built around intent-based targeting — reaching candidates who are qualified and more likely to consider a new opportunity.
This requires a layered approach:
Core fit: Job titles, skills, and certifications.
Behavioral signals: Open-to-work status, group memberships, and engagement with industry content.
Career friction indicators: Burnout-prone roles, companies experiencing layoffs, and limited growth environments.
By combining these layers, you move beyond “who they are” and begin targeting why they might be ready to make a change — which is where real performance gains happen.
Use ad creative as a filter
Your ad creative isn’t just there to attract attention. It should actively filter your audience. One of the most effective ways to control cost-per-hire is to discourage unqualified candidates from clicking in the first place.
Strong recruitment ads follow a structured approach:
Call out a specific pain point or identity: “Burned out from long shifts in healthcare?”
Clearly define who the role is for: “This role is designed for licensed RNs with 3+ years of experience.”
Highlight meaningful value: Think flexibility, compensation, career growth, or mission.
Set expectations upfront: “Not an entry-level position” or “Requires managing enterprise accounts.”
This combination of attraction and exclusion ensures that the candidates who do click on your ads are far more likely to convert.
Segment campaigns by funnel stage
Warm, active candidates (bottom and mid funnel)
Messaging: Career upgrades, better lifestyle, growth opportunities.
Outcome: Scalable pipeline of qualified candidates.
Cold passive talent (top funnel)
These are long-term potential candidates to start building your pipeline, with the intent to move them to the middle of the funnel and eventually the bottom of the funnel.
Target: Broader audiences and lookalikes.
Messaging: Employer brand, culture, “day in the life.”
Outcome: Reduces future acquisition costs over time.
Control costs through smarter bidding and optimization
LinkedIn’s ad platform can quickly become expensive without proper controls. Start with manual CPC bidding to maintain control, then test automated delivery once performance data is established.
More importantly, optimize for the right metrics. Focus on qualified applications instead of clicks. Track downstream actions, such as interview and hire rates.
Be prepared to make fast decisions. Ads with high click-through rates but low application rates often indicate poor alignment. Ads that generate many applications but few interviews signal weak pre-qualification.
Efficiency comes from eliminating wasted spend earlier rather than later. Cutting waste early conserves budget, minimizes audience overlap, and keeps spend away from the wrong targets.
Improve conversion rates with a two-step application process
A common but costly mistake is sending candidates directly to long, complex application forms. Instead, use a two-step funnel:
Pre-qualification landing page: Role overview and expectations, compensation transparency, and a clear “who this is (and isn’t) for.”
Application: A short form or LinkedIn Easy Apply.
This approach sets expectations, filters candidates, and significantly improves application quality — often reducing cost-per-hire by 30-50%.
Use retargeting to capture missed opportunities
Not every qualified candidate applies on the first interaction. Retargeting allows you to re-engage high-intent users who have already shown interest.
Build audiences from:
Career page visitors.
Job post viewers.
Video viewers (50%+ engagement).
Then serve follow-up messaging such as:
“Still considering a move?”
“Last chance to apply.”
Employee testimonials or success stories.
Retargeting campaigns are often the most cost-efficient part of your entire strategy.
Advanced strategies to increase ROI
Once the fundamentals are in place, there are several advanced tactics that can further improve performance:
Competitor targeting: Target employees at competing companies and position your opportunity as a clear upgrade — whether through compensation, flexibility, or culture.
Skill-based campaign segmentation: Instead of grouping all candidates together, build campaigns around specific skills or certifications. This reduces competition in the ad auction and often lowers cost-per-click.
Selective use of Message Ads: Message ads can be effective for senior or hard-to-fill roles — but only when targeting is highly refined. Otherwise, they can quickly become cost-prohibitive.
Here’s an example of a successful LinkedIn InMail message that recently drove over 70% high-intent applications for an HVAC sales client:
Message body:
Hi [First Name],
This might be a stretch — but your background in HVAC sales caught my attention.
We’re hiring experienced sales reps who are tired of unpredictable commissions and weekend-heavy schedules.
This role is built for reps who:
Have 3+ years in HVAC or home services sales
Are comfortable running in-home consultations
Want a more stable, high-earning structure
What’s different:
No weekend appointments
Pre-qualified, inbound leads (no cold knocking)
Six-figure earning potential with consistency
That said, this isn’t a fit for entry-level reps or those new to sales.
If you’d be open to a quick 10-minute conversation to see if it’s worth exploring, I’m happy to share more.
If not, no worries at all — appreciate you taking a look.
— [Name]
Stating upfront the need for “experienced sales reps” immediately establishes relevance and increases response rates while reducing irrelevant replies.
Focusing on what matters to potential candidates, such as no weekend appointments and compensation structure, speaks to the audience’s needs versus the company’s.
Closing the conversation with the reminder that this isn’t an entry-level position weeds out wasted conversations and reduces cost-per-hire.
The most effective LinkedIn recruitment campaigns rely on better strategy, not bigger budgets.
When you focus on intent-based targeting, pre-qualification within ad creative, funnel segmentation, and conversion optimization, you create a system that attracts the right candidates while minimizing wasted spend.
Ultimately, reducing cost-per-hire is about reaching the right people, at the right time, with the right message.
YouTube used its NewFront presentation to unveil a significant upgrade to its Creator Partnerships platform, adding Gemini-powered creator matching, stronger measurement tools, and new ways to run creator content as paid ads.
Why we care. Influencer marketing has become a core part of many brands’ strategies, but finding the right creators at scale and proving ROI are its two biggest friction points. This update tackles both.
Gemini-powered matching cuts through the noise of three million creators, while the ability to run creator content as paid Shorts and in-stream ads makes performance measurable like any standard campaign, backed by a reported 30% conversion lift.
How it works. The updated platform uses Gemini to recommend creators from a pool of more than three million YouTube Partner Program members, filtered by campaign goals. Advertisers get more control over who they work with and better visibility into how those partnerships perform.
The big new feature. A revamped Creator Partnerships boost lets brands run creator-made content directly as Shorts and in-stream ads — formats YouTube says deliver an average 30% lift in conversions.
The big picture. The announcement builds on BrandConnect, YouTube’s existing creator monetization infrastructure, showing that the platform is doubling down on the creator economy as a growth lever for advertisers — not just a content strategy.
Reddit ranks as the most-cited domain in AI-generated answers, followed by YouTube and LinkedIn, based on a new analysis of 30 million sources by Peec AI, an AI search analytics tool.
The findings. Reddit was the most-cited source across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews. YouTube, LinkedIn, Wikipedia, and Forbes also ranked in the top five. Review platforms like Yelp and G2 appeared often in recommendation queries.
The research showed which domains models rely on:
ChatGPT favored Wikipedia, Reddit, and editorial sites like Forbes.
Google leaned toward platforms like Facebook and Yelp.
Perplexity emphasized Reddit, LinkedIn, and G2 for B2B queries.
Why we care. To win in AI search, you need authority beyond your site. Brands that appear consistently across trusted third-party platforms are more likely to be cited.
Why these sources? AI systems prioritize perceived authority plus authentic user input:
Reddit leads because it captures real user discussions.
YouTube dominates video citations via transcripts and descriptions.
Wikipedia serves as both a live source and a training dataset.
About the data. The analysis covered 30 million sources across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews, measuring domains directly cited in answers to isolate what shapes responses.
A newly published, unverified report claims Google’s Gemini AI is instructed to mirror user tone and validate emotions while grounding its responses in fact and reality.
Why we care. If accurate, AI-generated search responses may vary based on how a query is phrased — not just the information available.
What’s new. The report centers on the inherent tension in the system-level instructions guiding how Gemini responds. The report, published by Elie Berreby, head of SEO and AI search at Adorama, suggested that Gemini is instructed to:
Match the user’s tone, energy, and intent.
Validate emotions before responding.
Deliver answers aligned with the user’s perspective.
What it means. The “overly supportive mandate frequently overrides the factual grounding,” Berreby wrote. So instead of acting as a neutral aggregator, AI answers may:
Reinforce negative framing (“Why is X bad?”).
Reinforce positive framing (“Why is X great?”).
If public perception is negative, AI may amplify it. As the report suggests:
AI reflects existing sentiment signals.
It doesn’t “balance” them the way blue links often do.
Query framing. The emotional framing of a query affects:
Which sources get cited.
How summaries are written.
The overall tone of the answer.
Google’s AI Overviews already show tone shifts, often aligning with query intent beyond keywords. This report offers a possible explanation.
Unverified. Google hasn’t confirmed the leak. As Berreby noted in his report: “I’ve decided to share only a fraction of the leaked internal system information with the general public. I’m not sharing any sensitive data. This isn’t a zero-day exploit. This is a tiny leak.”
Google is giving retailers more firepower to promote loyalty program benefits directly within product listings — expanding the program internationally and into its newest AI-powered shopping experiences.
What’s new. Merchants can now highlight member pricing and exclusive shipping options directly on listings. Loyalty annotations have also expanded to local inventory ads and regional Shopping ads — making it easier to promote in-store or geography-specific perks.
Why we care. The more you can personalize an offer for a shopper, the better. Embedding member perks into the moment of purchase discovery — rather than requiring a separate loyalty app or webpage — makes programs more visible and more likely to drive sign-ups.
By the numbers. According to Google, some retailers have reported up to a 20% lift in click-through rates when showing tailored offers to existing loyalty members.
The big picture. Loyalty benefits will now appear on Google’s AI-first surfaces, including AI Mode and Gemini, putting member offers in front of shoppers at an entirely new layer of the search experience.
Where it’s available. The expansion covers 14 countries — Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Netherlands, South Korea, Spain, the UK, and the US.
How to get started. Merchants activate the loyalty add-on in Merchant Center, configure member tiers, and set up pricing and shipping attributes. Connecting Customer Match lists in Google Ads is required to display strikethrough pricing and shipping perks to known members.
Don’t miss. US merchants can apply to join a pilot that uses Customer Match as a relationship data source for free listings — potentially expanding loyalty reach without additional ad spend.
Googlebot. Google has more than one crawler; it operates many crawlers for many purposes. Referencing Googlebot as a singular crawler is no longer entirely accurate. Google documents many of its crawlers and user agents in its official crawler documentation.
Limits. Recently, Google spoke about its crawling limits. Now, Gary Illyes dug into it more. He said:
Googlebot currently fetches up to 2MB for any individual URL (excluding PDFs).
This means it crawls only the first 2MB of a resource, including the HTTP header.
For PDF files, the limit is 64MB.
Image and video crawlers typically have a wide range of threshold values, and it largely depends on the product that they’re fetching for.
For any other crawlers that don’t specify a limit, the default is 15MB regardless of content type.
Then what happens when Google crawls?
Partial fetching: If your HTML file is larger than 2MB, Googlebot doesn’t reject the page. Instead, it stops the fetch exactly at the 2MB cutoff. Note that the limit includes HTTP request headers.
Processing the cutoff: That downloaded portion (the first 2MB of bytes) is passed along to Google’s indexing systems and the Web Rendering Service (WRS) as if it were the complete file.
The unseen bytes: Any bytes that exist after that 2MB threshold are entirely ignored. They aren’t fetched, they aren’t rendered, and they aren’t indexed.
Bringing in resources: Every referenced resource in the HTML (excluding media, fonts, and a few exotic files) will be fetched by WRS using Googlebot, just like the parent HTML. Each resource has its own separate per-URL byte counter and doesn’t count toward the size of the parent page.
How Google renders these bytes. When the crawler accesses these bytes, it then passes it over to WRS, the web rendering service. “The WRS processes JavaScript and executes client-side code similar to a modern browser to understand the final visual and textual state of the page. Rendering pulls in and executes JavaScript and CSS files, and processes XHR requests to better understand the page’s textual content and structure (it doesn’t request images or videos). For each requested resource, the 2MB limit also applies,” Google explained.
Best practices. Google listed these best practices:
Keep your HTML lean: Move heavy CSS and JavaScript to external files. While the initial HTML document is capped at 2MB, external scripts and stylesheets are fetched separately (subject to their own limits).
Order matters: Place your most critical elements — like meta tags, <title> elements, <link> elements, canonicals, and essential structured data — higher up in the HTML document. This reduces the risk that they fall below the cutoff.
Monitor your server logs: Keep an eye on your server response times. If your server is struggling to serve bytes, Google’s fetchers will automatically back off to avoid overloading your infrastructure, which will drop your crawl frequency.
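To see how close a page’s HTML gets to that 2MB cap, a rough sketch in Python (example.com is a placeholder, and the byte count only approximates what Googlebot would fetch, since the limit also includes headers):

import urllib.request

LIMIT = 2 * 1024 * 1024  # Googlebot's 2MB fetch cap for HTML

# Download the page and measure the size of the response body
with urllib.request.urlopen("https://www.example.com/") as resp:
    size = len(resp.read())

print(size, "bytes:", "over" if size > LIMIT else "under", "the 2MB cap")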
Podcast. Google also covered the topic in a podcast episode.
SEO hiring is shifting toward senior, strategy-led roles as AI reshapes search and expands the scope of the job. A new Semrush analysis of 3,900 listings shows companies now prioritize leadership, experimentation, and cross-channel visibility over pure technical execution.
Why we care. SEO hiring, career paths, and required skills are changing. Entry roles focus on execution, while most demand sits at the leadership level — owning strategy across search, AI assistants, and paid channels, with clear revenue impact.
What changed. Senior roles dominated, accounting for 59% of listings. Mid-level roles, such as specialists (15%) and managers (10%), trailed far behind.
Companies are shifting budget toward strategy as AI tools absorb more execution work.
The skills shift. In-demand capabilities extend beyond traditional SEO into coordination, testing, and decision-making:
Project management appeared in more than 30% of listings.
Communication led non-senior roles at 39.4%.
Experimentation appeared in 23.9% of senior roles compared with 14% of other roles.
Technical SEO appeared in about 6% of listings.
Tools and channels. The SEO tech stack now spans analytics, paid media, and data.
Google Analytics appeared in up to 47.7% of listings.
Google Ads appeared in 29% of listings.
SQL demand grew at the senior level.
AI tools like ChatGPT were increasingly listed.
AI expectations: AI literacy is moving from optional to expected:
31% of senior roles mentioned AI.
Nearly 10% referenced LLM familiarity.
Concepts like AI search and AEO appeared more often.
Pay and positioning: SEO is increasingly treated as a business function.
The median salary for senior roles reached $130,000, compared to $71,630 for others. Some listings were much higher.
Degree preferences skewed toward business and marketing.
Remote work is now standard. More than 40% of listings offered remote options, with little difference by seniority.
About the data: Semrush analyzed 3,900 U.S.-based SEO job listings from Indeed as of Nov. 25. Roles were deduplicated, segmented by seniority, and analyzed using semantic keyword extraction.
Technical SEO extends beyond indexing to how content is discovered and used, especially as AI systems generate answers instead of listing pages.
For generative engine optimization (GEO), the underlying tools and frameworks remain largely the same, but how you implement them determines whether your content gets surfaced — or overlooked.
That means focusing on how AI agents access your site, how content is structured for extraction, and how reliably it can be interpreted and reused in generated responses.
Agentic access control: Managing the bot frontier
From a technical standpoint, robots.txt is a tool already in your SEO arsenal. You need to reference the right crawlers in your file to give specific bots their own access rules.
For example, you may want a training model like GPTBot to have access to your /public/ folder, but not your /private/ folder, and would need to do something like this:
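A sketch of what that could look like in robots.txt (the folder paths are placeholders):

# Illustrative example: let GPTBot into public content, keep it out of private content
User-agent: GPTBot
Allow: /public/
Disallow: /private/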
You’ll also need to decide between model training and real-time search and citations. You might consider disallowing GPTBot and allowing OAI-SearchBot.
Within your robots.txt, you also need to consider Perplexity and Claude standards, which are tied to these bots:
Claude
ClaudeBot (Training)
Claude-User (Retrieval/Search)
Claude-SearchBot
Perplexity
PerplexityBot (Crawler)
Perplexity-User (Searcher)
Another new protocol to add to your agentic access toolkit is llms.txt, a markdown-based standard that provides a structured way for AI agents to access and understand your content.
While it’s not integrated into every agent’s algorithm or design, it’s a protocol worth paying attention to. Perplexity, for example, publishes its own llms.txt you can use as a reference. You’ll come across two flavors of llms.txt:
llms.txt: A concise map of links.
llms-full.txt: An aggregate of text content, so agents don’t have to crawl your entire site.
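A minimal llms.txt sketch in the proposed markdown format (the brand name, description, and URLs are placeholders):

# Example Co
> One-line summary of what the site covers and who it serves.

## Docs
- [Getting started](https://www.example.com/docs/start): Setup and onboarding
- [Pricing](https://www.example.com/pricing): Current plans and tiers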
Even if Google and other AI tools aren’t reading llms.txt yet, it’s worth preparing for future use, though John Mueller has publicly expressed skepticism about its value.
Extractability: Making content ‘fragment-ready’
GEO focuses more on chunks of information, or fragments, to provide precise answers. Bloat undermines extractability, and AI retrieval struggles with:
JavaScript execution.
Keyword-optimized content rather than entity-optimized content.
Weak content structures that fail to provide clear, concise answers.
You want your core content visible to users, bots, and agents. Achieving this goal is easier when you use semantic HTML, such as:
<article>
<section>
<aside>
The goal? Separate core facts from boilerplate content so your site shows up in answer blocks. Keep your context window lean so AI agents can read your pages without truncation. Creating content fragments will feed both search engines and agentic bots.
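A minimal sketch of that separation (the content is placeholder text):

<article>
  <section>
    <h2>What is GEO?</h2>
    <p>The core fact or answer you want agents to extract.</p>
  </section>
  <aside>
    <p>Related links, promos, and other boilerplate agents can skip.</p>
  </aside>
</article>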
Structured data: The knowledge graph connective tissue
Schema.org has been a go-to for rich snippets, but it’s also evolving into a way to connect your entities online. What do I mean by this? In 2026, you can (and should) consider making these schemas a priority:
Organization and sameAs: A way to link your site to verified entities about you, such as Wikipedia, LinkedIn, or Crunchbase.
FAQPage and HowTo: Sections of low-hanging fruit in your content, such as your FAQs or how-to content.
SignificantLink: A directive that tells agents, “Hey, this is an authoritative pillar of information.”
Connecting information and data for agents makes it easier for your site or business to be presented on these platforms. Once you have the basics down, you can then focus on performance and freshness.
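As an illustration, Organization markup with sameAs links could look like this (the name and URLs are placeholders):

{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.linkedin.com/company/example-co",
    "https://www.crunchbase.com/organization/example-co"
  ]
}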
Performance and freshness
AI is constantly scouring the internet to maintain a fresh dataset. If the information goes stale, the platform becomes less valuable to users, which is why retrieval-augmented generation (RAG) must become a focal point for you.
RAG allows AI models, like ChatGPT, to inject external context into a response through a prompt at runtime. You want your site to be part of an AI’s live search, which means following the recommendations from the previous sections. Additionally, focus on factors such as page speed, server response time, and errors.
In addition to RAG, add “last updated” signals to your content. <time datetime=""> is one way to achieve this, along with schema headers, which are critical components for:
News queries.
Technical queries.
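For example, a visible last-updated stamp might be marked up like this (the date is a placeholder):

<time datetime="2026-01-15">Last updated: January 15, 2026</time>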
Audit your GEO efforts
With everything in place, audits are how you benchmark success and see how your efforts translate into real results for clients. A few audit areas to focus on are:
Citation share: Rankings still exist, but it’s time to focus on mentions as well. You can do this manually, but for larger sites you’ll want to use tools like Semrush.
Log file analysis: Are agents hitting your site? If so, which agents are where? You can do this through log analysis and even use AI to help parse all of the data for you.
The zero-click referral: Custom tracking parameters can help you identify traffic origins and “read more” links, but they only paint part of the picture. You also need to be aware that agents may append your parameters, which can impact your true referral figures.
Measuring success shows you the validity of your efforts and ensures you have KPIs you can share with clients or management.
Scaling GEO into 2027
Preparing your GEO strategy for 2027 requires changes in how you approach technical SEO, but it still builds on your current efforts. You’ll want to automate as much as you can, especially in a world with millions of custom GPTs.
Manual optimization? Ditch it for something that scales without requiring endless man-hours.
Technical SEO was long the core of ranking a site and ensuring you provided search bots and crawlers with an asset that was easy to crawl and index.
Now? It’s shifting.
Your site must become the de facto source of truth for the world’s models, and this is only possible by using the tools at your disposal.
Start with your robots.txt and work your way up to structure, fragmented data, and extractability. Audit your success over time and keep tweaking your efforts until you see positive results. Then, scale with automation.
In 1998, submitting a website to search engines was manual, methodical, and genuinely tedious. I remember 17 of them: AltaVista, Yahoo Directory, Excite, Infoseek, Lycos, WebCrawler, HotBot, Northern Light, Ask Jeeves, DMOZ, Snap, LookSmart, GoTo.com, AllTheWeb, Inktomi, iWon, and About.com.
Each had its own form, process, and wait time, and its own quiet judgment about whether your URL was worth including. We submitted manually, 18,000 pages in all. Yawn.
Google was barely a year old when we were doing this. But they were already building the thing that would make submission irrelevant.
PageRank meant Google followed links, and a site that other sites linked to would be found whether it submitted or not. The other 17 engines waited to be told about content. Google went looking, and within a few years, they got so good at finding content that manual submission became the exception rather than the norm.
You published, you waited, the bots arrived. For 20 years, that was the deal, and SEO optimized for a crawler that would show up sooner or later.
The irony is that we’re now shifting back. Not because Google got worse at finding things, but because the game has expanded in ways that pull alone can’t cover, and the revenue flowing through assistive and agentic channels doesn’t wait for a bot.
Pull isn’t the only entry mode
The pull model (bot discovers, selects, and fetches) remains the dominant entry mode for the web index. What’s changed is that pull is now one of five entry modes into the AI engine pipeline (the 10-gate sequence through which content passes before any AI system can recommend it), not the only one.
The pipeline has expanded: new modes have been added alongside the existing model rather than replacing it, so the single entry mode that was the norm for 20 years is now one of five.
What follows is my taxonomy of those five modes, with an explanation of the advantages each one gives you at the two gates that determine whether content can compete: indexing and annotation.
The five entry modes differ by gates skipped, signal preserved, and revenue reached
Mode 1: Pull model
Traditional crawl-based discovery where all 10 pipeline gates apply and the bot decides everything. You start at gate zero and have no structural advantage by the time your content gets to annotation (which is where that content starts to contribute to your AI assistive agent/engine strategy). You’re entirely dependent on the bot’s schedule and the quality of what it finds when it arrives.
Mode 2: Push discovery
The brand proactively notifies the system that content exists or has changed, through IndexNow or manual submission.
Fabrice Canel built IndexNow at Bing for exactly this purpose: “IndexNow is all about knowing ‘now.’” It skips discovery, improves the chances of selection, and gets you straight to crawl. The content still needs to be crawled, rendered, and indexed, because IndexNow is a hint, not a guarantee.
You win speed and priority queue position, which means your content is eligible for recommendation days or weeks earlier than a competitor who waited for the bot. In fast-moving categories, that window is the difference between being in the answer and being absent from it.
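Mechanically, an IndexNow submission is a single HTTP request. Here’s a minimal sketch in Python (the host, key, and URLs are placeholders; per the protocol, the key must match a key file you actually host):

```python
import json
import urllib.request

# Submit changed URLs via IndexNow. All values below are illustrative.
payload = {
    "host": "www.example.com",
    "key": "a1b2c3d4e5f6",  # hypothetical key
    "keyLocation": "https://www.example.com/a1b2c3d4e5f6.txt",
    "urlList": [
        "https://www.example.com/pricing",
        "https://www.example.com/blog/updated-guide",
    ],
}
req = urllib.request.Request(
    "https://api.indexnow.org/indexnow",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:
    # A 200-class response means the hint was received, not a crawl guarantee.
    print(resp.status)
```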
Note: WebMCP helps with Modes 1 and 2 by making crawling, rendering, and indexing more reliable, retaining signal and confidence that would otherwise be lost through those three gates.
Because confidence is multiplicative across the pipeline, a higher passage rate at crawling, rendering, and indexing means your content arrives at annotation with significantly more surviving signal than a standard crawl delivers. The structural advantage compounds from there.
Mode 3: Push data
Structured data goes directly into the system’s index, bypassing the entire bot phase. Google Merchant Center pushes product data with GTINs, prices, availability, and structured attributes. OpenAI’s Product Feed Specification powers ChatGPT Shopping, with support for 15-minute refresh cycles.
Discovery, selection, crawling, and rendering don’t exist for this content, and the “translation” at the indexing phase is seamless: it arrives at indexing already in machine-readable format, four gates skipped and one improved. That means the annotation advantage is significant.
This is where the money is for product-led businesses: crawled content arrives as unstructured prose the system has to interpret, while feed content arrives pre-labeled with explicit machine-readable entity type, category, and attributes. By structuring the data and injecting it directly into indexing, you solve a huge chunk of the classification problem at annotation, which, as you’ll see in the next article, is the single most important step in the 10-gate sequence.
As the confidence pipeline shows, each gate that passes at higher confidence compounds multiplicatively, so this is where you can get the “3x surviving-signal advantage” I outline in “The five infrastructure gates behind crawl, render, and index.”
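To make “pre-labeled” concrete, here’s the shape of a typical feed record (the fields follow the common GTIN/price/availability merchant-feed pattern; this is an illustrative sketch, not any platform’s exact specification):

```json
{
  "id": "SKU-4821",
  "title": "Trail Runner 3 Waterproof Shoe",
  "gtin": "00012345678905",
  "price": "129.99 USD",
  "availability": "in_stock",
  "condition": "new",
  "product_category": "Apparel & Accessories > Shoes",
  "link": "https://www.example.com/products/trail-runner-3"
}
```

Every field arrives already classified, so annotation has far less to infer than it does from a crawled product page.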
Mode 4: Push via MCP
Model Context Protocol (MCP) is a standard that lets AI agents query a brand’s live systems on demand, retrieving data during response generation.
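For illustration, here’s a minimal sketch of what exposing live inventory over MCP can look like, using the FastMCP helper from the official MCP Python SDK (the server name, tool, and catalog data are hypothetical):

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical brand server exposing live commerce data to agents.
mcp = FastMCP("example-store")

@mcp.tool()
def check_inventory(sku: str) -> dict:
    """Return live price and availability for a SKU (stub data for illustration)."""
    catalog = {"SKU-4821": {"price_usd": 129.99, "in_stock": True}}
    return catalog.get(sku, {"error": "unknown SKU"})

if __name__ == "__main__":
    # Serves the tool so an agent can call it during response generation.
    mcp.run()
```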
In February 2026, four infrastructure companies shipped agent commerce systems simultaneously. Stripe, Coinbase, Cloudflare, and OpenAI collectively wired a real-time transactional layer into the agent pipeline, live with Etsy and 1 million Shopify merchants.
Agentic commerce is key. MCP skips the entire DSCRI pipeline (discovery, selection, crawling, rendering, indexing) and then operates at three levels, each entering the pipeline at a different gate:
As a data source at recruitment.
As a grounding source at grounding.
As an action capability at won, where the transaction completes without a human in the loop.
The revenue consequences are already real: brands without MCP-ready data are losing transactions to those with it, because the agent can’t access their inventory, pricing, or availability in real time when it needs to make a decision. This is where you see multi-hundred percent gains in the surviving signal.
MCP is already both push and pull, depending on context.
There’s a dimension to Mode 4 that most people don’t think about much: the agent querying your MCP connection isn’t always a Big Tech recommendation system. It’s increasingly the customer’s own AI, acting as their purchasing agent, evaluating your inventory and pricing in real time, with their credit card behind the query, completing the transaction without them opening a browser.
When your customer’s agent (let’s say OpenClaw-driven) comes knocking, agent-readable is the entry requirement. Agent-writable — the capacity for an agent to act, not just retrieve — is where you’ll make the conversion. The brands without writable infrastructure will be losing transactions to competitors whose systems answered the query and handled the action.
Mode 5: Ambient
This is structurally different from the other four. Where Modes 1 through 4 change how content enters the pipeline, ambient research changes what triggers execution of the final gates.
The AI proactively pushes a recommendation into the user’s workflow without any query: Gemini suggesting a consultant in Google Sheets, a meeting summary in Microsoft Teams surfacing an expert, and autocomplete recommending your brand.
Ambient is the reward for reaching recruitment with accumulated confidence high enough that the system fires the execution gates on the user’s behalf, without being asked. You can’t optimize for ambient directly. You earn it — and the brands that earn it capture the 95% of the market that isn’t actively searching.
Several people have told me my obsession with ambient is misplaced, theoretical, and not a real thing in 2026. I’ve experienced it myself already, but the clearest demonstration came at an Entrepreneurs’ Organization event where I was co-presenting with a French Microsoft AI specialist.
He demonstrated on Teams an unprompted push recommendation: a provider identified as the best solution to a problem his team had been discussing in the meeting. Nobody explicitly asked. Copilot listened, understood the problem, evaluated options, and push-recommended a supplier right after the meeting. Ambient isn’t theoretical. It’s running on Teams, Gmail, and other tools we all use daily, right now.
Five entry modes, each with a different starting point, and they all converge at annotation. Annotation is the key to the entire pipeline. The algorithms in the algorithmic trinity (LLM + knowledge graph + search) don’t use the content itself to recruit; they use the annotations on your chunked content, and nothing reaches a user without being recruited.
Why is that important? Because accurate, complete, and confident annotation drives recruitment, and recruitment is competitive regardless of how content entered. A product feed arriving at indexing with zero lost signal competes at recruitment with a huge advantage over every crawled page, every other feed, and every MCP-connected competitor that entered by a different door.
You control more of this competition than most practitioners assume. Skipping gates gives you a structural advantage in surviving signal, but it doesn’t exempt you from the competition itself.
That distinction matters here because annotation sits at the boundary. It’s the last absolute gate: the system classifies your content based on your signals, independently of what any competitor has done. Nobody else’s data changes how your entity is annotated. That makes annotation the last moment in the pipeline where you have the field entirely to yourself.
From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in a winner-takes-all race. Get annotation right, and you have a significant head start. Get it wrong, and no matter how much work you do to improve recruitment, grounding, or display, it will not catch up, because the misclassification and loss of confidence compound through every gate downstream.
Nobody in the industry was talking about this in 2020. I started making the point then, after a conversation on the record with Canel, and it still isn’t getting the attention it deserves.
Annotation is your last chance before competition arrives.
Search is one of three ways users encounter brands — and it’s the least valuable
The research modes on the user’s side have expanded, too. The SEO industry has traditionally focused on just one: implicit research, where the user types a query. There was always one more, explicit brand queries, and now we have a third. Each research mode is defined by who initiates and what the user already knows.
Explicit research is the deliberate query, where the user asks for a specific brand, person, or product, and the system returns a full entity response (the AI résumé that replaces the brand SERP).
This is the mode that requires the least algorithmic confidence of the three, because the user has already signaled very explicit intent: you’re only reaching people who already know your name. Bottom of the funnel, decision. Algorithmic confidence still matters here, to remove hedging (“they say on their website,” “they claim to be…”) and replace it with absolute enthusiasm (“world leader in…,” “renowned for…”).
Implicit research removes the explicit query. The AI introduces the brand as a recommendation (or advocates for you) within a broader answer, and the user discovers the brand because the system considers it relevant to the conversation, staking its own credibility on the inclusion. Top- and mid-funnel, awareness and consideration. Algorithmic confidence is vital here to beat the competition and get onto the list when a user asks “best X in Y market” or be cited when a user asks “explain topic X.”
Ambient research requires the highest confidence of all. The system pushes the brand into the user’s workflow with no query and no explicit request; the algorithm makes a unilateral decision that this user, in this context, at this moment, needs to see your brand. That requires very significant levels of algorithmic confidence.
The format is small: a sentence, a credential, a contextual mention. The audience reached is the largest: people not yet in-market, not yet actively looking, who encounter your brand because the AI decided they should. And the kicker is that your brand gets the sale before the competition even starts.
For me, this is the structural insight that inverts how most brands prioritize, and where the real money is hiding. They optimize for implicit research, where competition is highest, the target you need to hit is widest, and the work is hardest.
Most SEOs underestimate explicit research (where profitability is highest) and completely ignore ambient, which reaches the 95% who aren’t yet looking and requires the deepest entity foundation to trigger. I call this the confidence inversion, first documented in May 2025: the smallest format requires the highest investment, and it reaches the most valuable audience.
The entity home website is the single source that feeds every mode
In 2019, AI engineers spent 80% to 90% of their time collecting, cleaning, and labeling data, and the remaining 10% to 20% on the work they actually wanted to do. They wryly called themselves data janitors. Today, Gartner estimates 60% of enterprises are still effectively stuck in the 2019 model, manually scrubbing data, while the companies that got organized early compound their advantage.
The same split is happening with brand content and entity management, for the same reason. Every push mode described in this article draws on data: product attributes for merchant feeds, structured entity data for MCP connections, and corroborated identity claims for ambient triggering.
If that data lives in scattered, inconsistent, contradictory sources, every push attempt is expensive to implement, structurally weak on arrival, and liable to contradict the previous one. Inconsistency is the annotation killer: the system encounters two different versions of who you are from two different push moments, and confidence drops accordingly.
The framing gap, where your proof exists but the algorithm can’t connect it to a coherent entity model, is a direct consequence of disorganized data, and it costs you in recommendation frequency every day it persists.
The entity home website — the full site structured as an education hub for algorithms, bots, and humans simultaneously, built around entity pillar pages that declare specific identity facets — becomes the single source that feeds every mode simultaneously.
Pull, push discovery, push data, MCP, and ambient all draw from the same clean, consistent, non-contradictory data. You build the structure once, maintain it in one place, and you’re ready for push and pull modes today, and any to come that don’t yet exist.
AI handles 80%, humans protect the other 20%
That foundation is only as strong as the corrections made to it. How this works in practice depends on where you’re starting from. For enterprises, the website typically mirrors an internal data structure that already exists:
Product catalogs.
CRM records.
Service definitions.
Organizational hierarchies.
The website becomes the public representation of structured data that lives inside the business, and the primary challenge is integration and maintenance.
For smaller businesses and personal brands, the direction often runs the other way: building the entity home website well is what forces you to figure out how your business is actually structured, what you genuinely offer, who you serve, and how everything connects. The website imposes discipline.
We’re doing exactly this: centralizing everything as the structured data representation of the entire brand (personal or corporate). Getting the foundation right (who we are, what we offer, who we serve) is generally the heaviest lift. Building N-E-E-A-T-T credibility on top of that foundation is now comparatively straightforward, and every new push mode draws from the same organized source.
Here’s where using AI fits into this work. It can handle roughly 80% of the organization: extracting structure from existing content, proposing taxonomies, drafting entity descriptions, mapping relationships, and flagging gaps. What it does poorly, and what humans need to correct, are the three failure modes that propagate silently through every downstream gate:
Factual errors, where something is simply wrong.
Inaccuracies, where something is approximately right but imprecise enough to mislead.
Confusions, where two different concepts are conflated, or an entity is ambiguous between interpretations.
Confusion is the sneakiest because it looks like data, passes automated quality checks, enters the pipeline with apparent confidence, and then causes annotation to misclassify in ways that compound through every gate downstream.
Alongside the errors sit the missed opportunities, which are equally costly and considerably less obvious:
Lost N-E-E-A-T-T credibility opportunities, where the systems underestimate or undervalue the entity because credibility signals exist but aren’t structured, corroborated, or framed in a way the algorithmic trinity can read. The authority exists, but the machine doesn’t understand it.
Annotation misclassification, where the entity is indexed coherently but placed in the wrong category, meaning it competes for the wrong queries entirely and never appears in the contexts where it should win. Correctly classified competitors take the recommendation: your brand is present in the pipeline, but absent from the competition that matters to your business.
Untriggered deliverability, where understandability is solid and credibility has crossed the trust threshold, but topical authority signals haven’t accumulated densely enough to push the entity across the deliverability threshold for proactive recommendation. The machine knows who you are and trusts you. It just doesn’t advocate for you yet.
The human doing the correction and optimization work is the competitive advantage. The errors are surreptitious and the opportunities non-obvious; finding both, fixing the former, and acting on the latter is the work that compounds.
Organize once, feed every mode that exists and every mode to come
The push layer is expanding. The brands that organize their data now — not perfectly, but consistently, and with a system for maintaining it — are building the infrastructure from which every current and future entry mode draws.
The brands still publishing and waiting for the bot (Mode 1) are optimizing for the least advantageous mode in a five-mode landscape, and that disadvantage gap widens with every passing cycle.
This is the seventh piece in my AI authority series.
OpenAI now lets ChatGPT users share their device location, so ChatGPT knows more precisely where they are and can serve better answers and results based on that location.
The feature is called location sharing. “Sharing your device location is completely optional and off until you choose to enable it,” OpenAI wrote. “You can update device location sharing in Settings > Data Controls at any time.”
What it does. If ChatGPT knows your location, it can return better local results. OpenAI wrote:
“Precise location means ChatGPT can use your device’s specific location, such as an exact address, to provide more tailored results.”
“For example, if you ask “what are the best coffee shops near me?”, ChatGPT can use your precise location to provide more relevant nearby results. On mobile devices, you can choose to toggle off precise location separately while keeping approximate device location sharing on for additional control.”
Privacy. OpenAI said “ChatGPT deletes precise location data after it’s used to provide a more relevant response.” Here is how ChatGPT uses that information:
“If ChatGPT’s response includes information related to your specific location, such as the names of nearby restaurants or maps, that information becomes part of your conversation like any other response and will remain in your chat history unless you delete the conversation.”
Does it work. Maybe not as well as you’d expect. Here is an example from Glenn Gabe:
I shared about the "Near Me ChatGPT Update" the other day and just let ChatGPT use my device location. This is supposed to enhance results for local queries. I just asked for the "best steakhouses near me" and several of the restaurants are ~45 minutes away. Both restaurants… pic.twitter.com/gRkMeuzMQt
Why we care. Making ChatGPT’s local results better is a big deal in local search and local SEO. Knowing the user’s location, and better yet their precise location, should help ChatGPT respond with more useful local results.
Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.
We’re kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.
The session introduces iPullRank’s Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You’ll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it’s retrieved, surfaced, and cited.
It also emphasizes that GEO success isn’t universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.