Google Ads quietly added an auto-apply setting to its experiments feature — and it’s turned on by default, meaning winning experiment variants can be automatically pushed live without manual review.
How it works. Advertisers can choose between two modes — directional results (the default) or statistical significance at 80%, 85%, or 95% confidence levels. There is one built-in safeguard: if a chosen success metric performs significantly worse in the test arm, the change won’t be automatically applied.
Why we care. Experiments are one of the most powerful tools in a Google Ads account. Automating the apply step could speed up testing cycles, but it also removes a critical checkpoint where advertisers catch unintended consequences before they affect live campaigns.
The catch. Experiments only allow two success metrics. That means a third metric you care about — one you didn’t or couldn’t select — could quietly be declining in the background, and the auto-apply setting would never catch it. The guardrails protect what you told Google to watch, not everything that matters.
The bottom line. The auto-apply feature is a reasonable shortcut for straightforward tests, but for anything consequential, manual review is still worth the extra step. Run the experiment, let it reach significance, then dig into the full data before pulling the trigger yourself.
First seen. This update was spotted by Google Ads specialist Bob Meijer, who shared it on LinkedIn.
Bing appears to be testing a significantly expanded sponsored products section in its shopping search results, featuring a double-rowed carousel that takes up considerably more real estate than its current format.
What was spotted: The test was flagged by digital marketer Sachin Patel, who noticed the expanded layout while searching for cushions on Bing. The format pairs a large double-rowed sponsored carousel with organic cards from individual websites beneath it.
Why we care. If this format rolls out broadly, it means significantly more screen space dedicated to sponsored products — which typically translates to higher visibility and more clicks for retailers running Microsoft Shopping campaigns. The double-rowed carousel format is also a more visually competitive layout, putting Bing’s shopping ads closer in prominence to what Google Shopping already offers.
The catch: The test appears to be limited — not all users are seeing it. Search industry veteran Mordy Oberstein checked his own results and got a noticeably more compact layout, suggesting Bing is still in early experimentation mode.
The bottom line: Bing quietly runs a lot of SERP experiments that never make it to full rollout, so this one is worth watching but not banking on. Retailers running Microsoft Shopping campaigns should keep an eye out for any uptick in impressions if the format expands.
First spotted. This test was spotted by Sachin Patel, who shared a screenshot of the test on X.
SEO tools were the most replaced martech application in 2025 — but not for the reason you might expect.
According to the 2025 MarTech Replacement Survey, SEO platforms topped the list of replaced tools for the first time, overtaking categories like marketing automation platforms (MAPs), which had led for the past five years.
At first glance, that might suggest instability in SEO. After all, the discipline is being reshaped by LLMs, AI-generated answers, and the rise of zero-click search experiences — all of which challenge traditional keyword tracking and ranking-based workflows.
But the data tells a more nuanced story.
SEO tools: most replaced, but stabilizing
Even though SEO tools were the most replaced category in 2025, they were replaced at a slower rate than in prior years.
In other words, they’re now the most commonly replaced — but also more stable than before.
That shift suggests a maturing category. Rather than widespread churn, marketers appear to be consolidating, upgrading, or refining their SEO stacks as search evolves.
Meanwhile, several other major martech categories saw sharper year-over-year declines in replacements:
CRM replacements fell more than 12% from 2024 to 2025, reaching their lowest level in the survey’s history.
MAPs, email platforms, and CMS tools also declined compared to 2024.
Why SEO tools are being replaced
So if SEO tools aren’t being swapped out due to instability, what’s driving the changes?
The survey points to three primary factors:
1. AI capabilities
For the first time, the survey asked about AI’s role in replacement decisions — and the impact was significant.
37.1% cited AI capabilities as an important factor.
33.9% said they wanted AI capabilities when replacing a tool.
This reflects a broader shift in SEO tooling, with platforms rapidly integrating AI for:
Content generation and optimization.
SERP analysis and intent modeling.
Workflow automation.
In many cases, replacing your SEO tool isn’t about abandoning SEO — it’s about upgrading to AI-native capabilities.
2. Cost pressures
Cost has become a major driver of martech replacement decisions, including SEO tools:
43.8% of marketers cited cost reduction as a reason for replacing an application in 2025.
That’s up sharply from 23% in 2024 and 22% in 2023.
This suggests growing pressure to optimize and rationalize your SEO tech stack, especially as you evaluate overlapping functionality across tools.
3. Changing needs in a shifting search landscape
As search behavior changes, so do expectations for SEO platforms.
Traditional rank tracking and keyword monitoring are no longer sufficient on their own. Teams are increasingly looking for tools that can:
Surface insights across AI-driven SERPs
Track visibility beyond clicks
Integrate with broader marketing and data systems
That evolution is likely contributing to replacement activity — even as overall stability increases.
AI is reviving custom-built SEO tools
One of the more notable trends in the 2025 survey is the resurgence of homegrown solutions, including for SEO workflows.
Replacing commercial martech tools with homegrown applications accounted for:
8.1% of replacements in 2025
Up from 3.4% in 2024 and 5% in 2023
This marks a meaningful shift after years of near-total reliance on commercial platforms.
“AI-assisted coding is changing the calculus of build vs. buy,” said martech analyst Scott Brinker. “It’s easier and faster to build than ever before. Companies should still buy applications where they have no comparative advantage. But in cases where they can tailor capabilities to differentiate their operations or customer experience, custom-built software is an increasingly attractive option.”
For SEO teams, this could mean more organizations building:
Custom data pipelines.
Proprietary SERP tracking systems.
AI-driven analysis tools tailored to their specific needs.
Other martech categories show even greater stability
While SEO tools led in total replacements, the broader martech landscape is becoming more stable.
Several major categories saw declining replacement rates in 2025, including:
CRM platforms (down more than 12% year over year)
Marketing automation platforms
Email distribution tools
Content management systems
This suggests that many organizations are settling into core systems while selectively updating areas — like SEO — that are changing faster.
Methodology
Invitations to take the 2025 MarTech Replacement Survey were distributed via email, website, and social media in Q4 2025.
A total of 207 marketers responded. Findings are based on the 154 respondents (60%) who said they had replaced a martech application in the previous 12 months.
AI-powered ad bidding systems are highly sophisticated, but conversion tracking hasn’t kept pace. Ad platforms encourage advertisers to track more actions, while many experts argue for tracking only final outcomes.
Both are partly true. Neither is universally correct.
In practice, both over- and under-signaling can hurt PPC performance. Too many loosely defined micro-conversions introduce noise. Bidding shifts toward easy, low-value actions, inflating reported performance while eroding real results. Too few signals leave the system without enough data to learn.
This dynamic is most visible in Performance Max and Search plus PMax setups, where the system optimizes toward whatever signals it’s given — regardless of whether they reflect real business value.
Here’s what happens when micro-conversions outnumber real conversions, why bidding systems behave this way, and how to build a conversion framework that aligns signal volume with business impact.
The myth of the ‘data-hungry’ PPC algorithm
The idea that algorithms need as much data as possible has been repeated so often that it’s become an assumption. Platform documentation, automated recommendations, and many PPC blog posts reinforce the same message: more signals equal better learning.
Bidding systems require a minimum level of signal density to function, but they don’t benefit from indiscriminate micro-conversion signals. More data isn’t always better data.
Adding low-intent or loosely correlated actions often degrades performance by shifting optimization toward behaviors that don’t correlate with revenue.
Machine learning systems don’t evaluate the strategic relevance of a signal. They evaluate frequency, consistency, and predictability.
When an account includes a mix of high- and low-intent micro-conversions — purchases, add-to-carts, pageviews, video plays, and soft leads — the system doesn’t inherently understand which actions matter most to the business.
Without a clear value hierarchy, the bidding algorithm treats all signals as valid optimization targets. This creates a structural bias toward high-frequency, low-value actions because they’re easier and cheaper to achieve. The result is a bidding pattern that maximizes conversion volume while minimizing business impact.
Why value-based bidding helps, but can’t fix everything
Many practitioners advocate for value-based bidding, where each micro-conversion is assigned a relative financial or hierarchical value. In theory, this helps the system understand which signals matter most. You can also instruct the platform to maximize conversion value, which should push the algorithm toward higher-value purchases or sales-qualified leads (SQLs).
But value-based bidding isn’t a complete solution. When too many micro-conversions are included — even with assigned values — the system can still become overwhelmed. A high volume of low-intent signals can dilute intent and distort the value hierarchy.
The issue isn’t just a lack of context.
Every signal becomes part of the optimization math. If the model weighs signals by volume rather than business importance, low-intent micro-conversions will dominate. Assigning values helps clarify priorities, but it can’t override signal imbalance. At a certain point, the math wins.
How PPC bidding follows the path of least resistance
In practice, this shows up as a “path of least resistance” problem.
Even with values assigned, bidding algorithms still optimize toward the signals they’re given. When low-intent micro-conversions are included as Primary actions, the system treats them as efficient ways to increase conversion volume. This isn’t an error. It’s expected behavior for a model designed to maximize conversions within a set budget.
When those signals occur more frequently, the system gravitates toward them. A signal that fires hundreds of times a day will exert more influence than a high-value action that fires only a handful of times per week.
This dynamic is especially visible in PMax. The system evaluates signals across channels, audiences, and placements, and pursues the cheapest, most abundant path to conversion. If a contact page visit or key pageview is treated as a Primary signal, PMax may prioritize it over a purchase or SQL because it’s easier to achieve at scale.
That’s why PMax often reports strong conversion volume and low CPA while revenue remains flat or declines. The system is performing as instructed, but the inputs lack a disciplined signal hierarchy. Value-based bidding improves structure, but without restraint in the number and type of signals, it can’t fully prevent the problem.
When low-value actions are tracked as Primary conversions, platform-reported performance becomes disconnected from business outcomes. Metrics such as CPA, ROAS, and conversion rate may improve, but those gains are often illusory.
For example:
A campaign may show a 40% reduction in CPA because the system is optimizing toward pageviews rather than purchases.
ROAS may increase because the system attributes inflated value to actions that don’t correlate with revenue.
Conversion volume may spike due to high-frequency micro-conversions.
These patterns create a false sense of success, leading advertisers to scale budgets prematurely and erode contribution margin.
Diluted intent and double-counting
When multiple micro-conversions are tracked as Primary, a single user journey can generate multiple wins for the algorithm.
For example, a user who views a product page, signs up for a newsletter, and adds an item to cart may be counted as three conversions from a single click. If values are assigned to each step, conversion value and ROAS become inflated as well.
This inflates conversion volume, inflates conversion value, and distorts bidding behavior. The system interprets this as a high-value user and begins overbidding on similar traffic, even if the user never completes a purchase.
In many accounts, micro-conversions outnumber real conversions by a ratio of 500 to 1 or more. This imbalance has significant implications for bidding behavior.
When frequency overwhelms value
If an account records 500 pageviews, 200 add-to-carts, 50 lead form starts, and 10 purchases, and treats every one of those actions as Primary, the system receives 760 signals, only 10 of which actually matter.
Without distinct values, the algorithm can’t differentiate between a $0.05 action and a $500 action. It optimizes toward the most frequent signals because they provide the clearest path to increasing conversion volume.
Even when values are assigned, overvaluing micro-conversions teaches the algorithm to pursue easy wins. The result is a maximized conversion value metric that looks strong in the dashboard but isn’t reflected in actual sales.
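To make the imbalance concrete, here is a minimal Python sketch using the counts above. The per-action dollar values are illustrative assumptions, not figures from any real account; the point is how far signal share can drift from value share when every action is Primary.

```python
# Illustrative sketch: signal share vs. value share when all actions are Primary.
# Counts match the example above; per-action values are assumed for illustration.
signals = {
    "pageview":        {"count": 500, "value": 1},
    "add_to_cart":     {"count": 200, "value": 5},
    "lead_form_start": {"count": 50,  "value": 20},
    "purchase":        {"count": 10,  "value": 500},
}

total_count = sum(s["count"] for s in signals.values())
total_value = sum(s["count"] * s["value"] for s in signals.values())

for name, s in signals.items():
    signal_share = s["count"] / total_count
    value_share = (s["count"] * s["value"]) / total_value
    print(f"{name:16} {signal_share:6.1%} of signals, {value_share:6.1%} of value")
```

Purchases end up as barely 1% of the signals the system sees, even though they carry most of the value, which is exactly the imbalance a frequency-driven optimizer exploits.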
The consequences of signal imbalance
When micro-conversions dominate the signal mix:
Bidding shifts toward low-intent traffic because it produces more conversions.
Budgets are allocated inefficiently as the system chases cheap signals.
Real ROAS declines, even as platform-reported ROAS appears strong.
Scaling becomes risky because the system is optimizing toward the wrong outcomes.
That’s why accounts with high micro-conversion volume often show strong platform metrics but weak financial performance.
When micro-conversions stop helping
Micro-conversions are useful when an account lacks enough real conversion volume to support stable bidding. However, once a campaign consistently reaches 30 to 60 real conversions per month, they no longer provide meaningful benefit.
At that point, the system has enough high-quality data to optimize effectively. Continuing to rely on micro-conversions introduces unnecessary noise and increases the risk of misaligned bidding.
This is the point to transition from tCPA to tROAS and let real revenue guide optimization.
A four-part litmus test for Primary actions
Primary actions influence bidding, while Secondary actions provide visibility without affecting optimization. This four-part litmus test helps determine which actions should be treated as Primary.
1. The volume threshold
Micro-conversions should be used only when real conversion volume isn’t sufficient to support stable bidding. As a general guideline:
Below 30 real conversions per month: A high-intent micro-conversion may be needed to give the system enough data.
30 to 60 real conversions per month: Begin reducing reliance on micro-conversions.
60 or more real conversions per month: Remove micro-conversions from Primary status and rely on revenue-based optimization.
This threshold ensures micro-conversions serve as a temporary bridge, not a permanent crutch.
2. The necessary step test
A Primary action should represent a required step in the conversion journey, such as:
Add to cart.
Begin checkout.
Start lead form.
Actions that aren’t required steps — such as contact page visits, whitepaper downloads, or time on site — shouldn’t be treated as Primary. These may indicate interest, but they don’t reliably predict revenue.
3. The valuation test
If an action can’t be assigned a realistic financial value, it shouldn’t be used as a Primary conversion. Assigning arbitrary values introduces risk and can distort bidding behavior.
Actions such as time on site or scroll depth fail this test because they don’t consistently correlate with revenue. However, if CRM data shows a reliable statistical correlation with revenue, that can justify including the action.
4. The simplicity test
Even if multiple actions pass the first three tests, only the strongest one or two should be designated as Primary. Including too many Primary actions increases the risk of double-counting and overbidding.
A streamlined Primary set ensures the system focuses on the most meaningful signals.
Use Secondary conversions as a diagnostic tool
Secondary conversions provide visibility into user behavior without influencing bidding. They’re a useful diagnostic tool for understanding funnel performance and evaluating new signals.
Visibility without optimization risk
Tracking actions such as newsletter signups, video views, or soft leads as Secondary lets you monitor engagement without shifting bidding toward low-value behaviors.
This approach preserves data integrity while maintaining control over optimization.
Funnel analysis and bottleneck identification
Secondary conversions reveal where users drop off in the funnel. For example:
High Add-to-Cart volume but low purchase volume indicates checkout friction.
High MQL volume but low SQL volume suggests targeting or qualification issues.
These insights support more informed optimization decisions.
Safe testing environment
New signals should be tracked as Secondary for several weeks before being considered for Primary status. This allows you to evaluate frequency, correlation with revenue, stability, and predictive value.
Only signals that demonstrate consistent value should be promoted to Primary.
Assign micro-conversion values using a safety discount
When micro-conversions are used, they must be assigned values that reflect their true contribution to revenue. Overvaluing micro-conversions is a common cause of inflated platform performance and misaligned bidding.
Calculating baseline value
The baseline value of a micro-conversion is determined by:
Baseline value = Conversion rate to sale x Average order value (AOV) or profit
For example:
Ecommerce: If 25% of add-to-carts convert and AOV is $1,600, the baseline value is $400.
Lead generation: If 10% of demo requests convert to $5,000 profit, the baseline value is $500.
Applying the 25% safety discount
The baseline value shouldn’t be used directly. Instead, apply a 25% reduction:
$400 becomes $300.
$500 becomes $375.
This discount helps prevent overbidding by ensuring the system doesn’t overvalue micro-conversions relative to actual revenue.
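The math is simple enough to script. Below is a minimal Python sketch of the baseline formula plus the 25% safety discount, using the two examples above; the function name is mine, not a platform field.

```python
def discounted_micro_value(rate_to_sale: float, value_per_sale: float,
                           safety_discount: float = 0.25) -> float:
    """Baseline value = conversion rate to sale x AOV (or profit),
    reduced by a safety discount so the platform never overvalues the action."""
    baseline = rate_to_sale * value_per_sale
    return baseline * (1 - safety_discount)

# Ecommerce: 25% of add-to-carts convert, $1,600 AOV -> $400 baseline, $300 after discount
print(discounted_micro_value(0.25, 1600))  # 300.0
# Lead gen: 10% of demo requests convert at $5,000 profit -> $500 baseline, $375 after discount
print(discounted_micro_value(0.10, 5000))  # 375.0
```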
Undervaluing is safer than overvaluing
Undervaluing micro-conversions may slightly slow learning, but it doesn’t distort bidding. Overvaluing them can push the system toward low-intent traffic, leading to rapid budget misallocation.
The safety discount provides a buffer that protects contribution margin while still supplying useful data.
Where PPC experts draw the line on micro-conversions
Practitioners consistently point to the same principle: signal discipline matters more than signal volume.
Julie Friedman Bacchini emphasizes that every conversion action becomes a signal the system optimizes toward. Using more than one Primary action introduces ambiguity — “it’s suddenly muddier” — and skipping values makes it easier for the system to latch onto lower-value signals. Values don’t need to be exact, but they must be relative.
She also notes that micro-conversions can help low-volume campaigns reach data thresholds, but they aren’t a substitute for real Primary conversions. Removing them later can mean “starting over to a large extent on system learning.”
Jordan Brunelle takes a similarly disciplined approach: “There can definitely be too many.” He recommends starting with one strong signal of intent and watching the ratio between micro-conversions and real outcomes. If volume is high but outcomes are low, it often signals a targeting or signal issue.
Signal discipline is the real competitive advantage
The debate around micro-conversions often focuses on quantity. But the real differentiator isn’t volume, but discipline.
Bidding systems optimize toward the signals they’re given. When the signal mix is cluttered, performance drifts. When it’s clear and intentional, the system aligns with real business outcomes.
Micro-conversions should be selectively used and continuously evaluated. Start with a simple audit:
Identify all Primary conversions. If more than two or three actions are Primary, the account is likely over-signaled.
Apply the litmus test. Remove any Primary actions that fail the volume, necessary step, valuation, or simplicity tests.
Move nonessential actions to Secondary.
Assign conservative values to the remaining micro-conversions, using the safety discount to avoid overbidding.
Monitor performance for 30 days, focusing on revenue, contribution margin, and signal distribution.
Micro-conversions should be a temporary bridge. Once real conversion volume is sufficient, optimization should be guided by revenue. A disciplined signal architecture gives automation what it needs to perform as intended: efficient, predictable, and aligned with real business outcomes.
If you’re a lawyer, college administrator, or financial services provider, you’ve likely seen the frustrating “Eligible (Limited)” status in your Google Ads account. It can feel like you’re fighting Google with one hand tied behind your back when your remarketing lists, exact match keywords, and more don’t work as intended.
While it might feel like Google Ads is out to get you when you operate in a so-called “sensitive interest category,” there are specific reasons for these rules. More importantly, there are specific ways to succeed despite them.
This article will cover what the personalized advertising policies are, what they mean for your account, and five specific tactics you can use to succeed with Google Ads.
Why does Google have personalized advertising policies?
Google provides detailed explanations in its official policy documentation, but it comes down to two things: legal requirements and ethical standards.
In the United States, for example, the Fair Housing Act and employment laws prevent discrimination based on age, gender, or location. If you’re advertising a job opening or a new apartment complex, Google can’t allow you to exclude people based on those demographics because doing so would be against the law.
Then there’s the ethical side. Imagine you’re running a rehab center. If someone visits your site, Google’s “sensitive interest” policy prevents you from following them around the internet with targeted banner ads like, “Still struggling with addiction? Come to our clinic.”
That kind of remarketing is intrusive and, frankly, predatory when it targets someone’s health and struggles. To protect the user experience and maintain a sense of privacy, Google limits how personal data can be used in these high-stakes industries.
What can’t you do in a sensitive interest category?
If you fall into one of these categories — housing, employment, credit, healthcare, or legal services — the biggest impact is usually on your audience targeting.
Here’s what you can’t use:
Website or App Remarketing Lists, including the Google-engaged audience: You can’t target people who have previously visited your website or used your app.
Customer Match: You can’t upload your own email lists or phone numbers to target existing clients.
YouTube Audiences: You can’t target people based on how they’ve interacted with your videos.
Custom Segments: You aren’t allowed to build specialized audiences based on specific search terms or types of websites people visit.
For certain categories in certain countries, like housing, credit, and employment in the United States, there’s further “demographic stripping” — you can’t target by age, gender, parental status, or ZIP code. Your Smart Bidding strategies won’t use these signals as inputs either.
The good news: What can you do in a sensitive interest category?
It’s easy to focus on what’s gone, but what still works is a much longer list. Even in a restricted industry, you still have access to the core engine of Google Ads. You can still use:
Keywords, feeds, and keywordless technology: These rely on intent (queries) rather than identity, so they are perfectly fine in Search, Shopping, and Performance Max.
Google’s audiences: Affinities, In-Market, Detailed demographics, and Life Events segments are still fully at your disposal, where eligible, in Demand Gen, Display, Video, Search, and Shopping.
Optimized targeting: Google’s AI can still find people likely to convert based on your historical converters, in Demand Gen, Display, and Performance Max.
Content Targeting: You can choose to show your ads on specific keywords, topics, and placements in Display and Video campaigns.
Conversion tracking: Yes, you can still track conversions and use features like Enhanced Conversions, Offline Conversion Import, and Consent Mode. While your internal legal team may have reservations or restrictions around your website tracking, Google’s Personalized advertising policy doesn’t restrict any conversion tracking.
5 strategies to win in sensitive categories
If you want to move the needle without relying on remarketing, you need to rethink your account structure and messaging. Here are five things you can do right now.
1. The “Separate Domain” strategy
If your business offers a mix of services — some sensitive, some not — don’t let the sensitive ones “poison” your whole account. Think of a spa that offers haircuts, pedicures, and Botox. Haircuts are fine; Botox is a medical procedure that triggers sensitive category restrictions.
If you put them all on one site, your entire remarketing capability might get shut down. Consider putting the sensitive service on a separate domain and a separate Google Ads account. This lets you use every available tool for your main business while the sensitive portion operates under the necessary restrictions.
2. Choose Demand Gen over Display
If you want to use image or video ads, use Demand Gen instead of the standard Display Network. In my experience, Demand Gen delivers higher-quality audiences and tends to perform better in restricted niches.
3. Lean into Phrase and Broad Match
You might be tempted to stick to Exact Match keywords to keep things tight. However, in sensitive categories, Google may restrict ads on very narrow, specific queries for privacy reasons. If your Exact Match keywords aren’t getting impressions, try Phrase or Broad Match. This gives the algorithm more room to find users searching for the same thing with slightly different phrasing that may be less restricted.
Think of it like fishing: if you can’t use a spear, use a net. You’ll catch some fish you don’t want, but that tradeoff helps you catch the ones you do want more easily.
4. Feed the AI with offline conversion tracking
Most businesses in these categories, such as law firms or banks, don’t make sales on their websites. The website generates a lead, and the sale happens over the phone or in an office.
If you want Google to find better users, you must feed that real-world data back into the system. Use Offline Conversion Tracking (OCT) to show Google which leads became customers. Even if you must navigate HIPAA or other privacy regulations, there are ways to do this safely.
Consult your legal team, but don’t skip this step. It’s the best way to train the algorithm when you can’t use your own audiences and to ensure Smart Bidding works at its full potential.
5. Creative-led targeting
When you can’t tell Google who to target with a list, you have to tell the user who the ad is for through your creative. Your headlines and images should qualify the lead.
Be specific in your copy. For example, instead of “Need a Lawyer?” try “Defense Attorney for Small Business.” This attracts your target audience and encourages people who aren’t a fit to scroll past, saving you money and improving your conversion rate.
Running Google Ads in a sensitive category is a challenge, but it’s far from impossible. By shifting your focus from who the person is to what they’re looking for and how you speak to them, you can still drive incredible results.
This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.
AI has changed how I work after nearly two decades in digital marketing. The shift has been meaningful, freeing up time, reducing the grinding parts of the job, and making some genuinely hard tasks faster.
That doesn’t mean it does the work for you, transforms everything overnight, or saves you 40 hours a week. In real-world SEO, with real clients and real deadlines, it’s a tool that makes parts of the job easier, not something that replaces the work itself.
Here are 20 ways I actually use it. Some are specific to SEO. Some are broader, but relevant to anyone working in the industry. All of them are practical, tested, and honest about their limitations.
Content creation and copywriting
1. Writing first drafts
The single best way to use AI for content is to stop expecting it to produce something publishable and start treating it as a very fast first-draft machine.
Feed it your brief, your target keyword, your audience, and your angle. Get a structure back.
Then rewrite it in your voice. Add in the expertise that only you know, not a vanilla version of what’s online.
The content AI produces out of the box is average. Your job is to make it good. Reference real-life stories, case studies, and statistics, and showcase your personal viewpoint and expertise.
The time savings are in not starting from a blank page.
2. Generating meta title and description variations
Give Claude or ChatGPT your target keyword, page topic, and character limits. Ask for 10 variations of your meta title and descriptions. You’ll use one, maybe combine two, but the process takes two minutes instead of 20. For large sites with hundreds of pages, this alone is worth the subscription.
Many tools allow you to upload CSV files, add AI’s suggested ideas, and download them for review. Don’t skip this step. A human eye is where the value sits.
3. Refreshing underperforming content
Paste an existing page or blog post that has dropped in rankings. Ask AI to identify what’s missing, what could be expanded, and what feels outdated.
It won’t always be right, but it gives you a starting point instead of reading the whole thing yourself with fresh eyes you don’t have at 4 p.m. on a Thursday.
Make sure to give context. Long prompts with lots of detail will produce much better results than pasting a page in cold.
4. Generating FAQ sections
Prompt AI to generate the 10 most common questions for your target keyword. Cross-reference with People Also Ask and your own research.
Answer them, and you now have an FAQ section, featured snippet opportunities, and a content gap analysis in about 10 minutes.
5. Writing alt text at scale
Nobody enjoys writing alt text for 200 product images. Describe the image, give it the context of the page it sits on, and include the target keyword. Then ask for alt text that’s descriptive and naturally includes the term where relevant. It’s not glamorous, but it’s necessary and faster.
You can also run a website through Screaming Frog, export it to a CSV file, upload it to your AI of choice, and ask it to write the alt text. This only works well if the file names are descriptive, and again, a human eye is key. This is about increasing speed, rather than handing it over to AI completely.
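If you go the CSV route, a short script can handle the batching. This is a rough sketch under a few assumptions: the export has an Address column for the image URL (check your own column headers), the page context string is something you supply yourself, and it uses the OpenAI Python SDK; swap in whatever model and prompt wording you prefer.

```python
import csv
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()
PAGE_CONTEXT = "Product listing page for handmade ceramic mugs"  # describe the page yourself

with open("images_export.csv", newline="", encoding="utf-8") as f_in, \
     open("images_with_alt.csv", "w", newline="", encoding="utf-8") as f_out:
    reader = csv.DictReader(f_in)
    writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames + ["Suggested Alt Text"])
    writer.writeheader()
    for row in reader:
        prompt = (
            f"Write descriptive alt text under 125 characters for an image with the "
            f"file name '{row['Address']}' on this page: {PAGE_CONTEXT}. "
            f"Only include a keyword if it fits naturally."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        row["Suggested Alt Text"] = resp.choices[0].message.content.strip()
        writer.writerow(row)
```

Review every row before anything goes live; the script buys speed, not sign-off.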
6. Explaining technical issues in plain English
Not everyone working in SEO has a developer background. AI is useful for:
Translating technical error messages.
Explaining what a server log is telling you.
Helping you understand why a page is excluded from indexing.
Paste in the output, ask it to explain it in plain English, and then ask what the fix should be. Verify the answer, but it gets you most of the way there.
7. Writing schema markup
Schema is one of those things everyone knows they should be doing more of, and nobody finds especially enjoyable.
Describe the content of your page to your AI of choice, tell it what schema type is relevant (FAQ, Article, LocalBusiness, Product, etc.), and ask it to generate the JSON-LD.
Check it in Google’s Rich Results Test before implementation. This used to take me 20 minutes per page type. Now it takes five.
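If you want to sanity-check what the AI hands back (or generate the markup yourself), a plain Python dict keeps the structure readable. A minimal FAQPage sketch with placeholder copy:

```python
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long does delivery take?",  # placeholder question
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Standard delivery takes 3-5 working days.",  # placeholder answer
            },
        },
    ],
}

# Drop the output into a <script type="application/ld+json"> tag,
# then run it through Google's Rich Results Test before publishing.
print(json.dumps(faq_schema, indent=2))
```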
8. Creating regex for Google Search Console
If you use regex in GSC filters and you’re not a developer, AI is your new best friend. Describe what you’re trying to filter, for example, all URLs containing a specific subfolder, or all queries including a particular term, and ask for the regex string.
It gets it right more often than not, and you can ask it to explain the logic so you actually understand what you’re implementing.
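As a quick illustration, the two filters described above boil down to patterns like these. The subfolder and query terms are placeholders; GSC's regex filter uses RE2 and matches partially by default, which Python's re module approximates closely enough for a local sanity check.

```python
import re

url_pattern = r"/blog/"              # all URLs containing a specific subfolder (placeholder)
query_pattern = r"(site|seo) audit"  # all queries containing either term (placeholder)

print(bool(re.search(url_pattern, "https://example.com/blog/fix-404s/")))  # True
print(bool(re.search(query_pattern, "free seo audit tool")))               # True
```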
9. Analyzing crawl data with prompts
If you export a crawl from Screaming Frog or Sitebulb and you’re not sure what to prioritize, paste the summary data into your AI tool and ask it to help you identify the highest-priority issues based on the site’s goals.
It won’t replace your expertise, but it’s a useful sounding board when you’re staring at a spreadsheet with 47 issues and a client call in an hour.
10. Drafting report commentary
This is one of the most underrated uses of AI in SEO work. You have the data. You have the graphs. What takes time is writing the commentary that explains what happened, why, and what comes next.
Feed AI your key metrics and the context of what was happening that month (algorithm updates, campaign launches, seasonality), and ask it to draft the narrative section of your report. Edit it, add your actual insight, but stop writing it from scratch every month.
You can even upload reports from various data sources and ask it to combine and summarize them. This saves me hours every month when I’m putting together reports.
11. Summarizing long reports for clients
Not every client wants to read a 12-page report. Ask AI to summarize your report into a five-bullet executive summary. Give it to clients at the top of the document.
The ones who want details will read on. The ones who don’t will feel informed without asking you to talk them through every chart on the next call.
Ask AI to create the executive summary for someone who doesn’t know anything about SEO, and it’ll give you something simple and easy to understand.
12. Identifying anomalies in data
Paste a table of your keyword rankings or traffic data, and ask AI to flag anything that looks unusual, including significant drops, unexpected gains, or patterns that don’t match the previous period.
It won’t replace proper analysis, but it’s a useful first pass when you’re managing a large amount of information and can’t give every dataset the attention it deserves.
13. Brainstorming competitor content gaps
List your top three competitors and your own site. Ask AI to help you think through what content topics they’re likely covering that you’re not, based on their positioning and audience.
Then, validate that with actual keyword research tools. AI can’t see competitor data directly, but it’s useful for hypothesis generation before you do the manual work.
14. Understanding a new industry quickly
When you take on a client in an industry you don’t know well, you need to get up to speed fast. Ask your AI to give you a primer on the industry:
Key terminology.
The main players.
The buying cycle.
How people typically search for solutions in this space.
What the common pain points are.
It saves you an embarrassing amount of time in discovery calls.
15. Identifying search intent mismatches
Paste a list of your target keywords and ask AI to categorize them by search intent: informational, navigational, commercial, and transactional. Then compare that against the page type you’re targeting them with.
You’ll almost certainly find mismatches. This is a task that’s straightforward to describe, but tedious to do manually across hundreds of keywords.
16. Drafting difficult client emails
Everyone has had to write a difficult email, whether it’s explaining why rankings have dropped, why a deadline was missed, or why the client needs to do something you know they don’t want to do.
These emails take a disproportionate amount of emotional energy to write. Give your AI the situation, the context, and what you need the client to understand or do, and ask for a draft that’s clear, professional, and honest.
Edit it. Send it. Move on.
17. Writing SOPs and process documentation
If you’ve been meaning to document your processes and just haven’t gotten around to it, AI removes the excuse.
Describe a process out loud (or in rough notes), paste it in, and ask for a structured SOP with numbered steps, decision points, and notes.
The first version will need editing, but having a framework to work from is the difference between getting it done and it sitting on the to-do list for another quarter.
18. Preparing for client calls
Before a client call, paste in your recent report data, any issues from the previous month, and what you need to cover.
Ask your AI to help you structure the agenda and anticipate questions the client might ask based on the data. You’ll go into the call more prepared and less likely to be caught off guard.
Productivity and admin
19. Processing your own thinking
This one sounds vague, but it’s one of the ways I use AI most.
When I have a problem I can’t get clear on, a strategy decision I’m going back and forth on, or a piece of work I can’t find the right angle for, I talk it through with Claude (my AI buddy of choice) to clarify my own thinking. It asks questions, reflects things back, and helps me arrive at a point of view faster than I would staring at a blank document.
Ask your AI to be brutally honest with you. Otherwise, it’ll just keep agreeing with you and telling you that you’re truly an expert on every topic.
20. Building prompts you actually reuse
The biggest productivity gain from AI isn’t any individual use. It’s building a library of prompts that work for your specific workflow and reusing them consistently.
Every time you get a good result from an AI tool, save the prompt. Over time, you build a system, rather than starting from scratch every time. This is the thing most people skip, and it’s the thing that compounds.
Top tip: In the paid version of many AI tools, you can create projects and have specific instructions for each one. This is invaluable for saving time by not having to include all of this information in every prompt you use.
None of these tips replace the expertise, judgment, and client relationships that make a good SEO professional.
AI doesn’t know the business the way you do. It doesn’t understand the nuance of an industry, the history of an account, or the particular quirks of a contact you deal with regularly.
AI reduces the time spent on tasks that don’t require that expertise, so you have more of it available for the work that does.
Use AI as a tool. Stay skeptical of the hype. And for the love of good search results, edit everything before it goes anywhere near a client.
Barry Adams recently published “Google Zero is a Lie” in his SEO for Google News newsletter, arguing that the narrative of Google traffic disappearing is false and dangerous.
His data backs it up. Similarweb and Graphite data show only a 2.5% decline in Google traffic to top websites globally. Google still accounts for nearly 20% of all web visits.
The widely cited Chartbeat figure showing a 33% decline? It’s skewed by a handful of large publishers hit by algorithm updates. Publishers who abandon SEO in the face of this panic are making a self-fulfilling prophecy, ceding traffic to competitors who keep optimizing.
He’s right. And he’s looking at the wrong problem.
Humans are still clicking Google results. What has changed is that a growing share of your visitors isn’t human at all.
That non-human share includes everything from scrapers to brute-force login bots. But the fastest-growing segment is AI crawlers.
AI crawlers now represent 51.69% of all crawler traffic, surpassing traditional search engine crawlers at 34.46%, Cloudflare’s 2025 Year in Review found. AI bot crawling grew more than 15x year over year. Cloudflare observed roughly 50 billion AI crawler requests per day by late 2025.
Akamai’s data tells a similar story: AI bot activity surged 300% over the past year, with OpenAI alone accounting for 42.4% of all AI bot requests.
So while Adams is correct that human Google traffic hasn’t collapsed, something else is happening on the other side of the server logs.
Anthropic’s ClaudeBot crawls 23,951 pages for every single referral it sends back to a website. OpenAI’s GPTBot: 1,276 to 1. Training now drives nearly 80% of all AI bot activity, up from 72% the year before.
Compare that to traditional Googlebot, which has always operated on a crawl-and-send-traffic-back model. Google crawls your site, indexes it, and sends 831x more visitors than AI systems. The deal was simple: let me read your content, and I’ll send you people who want it.
Google’s newer AI Mode is worse. Semrush data shows a 93% zero-click rate in those sessions. AI Overviews now trigger on roughly 25-48% of U.S. searches, depending on the dataset, and that number keeps climbing.
And when Google’s AI features do cite sources, they’re increasingly citing themselves. Google.com is the No. 1 cited source in 19 of 20 niches, accounting for 17.42% of all citations, an SE Ranking study of over 1.3 million AI Mode citations found. That tripled from 5.7% in June 2025. Add YouTube and other Google properties, and they make up roughly 20% of all AI Mode sources.
So the old deal is being rewritten even by Google. AI crawlers from other companies skip the pretense entirely: let me read your content so I can answer questions about it without ever sending anyone your way.
The agentic shift
The bot traffic numbers are already here. The next wave is bigger: AI agents acting on behalf of humans.
In 2024, Gartner predicted that traditional search engine traffic would drop 25% by 2026 as AI chatbots and agents handle queries. That prediction is tracking. Its October 2025 strategic predictions go further: 90% of B2B buying will be AI-agent intermediated by 2028, pushing over $15 trillion in B2B spend through AI agent exchanges.
This isn’t theoretical.
Salesforce reported that AI agents influenced 20% of all global orders during Cyber Week 2025, driving $67 billion in sales.
Retailers with AI agents saw 13% sales growth compared to 2% for those without.
Gartner says 40% of enterprise applications will have task-specific AI agents by the end of 2026, up from less than 5% in 2025. eMarketer projects AI platforms will drive $20.9 billion in retail spending in 2026, nearly 4x 2025 figures.
Think about what that looks like in practice. An AI agent researches vendors for a procurement team. It doesn’t see your hero banner. It doesn’t notice your trust badges. It reads your structured data, compares your specs to those of three competitors, and builds a shortlist.
That “visit” might show up in your analytics as a bot hit with a zero-second session duration. Or it might not show up at all.
So what do you optimize for when the visitor is a machine making decisions for a human?
It’s not the same as traditional SEO. And it’s not the same as the AI Overviews optimization most people are focused on right now. AI Overviews are still Google. Still one search engine, still largely the same ranking infrastructure, still (mostly) one answer format.
Agentic SEO is about being useful to software that’s pulling from search APIs, crawling directly, and using LLM reasoning to make recommendations. That software doesn’t care about your page layout. It cares about whether it can extract what it needs.
I think a few things start to matter a lot more.
Structured data becomes load-bearing
Schema markup has always been a “nice to have” for rich snippets. When an AI agent compares your product to three competitors, structured data lets it read your specs without having to guess. Think product schema, FAQ schema, and pricing tables in clean HTML. These go from SEO hygiene to core infrastructure.
AI agents don’t search for “best CRM for small business.” They ask compound questions: “Which CRM under $50/user/month integrates with QuickBooks and has a mobile app with offline capability?” If your content only answers the first version, you’re invisible to the second.
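To make that concrete, here is a sketch of Product markup an agent could actually parse and compare. Every value is a placeholder invented for this example, but notice that the price, integration claim, and availability live in machine-readable fields rather than in a banner the agent never renders.

```python
import json

# Placeholder product data: the specs an AI agent would compare live in fields, not visuals.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example CRM - Starter Plan",
    "description": "CRM with QuickBooks integration and an offline-capable mobile app.",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
    "offers": {
        "@type": "Offer",
        "price": "39.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_schema, indent=2))  # embed in a <script type="application/ld+json"> tag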
Freshness and accuracy get audited differently
A human might not notice your pricing page is 8 months stale. An AI agent cross-referencing your pricing against competitors will flag the discrepancy. Or worse, use the outdated number in its recommendation and cost you the deal.
Blocking AI crawlers feels protective, but it means AI agents can’t recommend you. Allowing them means your content trains models that may never send you traffic. There’s no clean answer.
But pretending it’s just a technical setting is a mistake. New IETF standards are emerging to give publishers more granular control, but they’re not widely adopted yet.
Most analytics setups can’t tell the difference between a human visit, a bot crawl, and an AI agent evaluating your site on someone’s behalf. GA4 filters most bot traffic. Server logs show the raw picture, but take work to parse. Even then, figuring out whether an AI agent’s visit led to an actual sale is basically impossible right now.
This is where the “Google Zero” framing does real damage.
If you’re only measuring organic sessions from Google, you’re blind to a channel that doesn’t show up in that number. Your traffic could look stable while an AI agent steers $50,000 in annual spend to your competitor because their product schema was more complete.
I don’t think we have good measurement for this yet. Nobody does. But ignoring the problem because Google sessions look fine is like checking your print ad response rate in 2005 and deciding the web wasn’t worth paying attention to.
I don’t have a playbook for this. It’s too new. But I can tell you what we’re doing at our agency.
Audit your structured data like it’s your storefront: Evaluate whether your website’s schema is present and well-formed. Look into structured data, content structure, and technical health. Make sure product, service, FAQ, and organization markup is complete, accurate, and current. This is table stakes.
Answer compound questions: Look at your top landing pages. Do they answer the specific, multi-variable questions an AI agent would ask? Or just the broad keyword query a human would type?
Check your server logs: Look for GPTBot, ClaudeBot, PerplexityBot, and other AI user agents. Understand how much of your traffic is already non-human. If you’re on Cloudflare, their bot analytics dashboard makes this easy without parsing raw logs. You’ll probably be surprised either way; a minimal log-parsing sketch follows this list.
Make a conscious robots.txt decision: Understand the trade-offs, and make it a business decision with your leadership team.
Start tracking AI citations: Tools like Semrush, Scrunch, DataForSEO, and others can show when AI platforms mention your brand. The data is directional, not precise. But it’s better than nothing.
Don’t abandon Google SEO: Adams is right that Google traffic is still massive and still valuable. The agentic web doesn’t replace Google. It adds a new layer. You need both.
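Here is the log check mentioned above as a minimal Python sketch. It assumes a standard combined-format access log at a hypothetical path and matches the user-agent substrings those crawlers publicly document (plus CCBot and Bytespider, two other common AI-related crawlers); verify the list against your own logs before drawing conclusions.

```python
from collections import Counter

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot", "Bytespider"]
LOG_PATH = "access.log"  # hypothetical path; point this at your real access log

counts, total = Counter(), 0
with open(LOG_PATH, encoding="utf-8", errors="ignore") as log:
    for line in log:
        total += 1
        for bot in AI_BOTS:
            if bot in line:  # user-agent strings appear in combined-format log lines
                counts[bot] += 1
                break

for bot, hits in counts.most_common():
    print(f"{bot:14} {hits:8} requests ({hits / total:.1%} of logged requests)")
```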
The real question
The “Google Zero” argument pits one extreme against another, even as the actual shift is quieter and more important.
The web is becoming a place where the majority of visitors are machines. Some send traffic back. Most don’t. Some of them make purchasing decisions on behalf of humans. That number is growing fast.
The SEOs who do well here won’t be the ones arguing about whether Google traffic moved 2.5%. They’ll be the ones who figured out how to be useful to both human visitors and the AI agents acting on their behalf.
We’ve spent 25 years optimizing for how humans find things. Now we need to figure out how machines find things for humans.
That’s not Google Zero. We don’t have a name for it yet. But it’s already here.
If you want to go deeper on GEO and agentic SEO, I’m teaching an SMX Master Class on Generative Engine Optimization on April 14. It covers structured data implementation, AI visibility measurement, content optimization for AI systems, and the practical side of everything in this article.
LinkedIn is one of the most powerful platforms for recruiting top-tier talent. It’s also one of the easiest places to waste budget if campaigns aren’t structured correctly.
Many recruitment campaigns fail because they prioritize visibility over intent. More impressions don’t equal better hires. Broad targeting and generic messaging often lead to an influx of unqualified applicants, driving up cost-per-hire and slowing down hiring timelines.
The most effective LinkedIn recruitment strategies focus on one thing: attracting and converting high-intent candidates while filtering out poor-fit applicants before they ever click. Let’s break down exactly how to do that.
Shift your strategy: Optimize for intent vs. reach
The biggest mistake advertisers make on LinkedIn is targeting based solely on job titles, industries, and years of experience.
While this may generate volume, it rarely produces efficiency. Instead, high-performing campaigns are built around intent-based targeting — reaching candidates who are qualified and more likely to consider a new opportunity.
This requires a layered approach:
Core fit: Job titles, skills, and certifications.
Behavioral signals: Open-to-work status, group memberships, and engagement with industry content.
Career friction indicators: Burnout-prone roles, companies experiencing layoffs, and limited growth environments.
By combining these layers, you move beyond “who they are” and begin targeting why they might be ready to make a change — which is where real performance gains happen.
Your ad creative isn’t just there to attract attention. It should actively filter your audience. One of the most effective ways to control cost-per-hire is to discourage unqualified candidates from clicking in the first place.
Strong recruitment ads follow a structured approach:
Call out a specific pain point or identity: “Burned out from long shifts in healthcare?”
Clearly define who the role is for: “This role is designed for licensed RNs with 3+ years of experience.”
Highlight meaningful value: Think flexibility, compensation, career growth, or mission.
Set expectations upfront: “Not an entry-level position” or “Requires managing enterprise accounts.”
This combination of attraction and exclusion ensures that the candidates who do click on your ads are far more likely to convert.
Messaging: Career upgrades, better lifestyle, growth opportunities.
Outcome: Scalable pipeline of qualified candidates.
Cold passive talent (top funnel)
These are long-term potential candidates to start building your pipeline, with the intent to move them to the middle of the funnel and eventually the bottom of the funnel.
Target: Broader audiences and lookalikes.
Messaging: Employer brand, culture, “day in the life.”
Outcome: Reduces future acquisition costs over time.
Control costs through smarter bidding and optimization
LinkedIn’s ad platform can quickly become expensive without proper controls. Start with manual CPC bidding to maintain control, then test automated delivery once performance data is established.
More importantly, optimize for the right metrics. Focus on qualified applications instead of clicks. Track downstream actions, such as interview and hire rates.
Be prepared to make fast decisions. Ads with high click-through rates but low application rates often indicate poor alignment. Ads that generate many applications but few interviews signal weak pre-qualification.
Efficiency comes from eliminating wasted spend earlier rather than later. It conserves budget, minimizes overlapping audiences, and keeps your ads away from the wrong targets.
Improve conversion rates with a two-step application process
A common but costly mistake is sending candidates directly to long, complex application forms. Instead, use a two-step funnel:
1. Pre-qualification landing page: role overview and expectations, compensation transparency, and a clear “who this is (and isn’t) for.”
2. Application: a short form or LinkedIn Easy Apply.
This approach sets expectations, filters candidates, and significantly improves application quality — often reducing cost-per-hire by 30-50%.
Use retargeting to capture missed opportunities
Not every qualified candidate applies on the first interaction. Retargeting allows you to re-engage high-intent users who have already shown interest.
Build audiences from:
Career page visitors.
Job post viewers.
Video viewers (50%+ engagement).
Then serve follow-up messaging such as:
“Still considering a move?”
“Last chance to apply”
Employee testimonials or success stories.
Retargeting campaigns are often the most cost-efficient part of your entire strategy.
Advanced strategies to increase ROI
Once the fundamentals are in place, there are several advanced tactics that can further improve performance:
Competitor targeting: Target employees at competing companies and position your opportunity as a clear upgrade — whether through compensation, flexibility, or culture.
Skill-based campaign segmentation: Instead of grouping all candidates together, build campaigns around specific skills or certifications. This reduces competition in the ad auction and often lowers cost-per-click.
Selective use of Message Ads: Message ads can be effective for senior or hard-to-fill roles — but only when targeting is highly refined. Otherwise, they can quickly become cost-prohibitive.
Here’s an example of a successful LinkedIn InMail message that recently drove over 70% high-intent applications for an HVAC sales client:
Message body:
Hi [First Name],
This might be a stretch — but your background in HVAC sales caught my attention.
We’re hiring experienced sales reps who are tired of unpredictable commissions and weekend-heavy schedules.
This role is built for reps who:
Have 3+ years in HVAC or home services sales
Are comfortable running in-home consultations
Want a more stable, high-earning structure
What’s different:
No weekend appointments
Pre-qualified, inbound leads (no cold knocking)
Six-figure earning potential with consistency
That said, this isn’t a fit for entry-level reps or those new to sales.
If you’d be open to a quick 10-minute conversation to see if it’s worth exploring, I’m happy to share more.
If not, no worries at all — appreciate you taking a look.
— [Name]
Stating upfront the need for “experienced sales reps” immediately establishes relevance and increases response rates while reducing irrelevant replies.
Focusing on what matters to potential candidates, such as no weekend appointments and compensation structure, speaks to the audience’s needs versus the company’s.
Closing the conversation with the reminder that this isn’t an entry-level position weeds out wasted conversations and reduces cost-per-hire.
The most effective LinkedIn recruitment campaigns rely on better strategy.
When you focus on intent-based targeting, pre-qualification within ad creative, funnel segmentation, and conversion optimization, you create a system that attracts the right candidates while minimizing wasted spend.
Ultimately, reducing cost-per-hire is about reaching the right people, at the right time, with the right message.
YouTube used its NewFront presentation to unveil a significant upgrade to its Creator Partnerships platform, adding Gemini-powered creator matching, stronger measurement tools, and new ways to run creator content as paid ads.
Why we care. Influencer marketing has become a core part of many brands’ strategies, but it has two big friction points: finding the right creators at scale and proving ROI. This update tackles both.
Gemini-powered matching cuts through the noise of three million creators, while the ability to run creator content as paid Shorts and in-stream ads makes performance measurable like any standard campaign, backed by a reported 30% conversion lift.
How it works. The updated platform uses Gemini to recommend creators from a pool of more than three million YouTube Partner Program members, filtered by campaign goals. Advertisers get more control over who they work with and better visibility into how those partnerships perform.
The big new feature. A revamped Creator Partnerships boost lets brands run creator-made content directly as Shorts and in-stream ads — formats YouTube says deliver an average 30% lift in conversions.
The big picture. The announcement builds on BrandConnect, YouTube’s existing creator monetization infrastructure, showing that the platform is doubling down on the creator economy as a growth lever for advertisers — not just a content strategy.
Reddit ranks as the most-cited domain in AI-generated answers, followed by YouTube and LinkedIn, based on a new analysis of 30 million sources by Peec AI, an AI search analytics tool.
The findings. Reddit was the most-cited source across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews. YouTube, LinkedIn, Wikipedia, and Forbes also ranked in the top five. Review platforms like Yelp and G2 appeared often in recommendation queries.
The research showed which domains models rely on:
ChatGPT favored Wikipedia, Reddit, and editorial sites like Forbes.
Google leaned toward platforms like Facebook and Yelp.
Perplexity emphasized Reddit, LinkedIn, and G2 for B2B queries.
Why we care. To win in AI search, you need authority beyond your site. Brands that appear consistently across trusted third-party platforms are more likely to be cited.
Why these sources? AI systems prioritize perceived authority plus authentic user input:
Reddit leads because it captures real user discussions.
YouTube dominates video citations via transcripts and descriptions.
Wikipedia serves as both a live source and a training dataset.
About the data. The analysis covered 30 million sources across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews, measuring domains directly cited in answers to isolate what shapes responses.
A newly published, unverified report claims Google’s Gemini AI is instructed to mirror user tone and validate emotions while grounding its responses in fact and reality.
Why we care. If accurate, AI-generated search responses may vary based on how a query is phrased — not just the information available.
What’s new. The report centers on the inherent tension in the system-level instructions guiding how Gemini responds. The report, published by Elie Berreby, head of SEO and AI search at Adorama, suggested that Gemini is instructed to:
Match the user’s tone, energy, and intent.
Validate emotions before responding.
Deliver answers aligned with the user’s perspective.
What it means. The “overly supportive mandate frequently overrides the factual grounding,” Berreby wrote. So instead of acting as a neutral aggregator, AI answers may:
Reinforce negative framing (“Why is X bad?”).
Reinforce positive framing (“Why is X great?”).
If public perception is negative, AI may amplify it. As the report suggests:
AI reflects existing sentiment signals.
It doesn’t “balance” them the way blue links often do.
Query framing. The emotional framing of a query affects:
Which sources get cited.
How summaries are written.
The overall tone of the answer.
Google’s AI Overviews already show tone shifts, often aligning with query intent beyond keywords. This report offers a possible explanation.
Unverified. Google hasn’t confirmed the leak. As Berreby noted in his report: “I’ve decided to share only a fraction of the leaked internal system information with the general public. I’m not sharing any sensitive data. This isn’t a zero-day exploit. This is a tiny leak.”
Google is giving retailers more firepower to promote loyalty program benefits directly within product listings — expanding the program internationally and into its newest AI-powered shopping experiences.
What’s new. Merchants can now highlight member pricing and exclusive shipping options directly on listings. Loyalty annotations have also expanded to local inventory ads and regional Shopping ads — making it easier to promote in-store or geography-specific perks.
Why we care. The more you can personalize an offer for a shopper, the better. Embedding member perks into the moment of purchase discovery — rather than requiring a separate loyalty app or webpage — makes programs more visible and more likely to drive sign-ups.
By the numbers. According to Google, some retailers have reported up to a 20% lift in click-through rates when showing tailored offers to existing loyalty members.
The big picture. Loyalty benefits will now appear on Google’s AI-first surfaces, including AI Mode and Gemini, putting member offers in front of shoppers at an entirely new layer of the search experience.
Where it’s available. The expansion covers 14 countries — Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Netherlands, South Korea, Spain, the UK, and the US.
How to get started. Merchants activate the loyalty add-on in Merchant Center, configure member tiers, and set up pricing and shipping attributes. Connecting Customer Match lists in Google Ads is required to display strikethrough pricing and shipping perks to known members.
Don’t miss. US merchants can apply to join a pilot that uses Customer Match as a relationship data source for free listings — potentially expanding loyalty reach without additional ad spend.
Googlebot. Google doesn’t have just one crawler; it runs many crawlers for many purposes, so referring to Googlebot as a singular crawler isn’t entirely accurate anymore. Google documents many of its crawlers and user agents here.
Limits. Google recently spoke about its crawling limits, and now Gary Illyes has dug into them in more detail. He said:
Googlebot currently fetches up to 2MB for any individual URL (excluding PDFs).
This means it crawls only the first 2MB of a resource, including the HTTP header.
For PDF files, the limit is 64MB.
Image and video crawlers typically have a wide range of threshold values, and it largely depends on the product that they’re fetching for.
For any other crawlers that don’t specify a limit, the default is 15MB regardless of content type.
Then what happens when Google crawls?
Partial fetching: If your HTML file is larger than 2MB, Googlebot doesn’t reject the page. Instead, it stops the fetch exactly at the 2MB cutoff. Note that the limit includes HTTP headers.
Processing the cutoff: That downloaded portion (the first 2MB of bytes) is passed along to Google’s indexing systems and the Web Rendering Service (WRS) as if it were the complete file.
The unseen bytes: Any bytes that exist after that 2MB threshold are entirely ignored. They aren’t fetched, they aren’t rendered, and they aren’t indexed.
Bringing in resources: Every resource referenced in the HTML (excluding media, fonts, and a few exotic files) will be fetched by WRS using Googlebot, just like the parent HTML. Each resource has its own, separate, per-URL byte counter and doesn’t count toward the size of the parent page.
How Google renders these bytes. Once the crawler has fetched these bytes, it hands them to WRS, the Web Rendering Service. “The WRS processes JavaScript and executes client-side code similar to a modern browser to understand the final visual and textual state of the page. Rendering pulls in and executes JavaScript and CSS files, and processes XHR requests to better understand the page’s textual content and structure (it doesn’t request images or videos). For each requested resource, the 2MB limit also applies,” Google explained.
Best practices. Google listed these best practices:
Keep your HTML lean: Move heavy CSS and JavaScript to external files. While the initial HTML document is capped at 2MB, external scripts and stylesheets are fetched separately (subject to their own limits).
Order matters: Place your most critical elements — like meta tags, <title> elements, <link> elements, canonicals, and essential structured data — higher up in the HTML document. This makes it less likely they’ll fall below the cutoff.
Monitor your server logs: Keep an eye on your server response times. If your server is struggling to serve bytes, Google’s fetchers will automatically back off to avoid overloading your infrastructure, which will reduce your crawl frequency.
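If you want a quick way to see whether a page risks that cutoff, a minimal sketch along these lines can flag oversized HTML (this uses Python’s requests library; the URL is a placeholder, and it measures only the response body, not the headers Google also counts):

import requests

GOOGLEBOT_HTML_CAP = 2 * 1024 * 1024  # the 2MB HTML fetch limit Google describes

def check_html_size(url):
    # Download the raw HTML and measure the payload in bytes.
    response = requests.get(url, timeout=30)
    size = len(response.content)
    if size >= GOOGLEBOT_HTML_CAP:
        print(f"{url}: {size:,} bytes - content past the cap may never be rendered or indexed")
    else:
        print(f"{url}: {size:,} bytes ({size / GOOGLEBOT_HTML_CAP:.0%} of the 2MB cap)")

check_html_size("https://www.example.com/")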
Podcast. Google also discussed the topic in a podcast episode.
SEO hiring is shifting toward senior, strategy-led roles as AI reshapes search and expands the scope of the job. A new Semrush analysis of 3,900 listings shows companies now prioritize leadership, experimentation, and cross-channel visibility over pure technical execution.
Why we care. SEO hiring, career paths, and required skills are changing. Entry roles focus on execution, while most demand sits at the leadership level — owning strategy across search, AI assistants, and paid channels, with clear revenue impact.
What changed. Senior roles dominated, accounting for 59% of listings. Mid-level roles, such as specialists (15%) and managers (10%), trailed far behind.
Companies are shifting budget toward strategy as AI tools absorb more execution work.
The skills shift. In-demand capabilities extend beyond traditional SEO into coordination, testing, and decision-making:
Project management appeared in more than 30% of listings.
Communication led non-senior roles at 39.4%.
Experimentation appeared in 23.9% of senior roles compared with 14% of other roles.
Technical SEO appeared in about 6% of listings.
Tools and channels. The SEO tech stack now spans analytics, paid media, and data.
Google Analytics appeared in up to 47.7% of listings.
Google Ads appeared in 29% of listings.
SQL demand grew at the senior level.
AI tools like ChatGPT were increasingly listed.
AI expectations: AI literacy is moving from optional to expected:
31% of senior roles mentioned AI.
Nearly 10% referenced LLM familiarity.
Concepts like AI search and AEO appeared more often.
Pay and positioning: SEO is increasingly treated as a business function.
The median salary for senior roles reached $130,000, compared to $71,630 for others. Some listings were much higher.
Degree preferences skewed toward business and marketing.
Remote work is now standard. More than 40% of listings offered remote options, with little difference by seniority.
About the data: Semrush analyzed 3,900 U.S.-based SEO job listings from Indeed as of Nov. 25. Roles were deduplicated, segmented by seniority, and analyzed using semantic keyword extraction.
Technical SEO extends beyond indexing to how content is discovered and used, especially as AI systems generate answers instead of listing pages.
For generative engine optimization (GEO), the underlying tools and frameworks remain largely the same, but how you implement them determines whether your content gets surfaced — or overlooked.
That means focusing on how AI agents access your site, how content is structured for extraction, and how reliably it can be interpreted and reused in generated responses.
Agentic access control: Managing the bot frontier
From a technical standpoint, robots.txt is a tool you already use in your SEO arsenal. You need to list the right crawlers in the file and grant each bot the level of access you want it to have.
For example, you may want a training model like GPTBot to have access to your /public/ folder, but not your /private/ folder, and would need to do something like this:
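A minimal robots.txt sketch for that scenario might look like the following (the /public/ and /private/ paths are placeholders for your own directories):

User-agent: GPTBot
Allow: /public/
Disallow: /private/

Most major AI crawlers state that they honor directives like these, but compliance is ultimately up to each bot.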
You’ll also need to decide between model training and real-time search and citations. You might consider disallowing GPTBot and allowing OAI-SearchBot.
Within your robots.txt, you also need to account for Perplexity’s and Claude’s crawlers, which use these user agents (a combined example follows this list):
Claude
ClaudeBot (Training)
Claude-User (Retrieval/Search)
Claude-SearchBot
Perplexity
PerplexityBot (Crawler)
Perplexity-User (Searcher)
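Putting that training-versus-retrieval split into practice, a hedged sketch that blocks training crawlers while leaving search and user-triggered fetchers open might look like this (adjust it to your own policy):

# Block model-training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Allow search, citation, and user-triggered fetchers
User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-User
Allow: /

User-agent: PerplexityBot
Allow: /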
Adding to your agentic access is another new protocol — llms.txt, a markdown-based standard that provides a structured way for AI agents to access and understand your content.
While it’s not integrated into every agent’s algorithm or design, it’s a protocol worth paying attention to. For example, Perplexity offers an llms.txt that you can follow here. You’ll come across two flavors of llms.txt:
llms.txt: A concise map of links.
llms-full.txt: An aggregate of your text content, so agents don’t have to crawl your entire site (see the sketch below).
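For illustration, llms.txt is just a markdown file served from your site root; a minimal sketch (every name, URL, and description here is a placeholder) could look like this:

# Example Brand

> A one-sentence summary of what the site covers and who it serves.

## Guides
- [Getting started](https://www.example.com/guides/getting-started): Setup basics
- [Pricing](https://www.example.com/pricing): Plans and tiers

## Docs
- [API reference](https://www.example.com/docs/api): Endpoints and parameters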
Even if Google and other AI tools aren’t reading llms.txt yet, it’s worth adopting for future use. You can read John Mueller’s reply about it below:
Extractability: Making content ‘fragment-ready’
GEO focuses more on chunks of information, or fragments, to provide precise answers. Bloat hurts extractability, which means AI retrieval struggles with:
JavaScript execution.
Keyword-optimized content rather than entity-optimized content.
Weak content structures that fail to provide clear, concise answers.
You want your core content visible to users, bots, and agents. Achieving this goal is easier when you use semantic HTML, such as:
<article>
<section>
<aside>
The goal? Separate core facts from boilerplate content so your site shows up in answer blocks. Keep your context window lean so AI agents can read your pages without truncation. Creating content fragments will feed both search engines and agentic bots.
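To make that concrete, here’s a minimal sketch of a fragment-ready structure (the content is placeholder text):

<article>
  <h1>How long does a furnace installation take?</h1>
  <section>
    <p>Short answer: most residential installations are completed in one to two days.</p>
  </section>
  <aside>
    <!-- Related links, promos, and other boilerplate live outside the core answer -->
  </aside>
</article>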
Structured data: The knowledge graph connective tissue
Schema.org has been a go-to for rich snippets, but it’s also evolving into a way to connect your entities online. What do I mean by this? In 2026, you can (and should) consider making these schemas a priority:
Organization and sameAs: A way to link your site to verified entities about you, such as Wikipedia, LinkedIn, or Crunchbase.
FAQPage and HowTo: Sections of low-hanging fruit in your content, such as your FAQs or how-to content.
SignificantLink: A directive that tells agents, “Hey, this is an authoritative pillar of information.”
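For example, a minimal Organization markup with sameAs links (the name and URLs are placeholders) might look like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.crunchbase.com/organization/example-co"
  ]
}
</script>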
Connecting information and data for agents makes it easier for your site or business to be presented on these platforms. Once you have the basics down, you can then focus on performance and freshness.
AI is constantly scouring the internet to maintain a fresh dataset. If the information goes stale, the platform becomes less valuable to users, which is why retrieval-augmented generation (RAG) must become a focal point for you.
RAG allows AI models, like ChatGPT, to inject external context into a response through a prompt at runtime. You want your site to be part of an AI’s live search, which means following the recommendations from the previous sections. Additionally, focus on factors such as page speed, server response time, and errors.
In addition to RAG, add “last updated” signals to your content. A <time datetime=""> element is one way to achieve this, along with date properties in your schema markup (see the sketch after this list), which are critical components for:
News queries.
Technical queries.
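Here’s a minimal sketch of both signals together (the dates and headline are placeholders):

<time datetime="2026-01-15">Last updated: January 15, 2026</time>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article",
  "datePublished": "2025-11-02",
  "dateModified": "2026-01-15"
}
</script>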
You can now start measuring your success through audits to see how your efforts are translating into real results for your clients.
You have everything in place and ready to go, but without audits, there’s no way to benchmark your success. A few audit areas to focus on are:
Citation share: Rankings still exist, but it’s time to focus on mentions as well. You can do this manually, but for larger sites you’ll want to use tools like Semrush.
Log file analysis: Are agents hitting your site? If so, which agents, and where? You can answer this through log analysis and even use AI to help parse the data for you (see the sketch after this list).
The zero-click referral: Custom tracking parameters can help you identify traffic origins and “read more” links, but they only paint part of the picture. You also need to be aware that agents may append your parameters, which can impact your true referral figures.
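For the log file analysis step above, a minimal Python sketch like this can tally requests from known AI user agents (the log path and the agent list are assumptions; check each vendor’s documentation for its current user-agent strings):

from collections import Counter

# Illustrative user-agent substrings; verify against each vendor's docs.
AI_AGENTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "Claude-User", "PerplexityBot", "Google-Extended"]

hits = Counter()
with open("access.log", encoding="utf-8", errors="ignore") as log_file:
    for line in log_file:
        for agent in AI_AGENTS:
            if agent in line:
                hits[agent] += 1

for agent, count in hits.most_common():
    print(f"{agent}: {count} requests")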
Measuring success shows you the validity of your efforts and ensures you have KPIs you can share with clients or management.
Scaling GEO into 2027
Preparing your GEO strategy for 2027 requires changes in how you approach technical SEO, but it still builds on your current efforts. You’ll want to automate as much as you can, especially in a world with millions of custom GPTs.
Manual optimization? Ditch it for something that scales without requiring endless man-hours.
Technical SEO was long the core of ranking a site and ensuring you provided search bots and crawlers with an asset that was easy to crawl and index.
Now? It’s shifting.
Your site must become the de facto source of truth for the world’s models, and this is only possible by using the tools at your disposal.
Start with your robots.txt and work your way up to structure, fragmented data, and extractability. Audit your success over time and keep tweaking your efforts until you see positive results. Then, scale with automation.
In 1998, submitting a website to search engines was manual, methodical, and genuinely tedious. I remember 17 of them: AltaVista, Yahoo Directory, Excite, Infoseek, Lycos, WebCrawler, HotBot, Northern Light, Ask Jeeves, DMOZ, Snap, LookSmart, GoTo.com, AllTheWeb, Inktomi, iWon, and About.com.
Each had its own form, process, and wait time, and its own quiet judgment about whether your URL was worth including. We submitted manually, 18,000 pages in all. Yawn.
Google was barely a year old when we were doing this. But they were already building the thing that would make submission irrelevant.
PageRank meant Google followed links, and a site that other sites linked to would be found whether it submitted or not. The other 17 engines waited to be told about content. Google went looking, and within a few years, they got so good at finding content that manual submission became the exception rather than the norm.
You published, you waited, the bots arrived. For 20 years, that was the deal, and SEO optimized for a crawler that would show up sooner or later.
The irony is that we’re now shifting back. Not because Google got worse at finding things, but because the game has expanded in ways that pull alone can’t cover, and the revenue flowing through assistive and agentic channels doesn’t wait for a bot.
Pull isn’t the only entry mode
The pull model (bot discovers, selects, and fetches) remains the dominant entry mode for the web index. What’s changed is that pull is now one of five entry modes into the AI engine pipeline (the 10-gate sequence through which content passes before any AI system can recommend it), not the only one.
The pipeline has expanded: new modes have been added alongside the existing model rather than replacing it, and the single entry mode that was the norm for 20 years is now one of five.
What follows is my taxonomy of those five modes, with an explanation of the advantages each one gives you at the two gates that determine whether content can compete: indexing and annotation.
The five entry modes differ by gates skipped, signal preserved, and revenue reached
Mode 1: Pull model
Traditional crawl-based discovery where all 10 pipeline gates apply and the bot decides everything. You start at gate zero and have no structural advantage by the time your content gets to annotation (which is where that content starts to contribute to your AI assistive agent/engine strategy). You’re entirely dependent on the bot’s schedule and the quality of what it finds when it arrives.
Mode 2: Push discovery
The brand proactively notifies the system that content exists or has changed, through IndexNow or manual submission.
Fabrice Canel built IndexNow at Bing for exactly this purpose: “IndexNow is all about knowing ‘now.’” It skips discovery, improves the chances of selection, and gets you straight to crawl. The content still needs to be crawled, rendered, and indexed, because IndexNow is a hint, not a guarantee.
You win speed and priority queue position, which means your content is eligible for recommendation days or weeks earlier than a competitor who waited for the bot. In fast-moving categories, that window is the difference between being in the answer and being absent from it.
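For reference, an IndexNow submission is a single HTTP request to the shared endpoint; a minimal sketch in Python (the host, key, and URL are placeholders, and the key file must be hosted on your own domain) might look like this:

import requests

payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",  # placeholder key you generate and host on your site
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": ["https://www.example.com/new-or-updated-page/"],
}

response = requests.post("https://api.indexnow.org/indexnow", json=payload, timeout=30)
print(response.status_code)  # 200 or 202 means the submission was received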
Note: WebMCP helps with Modes 1 and 2 by making crawling, rendering, and indexing more reliable, retaining signal and confidence that would otherwise be lost through those three gates.
Because confidence is multiplicative across the pipeline, a higher passage rate at crawling, rendering, and indexing means your content arrives at annotation with significantly more surviving signal than a standard crawl delivers. The structural advantage compounds from there.
Mode 3: Push data
Structured data goes directly into the system’s index, bypassing the entire bot phase. Google Merchant Center pushes product data with GTINs, prices, availability, and structured attributes. OpenAI’s Product Feed Specification powers ChatGPT Shopping that supports 15-minute refresh cycles.
Discovery, selection, crawling, and rendering don’t exist for this content, and the “translation” at the indexing phase is seamless: it arrives at indexing already in machine-readable format, four gates skipped and one improved. That means the annotation advantage is significant.
This is where the money is for product-led businesses: crawled content arrives as unstructured prose the system has to interpret, while feed content arrives pre-labeled with explicit, machine-readable entity type, category, and attributes. By structuring the data and injecting it directly into indexing, you’re solving a huge chunk of the classification problem at annotation, which, as you’ll see in the next article, is the single most important step in the 10-gate sequence.
As the confidence pipeline shows, each gate that passes at higher confidence compounds multiplicatively, so this is where you can get the “3x surviving-signal advantage” I outline in “The five infrastructure gates behind crawl, render, and index.”
Mode 4: Push via MCP
Model Context Protocol (MCP) — a standard that lets AI agents query a brand’s live data during response generation — allows agents to retrieve data from brand systems on demand.
In February 2026, four infrastructure companies shipped agent commerce systems simultaneously. Stripe, Coinbase, Cloudflare, and OpenAI collectively wired a real-time transactional layer into the agent pipeline, live with Etsy and 1 million Shopify merchants.
Agentic commerce is key. MCP skips the entire DSCRI pipeline and then operates at three levels, each entering the pipeline at a different gate:
As a data source at recruitment.
As a grounding source at grounding.
As an action capability at won, where the transaction completes without a human in the loop.
The revenue consequences are already real: brands without MCP-ready data are losing transactions to those with it, because the agent can’t access their inventory, pricing, or availability in real time when it needs to make a decision. This is where you see multi-hundred percent gains in the surviving signal.
MCP is already simultaneously push and pull, depending on context.
There’s a dimension to Mode 4 that most people don’t think about much: the agent querying your MCP connection isn’t always a Big Tech recommendation system. It’s increasingly the customer’s own AI, acting as their purchasing agent, evaluating your inventory and pricing in real time, with their credit card behind the query, completing the transaction without them opening a browser.
When your customer’s agent (let’s say OpenClaw-driven) comes knocking, agent-readable is the entry requirement. Agent-writable — the capacity for an agent to act, not just retrieve — is where you’ll make the conversion. The brands without writable infrastructure will be losing transactions to competitors whose systems answered the query and handled the action.
Mode 5: Ambient
This is structurally different from the other four. Where Modes 1 through 4 change how content enters the pipeline, ambient research changes what triggers execution of the final gates.
The AI proactively pushes a recommendation into the user’s workflow without any query: Gemini suggesting a consultant in Google Sheets, a meeting summary in Microsoft Teams surfacing an expert, and autocomplete recommending your brand.
Ambient is the reward for reaching recruitment with accumulated confidence high enough that the system fires the execution gates on the user’s behalf, without being asked. You can’t optimize for ambient directly. You earn it — and the brands that earn it capture the 95% of the market that isn’t actively searching.
Several people have told me my obsession with ambient is misplaced, theoretical, and not a real thing in 2026. I’ve experienced it myself already, but the clearest demonstration came at an Entrepreneurs’ Organization event where I was co-presenting with a French Microsoft AI specialist.
He demonstrated on Teams an unprompted push recommendation: a provider identified as the best solution to a problem his team had been discussing in the meeting. Nobody explicitly asked. Copilot listened, understood the problem, evaluated options, and push-recommended a supplier right after the meeting. Ambient isn’t theoretical. It’s running on Teams, Gmail, and other tools we all use daily, right now.
Five entry modes, each with a different starting point, and they all converge at annotation. Annotation is the key to the entire pipeline. Every algorithm in the algorithmic trinity (LLM + knowledge graph + search) doesn’t use the content itself to recruit; it uses the annotations on your chunked content. And nothing reaches a user without being recruited.
Why is that important? Because accurate, complete, and confident annotation drives recruitment, and recruitment is competitive regardless of how content entered. A product feed arriving at indexing with zero lost signal competes at recruitment with a huge advantage over every crawled page, every other feed, and every MCP-connected competitor that entered by a different door.
You control more of this competition than most practitioners assume. Skipping gates gives you a structural advantage in surviving signal, but it doesn’t exempt you from the competition itself.
That distinction matters here because annotation sits at the boundary. It’s the last absolute gate: the system classifies your content based on your signals, independently of what any competitor has done. Nobody else’s data changes how your entity is annotated. That makes annotation the last moment in the pipeline where you have the field entirely to yourself.
From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in a winner-takes-all race. Get annotation right, and you have a significant head start. Get it wrong, and no matter how much work you do to improve recruitment, grounding, or display, it will not catch up, because the misclassification and loss of confidence compound through every gate downstream.
Nobody in the industry was talking about this in 2020. I started making the point then, after a conversation on the record with Canel, and it still isn’t getting the attention it deserves.
Annotation is your last chance before competition arrives.
Search is one of three ways users encounter brands — and it’s the least valuable
The research modes on the user’s side have expanded, too. The SEO industry has traditionally focused on just one: implicit, when the user types a query. There was always one more: explicit brand queries, and now we have a third. Each research mode is defined by who initiates and what the user already knows.
Explicit research is the deliberate query, where the user asks for a specific brand, person, or product, and the system returns a full entity response (the AI résumé that replaces the brand SERP).
This is the lowest-confidence mode of the three, because the user has already signaled very explicit intent: you’re only reaching people who already know your name. Bottom of the funnel, decision. Algorithmic confidence is important here to remove hedging (“they say on their website,” “they claim to be…”) and replace it with absolute enthusiasm (“world leader in…,” “renowned for…”).
Implicit research removes the explicit query. The AI introduces the brand as a recommendation (or advocates for you) within a broader answer, and the user discovers the brand because the system considers it relevant to the conversation, staking its own credibility on the inclusion. Top- and mid-funnel, awareness and consideration. Algorithmic confidence is vital here to beat the competition and get onto the list when a user asks “best X in Y market” or be cited when a user asks “explain topic X.”
Ambient research requires the highest confidence of all. The system pushes the brand into the user’s workflow with no query and no explicit request; the algorithm is making a unilateral decision that this user, in this context, at this moment, needs to see your brand. That requires very significant levels of algorithmic confidence.
The format is small: a sentence, a credential, a contextual mention. The audience reached is the largest: people not yet in-market, not yet actively looking, who encounter your brand because the AI decided they should. And the kicker is that your brand gets the sale before the competition even starts.
For me, this is the structural insight that inverts how most brands prioritize, and where the real money is hiding. Most brands optimize for implicit research, where competition is highest, the target you need to hit is widest, and the work is hardest.
Most SEOs underestimate explicit research (where profitability is highest) and completely ignore ambient, which reaches the 95% who aren’t yet looking and requires the deepest entity foundation to trigger. I call this the confidence inversion, first documented in May 2025: the smallest format requires the highest investment, and it reaches the most valuable audience.
The entity home website is the single source that feeds every mode
In 2019, AI engineers spent 80% to 90% of their time collecting, cleaning, and labeling data, and the remaining 10% to 20% on the work they actually wanted to do. They wryly called themselves data janitors. Today, Gartner estimates 60% of enterprises are still effectively stuck in the 2019 model, manually scrubbing data, while the companies that got organized early compound their advantage.
The same split is happening with brand content and entity management, for the same reason. Every push mode described in this article draws on data: product attributes for merchant feeds, structured entity data for MCP connections, and corroborated identity claims for ambient triggering.
If that data lives in scattered, inconsistent, contradictory sources, every push attempt is expensive to implement, structurally weak on arrival, and liable to contradict the previous one. Inconsistency is the annotation killer: the system encounters two different versions of who you are from two different push moments, and confidence drops accordingly.
The framing gap, where your proof exists but the algorithm can’t connect it to a coherent entity model, is a direct consequence of disorganized data, and it costs you in recommendation frequency every day it persists.
The entity home website — the full site structured as an education hub for algorithms, bots, and humans simultaneously, built around entity pillar pages that declare specific identity facets — becomes the single source that feeds every mode simultaneously.
Pull, push discovery, push data, MCP, and ambient all draw from the same clean, consistent, non-contradictory data. You build the structure once, maintain it in one place, and you’re ready for push and pull modes today, and any to come that don’t yet exist.
AI handles 80%, humans protect the other 20%
That foundation is only as strong as the corrections made to it. How this works in practice depends on where you’re starting from. For enterprises, the website typically mirrors an internal data structure that already exists:
Product catalogs.
CRM records.
Service definitions.
Organizational hierarchies.
The website becomes the public representation of structured data that lives inside the business, and the primary challenge is integration and maintenance.
For smaller businesses and personal brands, the direction often runs the other way: building the entity home website well is what forces you to figure out how your business is actually structured, what you genuinely offer, who you serve, and how everything connects. The website imposes discipline.
We’re doing exactly this: centralizing everything as the structured data representation of the entire brand (personal or corporate). Getting the foundation right (who we are, what we offer, who we serve) is generally the heaviest lift. Building N-E-E-A-T-T credibility on top of that foundation is now comparatively straightforward, and every new push mode draws from the same organized source.
Here’s where using AI fits into this work. It can handle roughly 80% of the organization: extracting structure from existing content, proposing taxonomies, drafting entity descriptions, mapping relationships, and flagging gaps. What it does poorly, and what humans need to correct, are the three failure modes that propagate silently through every downstream gate:
Factual errors, where something is simply wrong.
Inaccuracies, where something is approximately right but imprecise enough to mislead.
Confusions, where two different concepts are conflated, or an entity is ambiguous between interpretations.
Confusion is the sneakiest because it looks like data, passes automated quality checks, enters the pipeline with apparent confidence, and then causes annotation to misclassify in ways that compound through every gate downstream.
Alongside the errors sit the missed opportunities, which are equally costly and considerably less obvious:
Lost N-E-E-A-T-T credibility opportunities, where the systems underestimate or undervalue the entity because credibility signals exist but aren’t structured, corroborated, or framed in a way the algorithmic trinity can read. The authority exists, but the machine doesn’t understand it.
Annotation misclassification, where the entity is indexed coherently but placed in the wrong category, meaning it competes for the wrong queries entirely and never appears in the contexts where it should win. Correctly classified competitors take the recommendation: your brand is present in the pipeline, but absent from the competition that matters to your business.
Untriggered deliverability, where understandability is solid and credibility has crossed the trust threshold, but topical authority signals haven’t accumulated densely enough to push the entity across the deliverability threshold for proactive recommendation. The machine knows who you are and trusts you. It just doesn’t advocate for you yet.
The human doing the correction and optimization work is the competitive advantage. The errors are surreptitious and the opportunities non-obvious; the trick is finding where both actually are, fixing one, and acting on the other. That is the work that compounds.
Organize once, feed every mode that exists and every mode to come
The push layer is expanding. The brands that organize their data now — not perfectly, but consistently, and with a system for maintaining it — are building the infrastructure from which every current and future entry mode draws.
The brands still publishing and waiting for the bot (Mode 1) are optimizing for the least advantageous mode in a five-mode landscape, and that disadvantage gap widens with every passing cycle.
This is the seventh piece in my AI authority series.
OpenAI now allows users of ChatGPT to share their device location so that ChatGPT can know more precisely where the user is and serve better answers and results based on that location.
The feature is called location sharing. OpenAI wrote: “Sharing your device location is completely optional and off until you choose to enable it. You can update device location sharing in Settings > Data Controls at any time.”
What it does. If ChatGPT knows your location, it can return better local results. OpenAI wrote:
“Precise location means ChatGPT can use your device’s specific location, such as an exact address, to provide more tailored results.”
“For example, if you ask “what are the best coffee shops near me?”, ChatGPT can use your precise location to provide more relevant nearby results. On mobile devices, you can choose to toggle off precise location separately while keeping approximate device location sharing on for additional control.”
Privacy. OpenAI said “ChatGPT deletes precise location data after it’s used to provide a more relevant response.” Here is how ChatGPT uses that information:
“If ChatGPT’s response includes information related to your specific location, such as the names of nearby restaurants or maps, that information becomes part of your conversation like any other response and will remain in your chat history unless you delete the conversation.”
Does it work. Maybe not as well as you’d expect. Here is an example from Glenn Gabe:
I shared about the "Near Me ChatGPT Update" the other day and just let ChatGPT use my device location. This is supposed to enhance results for local queries. I just asked for the "best steakhouses near me" and several of the restaurants are ~45 minutes away. Both restaurants… pic.twitter.com/gRkMeuzMQt
Why we care. Making ChatGPT’s local results better is a big deal in local search and local SEO. Knowing the user’s location, and better yet their precise location, can result in better local results.
Hopefully this will result in ChatGPT responding with more useful local results for users.
Google Business Profile (GBP) may be getting shoved down the SERPs by ads and AI Overviews more than ever, but it’s still a top source of inbound leads for local businesses — and one of the fastest ways to improve rankings with simple fixes.
Here’s a five-step audit to find and fix the gaps most businesses miss.
1. Evaluate Google review velocity and recency
It’s a common misconception that the business with the most Google reviews wins in Google Maps ranking. While a high review count provides social proof, Google’s algorithm has more of a “what have you done for me lately?” attitude.
The number of reviews you get per month, and how recent your last review was, often outweigh the total count for the all-important map pack positions. We call these metrics review velocity and review recency.
Think about it like this: If you have 500 reviews but haven’t received a new one since 2024, a competitor with 100 fresh reviews from the last month will likely blow past you.
So, how do you measure your review velocity and recency? Analyze competitors to see how top-ranking businesses perform on those metrics.
Follow these steps:
Run a geo-grid ranking scan: Identify which competitors are outranking you for your top keywords.
Analyze the last 30 days: Note how many reviews they received this month, and when their most recent one was posted.
Benchmark your data: Create a simple table comparing your monthly count and recency.
Recommended tools: Places Scout, Local Falcon, or Whitespark for automated grid scans and review data.
You don’t just need more reviews. You need to match or exceed the consistency of top-ranking listings.
You can automate this with Places Scout API data. That’s what our agency does, tracking it consistently to keep clients ahead of competitors. Automated charts make it easier to see how you stack up.
2. Leverage keywords in your business name
Including keywords in your business name is one of the most powerful local ranking signals. Sometimes a profile will rank in the map pack based solely on its name, beating out businesses with better reviews and higher recency.
Google’s algorithm hasn’t fully filtered out this type of keyword targeting, so it remains an opportunity. Take this business: only 21 reviews, yet it ranks first in the map pack for an extremely competitive term, thanks to the keywords in its business name.
You can’t simply keyword-stuff your name, though. Google can verify your legal name and take action to remove keywords from your profile — or worse, require reverification or suspend it. Your best option is a legal DBA (doing business as) certificate, also known as a trade name, or fictitious name certificate, in some areas.
For example, if your legal name is “Smith & Sons,” you’re missing out. Registering a DBA as “Smith & Sons HVAC Repair” allows you to update your GBP name while technically adhering to Google’s guidelines.
Competitor analysis: Are your competitors outranking you simply because their name contains the keyword? If yes, you need to take action to match those tactics.
Make it legal: Check your local Secretary of State website. Filing a DBA is an effective SEO tactic for moving from Position 4+ into the map pack for certain keywords.
Update business website: Update your website with the new name. Google uses website content to verify business details and may update your GBP accordingly. Make sure it only finds the new name, not outdated versions.
3. Audit your primary and secondary categories
Choosing the wrong primary category for your GBP is a leading reason businesses fail to rank. If you’re a personal injury lawyer but your primary category is set to “trial attorney,” you’re fighting an uphill battle to rank for highly competitive terms like “personal injury lawyer.”
How to pick the best primary category:
Competitor analysis: Use Chrome extensions like Pleper or GMB Everywhere to see exactly which primary categories the top-ranking businesses are using.
Max out secondary categories: You have 10 total slots. Fill all of them with relevant subcategories.
Check off all relevant services: Under each category, Google lists specific services. Select the ones relevant to your business.
4. Optimize your GBP landing page
Many businesses link their GBP to their homepage and stop there. For multi-location businesses, this is a mistake. You should link to a dedicated local landing page optimized for your top keywords that mentions the city your GBP address is in.
Linking your GBP to a hyper-local city page (e.g., /tampa-plumbing/ instead of the homepage) reinforces “entity alignment.” When the information on your GBP matches a unique, highly relevant page on your site, Google’s confidence in your location increases, often leading to a jump in the local pack. Make sure your GBP landing page is optimized with all your services and links to dedicated service pages to boost your listing for service-specific searches.
Watch out for the diversity update. Sometimes a business ranks well in the map pack, but its website is nowhere to be found in organic results. This is often due to Google’s diversity update.
If you suspect you’re being filtered out organically, try linking your GBP to a different localized interior page. This is often a quick fix that helps your site reappear in organic search. Here’s an example of a client I recently helped beat the diversity update with a simple GBP landing page swap.
5. Assess your location and ranking radius
Your business’s physical location within the city and its proximity to the city center are extremely strong ranking signals. It’s not something you can easily manipulate, though, because it’s not always easy to move your office, store, or warehouse. However, you need to know your “ranking radius” and how much room there is to improve rankings for certain keywords within it.
Identify the ranking ceiling in your market. I use Local Falcon’s Share of Local Voice (SoLV) metric to do this. If your top competitors only have a 53% SoLV, as in this example, it’s unlikely you’ll be able to get more than that either.
This shows when you’ve “maxed out” a keyword and need to target new keywords or open a new location outside that radius. It can also show there’s room to improve — and that you need to increase your SoLV score.
Keep in mind that certain keywords are harder to improve based on where your business is physically located. If your map pin sits outside the Google-defined border of a city, you will struggle to rank for explicit terms like “Plumber Tampa FL” and for searches within that city in general. Always do this analysis on a keyword-by-keyword basis.
Tip: In the current local search landscape, expanding your physical footprint, and verifying more GBPs, is the most reliable way to grow visibility. Max out your current GBPs first, then look for your next location.
This is a strong starting point, but it’s just the beginning. From review strategy and category selection to city borders and the diversity update, every detail counts.
Between overreaching ads and ever-expanding AI Overviews, staying proactive with your GBP strategy is the only way to keep your leads flowing from the map pack. Build your GBP foundation, max out your current locations, and strategize new locations to keep your business in the top spot across your service area.
AI search engines like ChatGPT, Google AI Mode, and Perplexity are changing how consumers discover and purchase products online. If your product pages aren’t optimized for these AI assistants, you could be missing out on a growing source of traffic and revenue.
The challenge? AI assistants don’t evaluate product pages in the same way traditional search engines do. They need to fully understand your products so they can confidently recommend them to different users with different needs.
To help you assess how well your product pages are optimized for AI search, here’s a simple scorecard covering the six most important factors.
1. Product specifications
Does the product page clearly display the product’s attributes and specifications?
AI assistants need clearly stated specifications to better understand your products and match them to customer needs. If a shopper asks an AI assistant for “an airline-friendly crate for a 115-pound dog,” the AI must be able to see the maximum weight limit of a product before it will recommend it. Without clear specifications, some products won’t get recommended, even if they’re actually a perfect match.
Amazon does this really well, and it’s likely one of the many keys to their strong performance in AI search. Just look at all the helpful specifications they clearly lay out on their product pages.
Action item: Go through your product pages and make certain all applicable specifications are clearly displayed. Don’t bury them in the main product description or other marketing copy. Clearly lay them out in a structured table or bulleted list.
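As a rough illustration, a simple specifications table in plain HTML (all values are placeholders) is enough for this:

<table>
  <tr><th>Maximum weight</th><td>115 lbs</td></tr>
  <tr><th>Exterior dimensions</th><td>40 x 27 x 30 in</td></tr>
  <tr><th>Material</th><td>Heavy-duty plastic, steel door</td></tr>
  <tr><th>Airline approved</th><td>Yes</td></tr>
</table>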
2. Unique benefits
Are the product’s unique benefits clearly described?
AI needs to understand both what makes your product stand out and why your products should be recommended over the competition. If a product page reads like every other industry website, AI assistants have no compelling reason to recommend the listed products.
Think about it from the AI’s perspective: If a user asks “what’s the best L-shaped sofa,” the AI will look for products with clear differentiators (hidden storage, machine-washable, modular parts, durability, etc.). The characteristics that make your product stand out should be explicitly stated on the page.
Here’s a great example from Home Reserve. Their product pages have a section called “Key Features” that lists the unique selling points that separate them from the competition.
Action item: Make sure your product pages clearly state what makes them better and why it matters to the customer. Keep your key features specific. Generic selling points like “high-quality craftsmanship” or “premium materials” are too vague and don’t give AI assistants enough information to establish a clear differentiation.
3. Use cases and audience
Are the product’s intended use cases and audience clear?
AI assistants don’t match products to keywords — they match products to people and their unique needs. When a user asks ChatGPT, “what’s the best desk for a small apartment,” the AI looks for products intended for compact spaces, small rooms, or apartment living.
If a product page only describes the desk’s dimensions without connecting them to a particular use case, AI assistants may not recommend the product when users ask about those scenarios.
Any given product could have a multitude of use cases and audiences. A standing desk could be ideal for remote workers, people with back pain, gamers, or small business owners outfitting a home office. If a product page only speaks to one of these audiences, it might not get recommended to the others in AI search.
Action item: For each product, include the top three to five specific use cases or audience segments on the page. Go beyond demographics and think about situations, pain points, and goals.
4. FAQ content
Does the product page include an FAQ section answering common questions about the product?
AI assistants always try to connect products with the right buyer. When a user asks a question like, “what’s the best waterproof sealant for a flat roof,” the AI looks for information on product pages demonstrating they’re a good fit for the particular use case.
This is what makes FAQ content so valuable. A well-structured FAQ section can give AI assistants additional confidence that the product is a good fit for the user and worthy of a mention. The more specific and detailed your FAQ answers are, the more prompts your product can match within AI search.
For example, Liquid Rubber sells mulch glue and waterproof sealants. They do a great job of providing a clear list of frequently asked questions on their product pages.
This type of FAQ content can help their products get recommended more often when users ask ChatGPT specific questions:
What’s the best VOC mulch glue?
Can I get mulch glue that will last up to 12 months?
Is there a mulch glue that delivers within one week?
Action item: Review your customer support inquiries, product reviews, competitor pages, and relevant Reddit threads to identify the most common customer questions. Then add these questions directly to your product pages with clear and concise answers.
5. Ratings and reviews
Does the product page display customer ratings and review counts?
AI assistants will recommend highly rated products with strong reputations. A product with 500+ reviews and a 4.8-star rating is a much safer recommendation than a product with zero reviews or a low rating.
Just ask ChatGPT for product recommendations, and you’ll see the product ratings front and center. Take, for example, the prompt, “What’s the best medium roast caramel flavored coffee?”
It’s clear that ChatGPT relies heavily on product reviews and only recommends products with a high rating. When you click on any of these products, you’ll see that product ratings and the number of reviews are clearly displayed on the product page.
Note: Your product’s rating in ChatGPT may differ from what’s on your product page. This is because ChatGPT calculates an aggregate rating across multiple merchants (e.g., Walmart, Target, etc.), rather than only pulling from your product page.
But having a strong rating isn’t enough — you need a lot of reviews as well. I recently reviewed 1,000 ecommerce-focused prompts and found that the median number of reviews was 156. So, if you want to increase your chances of getting recommended by ChatGPT (and other AI assistants), aim for at least 150+ product reviews.
Action item: Make sure your product pages clearly display customer ratings, review counts, and (ideally) some actual reviews. Third-party review platforms like Yotpo, Judge.me, and Shopper Approved can solicit product reviews from customers for you.
6. Structured data
Does the product page include structured data for price, availability, reviews, and other key attributes?
It’s easier for AI search engines to understand information presented in a clear structure (e.g., tables, lists). But there’s nothing more structured than the JSON format for structured data (also known as schema markup).
There’s a common claim in AI SEO that structured data is some kind of magic bullet for AI visibility. The reality is more nuanced.
Structured data experiment
An interesting experiment conducted by SEO consultant Dan Taylor tested the impact of structured data on AI search. He included a physical address for a made-up company in the JSON-LD structured data, but didn’t include it anywhere in the page content itself. Then, when he asked ChatGPT for the address, it still pulled it from the structured data.
This experiment shows that AI assistants are indeed crawling structured data. But they’re not necessarily parsing it the same way a traditional search engine would. Instead, they’re simply treating it as another source of text on the page.
If the content in your schema is relevant to a user’s prompt, AI assistants will pick it up. But it doesn’t matter whether the schema is valid or completely made up.
Where structured data helps most
So, if AI assistants treat structured data like any other text, is it still worth adding it to your product pages? The short answer is “yes.”
Presenting important product information clearly and well formatted can always help AI assistants understand your product pages. But the real advantage is in the product cards found within the AI responses.
So, the main advantage of structured data is how it plays into Google’s Knowledge Graph of products, which can directly impact product recommendations across Google AI Overviews, AI Mode, and even ChatGPT.
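If you’re implementing it, a minimal Product markup covering price, availability, and ratings (all values are placeholders) might look like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Standing Desk",
  "description": "Height-adjustable desk sized for small home offices.",
  "sku": "DESK-001",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "aggregateRating": { "@type": "AggregateRating", "ratingValue": "4.8", "reviewCount": "312" },
  "offers": {
    "@type": "Offer",
    "price": "349.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://www.example.com/products/example-standing-desk"
  }
}
</script>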
With the rise of agentic commerce, product data will only become more important as AI agents rely on it to compare, evaluate, and even purchase products on behalf of users.
Here’s a quick overview you can use to audit your product pages, scoring each factor Yes, Partial, or No:
Product specifications: Are all applicable attributes and specs clearly displayed in a structured table or list?
Unique benefits: Are the product’s differentiators explicitly stated and tied to customer needs?
Use cases and audience: Are the top use cases and audience segments named on the page?
FAQ content: Does the page answer common, specific buyer questions?
Ratings and reviews: Are customer ratings and review counts clearly displayed?
Structured data: Is Product markup in place for price, availability, and reviews?
Once you’ve scored your highest-priority pages, any gaps become the priority on your AI product optimization roadmap. Tackle the “No” items first, since those represent the biggest missed opportunities, then work on upgrading the “Partial” scores.
This type of product optimization is still a blind spot for many ecommerce brands, which means every factor you improve is a chance to get recommended where they don’t. The sooner you close these gaps, the harder it becomes for competitors to catch up.
Google removed a Search Engine Land article (Report: Clickout Media turned news sites into AI gambling hubs, published March 26) from its search results after a copyright complaint (that appears, to us, to be entirely false). Meanwhile, a similar DMCA filing led to the takedown of the original Press Gazette investigation.
What happened. A DMCA notice filed March 27 claimed Search Engine Land copied content “word for word” and used proprietary images.
The complaint led Google to begin removing the article from search results globally.
The notice identified the complainant as “US Webspam,” with no clear public attribution.
The context. The removed article reported that Clickout Media allegedly used expired or acquired domains to publish AI-generated gambling content.
The claim details. Here’s the message we received via Google Search Console on March 27:
Description of claim: The infringing news website has blatantly and willfully violated copyright law by copying our entire content word for word, including all images, which are solely owned by our company. This includes the complete replication of our original written material, as published on our official website, along with the proprietary visuals accompanying it. Despite multiple good-faith efforts to resolve this matter amicably, the infringing party (hereinafter referred to as “Infringer”) continues to unlawfully publish and distribute our copyrighted content without permission. This is a direct and flagrant breach of our rights and a clear violation of Google’s copyright policies. We hereby demand the immediate removal of this infringing material from Google search results to protect our intellectual property.
What doesn’t add up. The Search Engine Land article contains no images, contradicting the complaint. Also:
A search of its text shows no evidence of copied content.
The notice claims “multiple good-faith efforts” to resolve the issue, but no outreach was received before filing.
The complaint was submitted one day after publication.
What Google says. Google’s standard policy is to remove content upon receiving a valid copyright complaint, with an option for publishers to file a counter notice. The company has not commented on this specific case.
Why we care. This shows how DMCA takedowns can be weaponized to suppress reporting, including coverage of search spam and site reputation abuse. Legitimate content can be temporarily removed from search results due to unverified claims, and the resolution can take weeks or longer.
What’s next. We’ll watch whether this article is DMCA’d and removed, along with the Press Gazette’s, and anyone else covering the story.
Reactions. Here’s some reaction from X:
theholycoins isn’t owned by clickout (it’s one of the sites that would actually do negative reporting into their scams, so they probably picked one of those posts and said they were them/the original author of your dmca’d piece)
the rabbit hole on clickout goes a lot deeper than…
I'm surprised this was approved by Google… I've seen them come back with rejected DMCA notices when it was clear the site was infringing copyright. This is a BS DMCA takedown that doesn't even make sense. Very interesting case… I have a feeling the article will surface again… https://t.co/Zi8hUV8g14
Last week @pressgazette published an investigative report about a media company that acquires online publishers and exploits their domain authority for SEO shenanigans.
Update, March 31. The Press Gazette and Search Engine Land articles, which were removed due to the bogus DMCA complaints, are now back in Google Search.
Microsoft Advertising now allows e-commerce merchants to edit their Merchant Center store name and domain directly within the platform — no support ticket required.
Why we care. Store details like names and URLs change as businesses rebrand or restructure. Previously, updating these required manual intervention. Self-serve control reduces friction and keeps campaigns running more smoothly during transitions.
How it works — the details:
Store name changes go through editorial review before going live. During review, ads keep running under the existing approved name — so there’s no interruption to campaigns.
Domain/URL changes require merchants to verify ownership of the new domain before the switch takes effect. Ads continue serving on the old domain in the meantime. Once approved, product URLs must be updated to reflect the new domain.
Reusing names or domains is allowed — as long as the store name clears editorial checks and the domain is verified and confirmed as merchant-owned.
The bottom line. The update gives ecommerce advertisers more autonomy over their store settings while building in safeguards — editorial review and domain verification — to prevent abuse and maintain ad quality.
Reddit today opened its Pro publishing tools to all publishers, removing the waitlist and offering free access in a public beta to expand distribution and engagement.
Why we care. Reddit Pro gives you a centralized tool to track where your content spreads, streamline posting, and find the right communities. It transforms Reddit from a manual posting exercise into a structured distribution channel.
The details. You can now sign up for Reddit Pro, verify your domain (typically within three business days), and access the Links tab. With Reddit Pro, you can:
Track where your content is shared across Reddit.
Auto-import articles via RSS for quick posting.
Get AI-powered recommendations on relevant communities.
Reddit also added features based on early feedback:
Community snapshots show rules, stats, and top discussions.
Community notes let you track strategy and context.
By the numbers. Reddit reported more than 55 billion views of publisher-related conversations in 2025. Publishers testing since September saw:
Median post views up 46%.
Profile views nearly doubled.
Median comments up 48%.
What else. Reddit is expanding profile flairs to all Pro users, letting you organize posts on your profile so users can browse coverage and engage with stories.
A bug in Google Ads Editor is causing structured snippet extensions copied between accounts to remain unintentionally linked. When advertisers change the language in one account, it can automatically update the same extension in another.
Why we care. This bug creates hidden inconsistencies for advertisers managing multi-market campaigns, especially when different languages are required across accounts.
What advertisers are seeing. The issue surfaced when digital marketer Marcin Wsół was managing Czech and Slovak e-commerce accounts. Changing the snippet language in one account triggered the same change in the other.
The extensions appear separate but behave as if synced.
Zoom in. Using the Google Ads web interface can temporarily correct the issue; however, further edits in Editor may cause the language settings to toggle again.
Also. The bug isn’t limited to cross-account use. PPC News Feed founder Hana Kobzová found that copying structured snippets within the same account can also lead to incorrect language settings after edits.
Between the lines. Advertisers relying on bulk edits in Editor may unknowingly overwrite localization settings, leading to mismatched messaging across markets.
Bottom line. Until fixed, advertisers should double-check structured snippet languages after copying or editing in Google Ads Editor—especially when working across accounts or regions.
First seen. This error was first flagged by Wsół and later covered by PPC News Feed.
Google says a new compression algorithm, called TurboQuant, can compress and search massive AI data sets with near-zero indexing time, potentially removing one of the biggest speed limits in modern search systems.
What it is. TurboQuant is a way to shrink and organize the data that powers AI and search without losing accuracy. It reduces memory use while keeping results precise and cuts the time to build searchable AI indexes to “virtually zero,” according to the research paper.
How it works. Modern search converts content into vectors (lists of numbers that represent meaning). Similar ideas sit close together in this numeric space, and search finds the closest matches.
However, these vectors are large and expensive to store and search. TurboQuant addresses this by replacing them with much smaller representations that behave almost exactly like the original, through two steps (sketched in simplified form after this list):
Smart compression. It rotates the data mathematically to compress it cleanly, like organizing messy items into neat boxes.
Error correction. It adds a 1-bit signal to fix small compression errors and preserve accuracy.
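The research paper has the exact method; purely as intuition, here is a toy Python sketch of the two ideas above (a random rotation, coarse per-coordinate rounding, and a 1-bit residual signal). This is an illustration with made-up parameters, not Google’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(dim: int) -> np.ndarray:
    # A random orthogonal matrix (via QR decomposition) spreads information
    # evenly across coordinates, so simple per-coordinate rounding loses less.
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    return q

def quantize(vectors: np.ndarray, rotation: np.ndarray, levels: int = 16):
    # Rotate, then round each coordinate to a small number of levels.
    rotated = vectors @ rotation
    scale = np.abs(rotated).max() / (levels / 2)
    codes = np.clip(np.round(rotated / scale), -levels // 2, levels // 2 - 1)
    # 1-bit correction signal: which side of the rounding error the true value fell on.
    residual_sign = np.sign(rotated - codes * scale)
    return codes.astype(np.int8), residual_sign.astype(np.int8), scale

def reconstruct(codes, residual_sign, scale, rotation):
    # Nudge each coordinate a quarter step toward the true value, then undo the rotation.
    approx = codes * scale + residual_sign * (scale / 4)
    return approx @ rotation.T

# Tiny demo: 1,000 vectors of dimension 64.
vectors = rng.normal(size=(1000, 64)).astype(np.float32)
rotation = random_rotation(64)
codes, signs, scale = quantize(vectors, rotation)
recon = reconstruct(codes, signs, scale, rotation)
print("mean squared reconstruction error:", float(np.mean((vectors - recon) ** 2)))
```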
What it means. Vector search — the system behind semantic search and AI answers — has been slow and expensive at scale. TurboQuant makes it faster and cheaper. Google says it enables faster similarity search, lower memory costs, and real-time processing of massive datasets.
Why we care. Google can evaluate far more documents per query, not just a small subset. If/when Google adopts this in Search, AI Overviews could pull from a broader, more precise set of sources, making it easier to generate instant summaries from large data pools.
In long sales cycles, a lot of what happens after lead submission involves people. When you optimize campaigns to final sales, you’re teaching the ad platform to respond to how well the sales team performed that month rather than lead quality, and that’s a problem no amount of campaign changes will fix.
The common advice is to “optimize the full funnel” (i.e., track media spend to revenue, optimize campaigns to sales, etc.). But beyond lead capture, most of what drives sales has little to do with your paid media. It’s about who’s on the sales team, how busy they are, and dozens of other factors you can’t influence through targeting or creative.
When your sales team becomes the signal
I’ve spent over 15 years in financial services marketing, but this isn’t unique to mortgages or insurance. If your sales process relies heavily on people, you’ll recognize this immediately.
In most businesses, there’s someone like Dave. In my case, he’s a mortgage adviser, but in yours, he might be your top enterprise sales rep, your star business development manager, or your best project estimator.
He closes deals at twice the rate of his colleagues, not because he gets better leads, but because he’s naturally gifted at building rapport, asking the right questions, and guiding anxious customers through difficult decisions.
However, Dave isn’t always there. Sometimes he’s on vacation, sometimes he might leave the company for a better opportunity, or sometimes your business hires three more Daves.
The makeup of your sales team likely changes constantly. You might have more experienced closers one month, fewer the next, a recruitment drive that brought in several new starters, or Dave and two of his colleagues leaving within a month of each other. Sales rates can swing dramatically based purely on who’s in the office, regardless of lead quality.
This can mislead the bidding algorithm. For example, when the conversion rate drops because Dave’s away and a junior team member is covering his accounts, the algorithm sees a targeting problem rather than a staffing issue.
If you’ve set your campaigns to optimize for sales, it thinks, “Our targeting stopped working. These clicks are lower-quality for this conversion action now. We should shift spend away from these audiences.”
Eventually, this could result in keywords that were previously working well being turned off, audiences that were driving sales volume no longer being bid for, and, eventually, a decline in the entire account’s performance. But the leads haven’t changed, only the team has.
Operational factors that distort your conversion data
It’s not just the sales team makeup either. Let’s say:
The team gets slammed in Q4 as everyone tries to close before year-end, response times stretch from two days to over a week, and customers get impatient and look elsewhere.
Perhaps market conditions shift, and your most competitive product gets pulled. Or summer vacations mean the team is running short-handed, and some leads go cold before anyone contacts them. Then September comes and everything bounces back to normal.
It goes beyond the day-to-day. Budget approvals get delayed, product ranges change, and planning delays push projects back. The specific reason varies by business, but the effect on your conversion data is always the same.
The algorithm ends up thinking targeting got worse when, in fact, the team was just busy with leads from other sources.
When Dave becomes a superhuman: The Santa Claus Rally
The Santa Claus Rally, also known as the December Effect, is the best example I’ve seen of how human behavior can throw off algorithmic targeting.
Every December in financial services, something strange happens. In the third week of December, conversion rates from lead to sale spike dramatically. We’ve seen increases of up to 150% compared to normal weeks.
If campaigns are optimized for sales, the algorithm thinks, “Whatever we’re doing this week is working incredibly well!” Then the holiday week arrives, and everything crashes, with conversion rates plummeting to a fraction of normal levels.
None of it has anything to do with paid media. In week three, Dave and his colleagues are in target-hitting panic mode. End-of-year bonuses are on the line, and there’s one final push before the holiday break, so they’re calling leads faster, following up more aggressively, and closing deals they might typically have let simmer. Dave is working like a machine.
Then the holiday week arrives, and everyone’s mentally checked out, customers aren’t answering phones, and Dave has finally taken time off. The team that’s still at work is thinking more about family get-togethers and less about targets.
The lead quality, targeting, and ads haven’t changed. The team is just working at different levels of intensity due to seasonality. The algorithm overpays for normal performance and underbids for identical audiences, purely based on when Dave and his team take their vacations.
So if optimizing for sales is being distorted by things outside your control, how should you draw the line? How can you balance this lead distortion and still drive the right type of leads?
The answer is your last point of control, which, for these kinds of sales, means lead submission. But not simply counting leads. Instead, value them based on both likelihood to convert and the commercial value of the end sale.
The other issue is that most high-value businesses only generate a handful of sales per month, which isn’t enough data for automated bidding to learn anything useful. Lead valuation also solves this issue by providing the platform with hundreds of conversion events rather than a few sales.
This means automated bidding can actually function properly, campaign and audience testing can become meaningful, and the data stays reliable. You’re optimizing to lead quality before Dave and the sales team get involved.
To be clear, importing downstream conversion stages or revenue into ad platforms can be extremely powerful. But optimization to those signals only works when volume is sufficient, conversion lag is manageable, and the sales process is stable.
The starting point is your historical data, ideally 12 months of it, though you can work with six. You need to understand which leads actually closed, what they were worth, and what they had in common at the point of inquiry.
For financial services, it’s things like loan amount and term. For B2B, it might be company size or sector. For construction, it’s usually project size and urgency.
From there, it’s about grouping leads by their likelihood to close to a sale and by what a typical deal size looks like, and then assigning each group an expected revenue value.
The check to make sure it’s working as expected is simple. The total estimated value you assign to your leads over a period should roughly match the revenue they actually generated. If not, the model needs work. Ideally, you should revisit it at least quarterly as your campaigns and operational factors change.
As an example, you might end up with a high-likelihood lead worth $850, a mid-range lead at $420, and a lower-likelihood lead at $120.
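Here is a minimal sketch of that grouping and sanity-check logic in Python. The segment rules, close rates, and deal sizes below are hypothetical placeholders; in practice, they come out of the historical analysis described above.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    loan_amount: float  # or deal size, project size, etc.
    term_years: int

# Hypothetical segments built from ~12 months of closed-lead history:
# each maps an observed close rate to an average revenue per closed deal.
SEGMENTS = {
    "high": {"close_rate": 0.17, "avg_revenue": 5000},  # expected value ≈ $850
    "mid":  {"close_rate": 0.12, "avg_revenue": 3500},  # expected value ≈ $420
    "low":  {"close_rate": 0.06, "avg_revenue": 2000},  # expected value ≈ $120
}

def segment(lead: Lead) -> str:
    # Placeholder rules; use whatever lead attributes correlated with closing
    # in your own historical data.
    if lead.loan_amount >= 400_000 and lead.term_years >= 20:
        return "high"
    if lead.loan_amount >= 150_000:
        return "mid"
    return "low"

def expected_value(lead: Lead) -> float:
    s = SEGMENTS[segment(lead)]
    return s["close_rate"] * s["avg_revenue"]

# Sanity check: summed expected values should roughly match actual revenue.
historical_leads = [Lead(450_000, 25), Lead(200_000, 15), Lead(90_000, 10)]
actual_revenue_for_period = 1_500  # hypothetical figure from the CRM
estimated = sum(expected_value(lead) for lead in historical_leads)
print(f"estimated {estimated:,.0f} vs actual {actual_revenue_for_period:,.0f}")
```

The expected values ($850, $420, $120) are simply close rate multiplied by average deal revenue for each group, which is what keeps the model honest against real outcomes.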
Once you have that, set up your conversion tracking to pass the expected value back to the platform on your conversion action and use value-based bidding (target return on ad spend in Google Ads) to point the algorithm toward the leads that are actually worth chasing.
“Optimize the full funnel” sounds sensible until you realize how much of that funnel you don’t actually control.
You can influence the targeting, the creative, the landing page, and the experience that gets someone to submit a form. After that, it’s over to Dave and the sales team, and dozens of other factors that have nothing to do with your campaigns.
When you expect an algorithm to optimize for things it can’t see, it will start drawing the wrong conclusions, chasing the wrong audiences, and getting worse over time.
The answer isn’t to stop measuring what happens after lead submission. You absolutely should continue measuring, as those numbers can tell you a lot about what’s going well and what might need to be corrected for. Remember:
When lead quality stays steady, but sales drop, that’s an operations issue, not a paid media one.
When both drop at the same time, look at your campaigns.
When sales spike, but lead quality is flat, that’s Dave having a great month, not your targeting.
That visibility is genuinely helpful, but it just shouldn’t be what you’re optimizing to.
Build lead valuation, feed expected values back to your platform, and let the algorithm do what it’s actually good at: finding people who look like your best leads. Leave the rest to Dave.
Know where your control ends, as that’s where optimization should stop.
Most business GPTs fail because they’re built like novelties rather than tools. They’re too broad, under-tested, and launched without a strategy, so they never become part of a team’s workflow.
I’ve built and audited 12+ custom GPTs across marketing, SEO, and sales teams. The pattern is consistent: a small number get used daily, while most collect dust.
Here’s how to build GPTs that get used daily — from validating the right use case to structuring, testing, and launching in a way that drives real adoption.
At a glance: The 15-minute version
If you’re ready to jump in, you can start with these steps:
Pick one task your team does 3x+ per week that takes 15+ minutes.
Complete this sentence: “This GPT helps [role] do [task] by [method].”
Write instructions in the Configure tab, not the Create tab.
Upload a curated one- to two-page .md knowledge file, not a raw document dump.
Add four specific conversation starters. Users who see specific options are significantly more likely to engage than those facing a blank input field. If they can’t immediately see what to do, they leave.
Test with five questions before anyone else sees it.
Share with three teammates. Watch them use it. Iterate within 48 hours.
Want to see what a well-built business GPT looks like before building your own? Try Marketing Research & Competitive Analysis or MARKETING, both ranked in the GPT Store’s Research & Analysis category. I helped build these at Semrush and will reference them throughout, and they demonstrate the build patterns covered below.
What a business GPT actually is (and what it isn’t)
A business GPT is a custom version of ChatGPT configured to do one specific, recurring job for a defined role on your team. Not “an AI assistant.” Not “a helpful tool.” One job.
Think of it like hiring. A generalist can help with anything. A specialist who does one thing incredibly well is worth 10 times more for that specific task, because they’ve already internalized the context, the standards, and the constraints you’d otherwise have to explain every single time.
That’s what a well-built business GPT does. It already knows your brand voice, output format, and when to stop and escalate instead of guessing.
I’ve built and audited 12+ custom GPTs across marketing, SEO, and sales teams, and the pattern is consistent: the ones that get used daily are tightly scoped and predictable. The ones that aren’t collect dust.
The one-sentence test: If your GPT needs more than one sentence to explain what it does, the use case is still too broad. Narrow it until the answer is obvious.
“A GPT that drafts on-brand responses to negative customer reviews using our escalation framework” passes.
“A general customer support assistant” doesn’t.
That specificity is what separates GPTs that become part of a workflow from those that stay novelties.
The same pattern shows up across the best GPTs in the store. Most are novelties. These aren’t. Each demonstrates a build pattern you can apply.
Marketing Research & Competitive Analysis. Ranked No. 2 in Research & Analysis. Drop in a competitor, an industry, or a business challenge, and you’ll get structured frameworks, SWOT analyses, positioning gaps, and audience breakdowns backed by cited sources.
The build pattern worth noting: breadth within a defined domain. Most research GPTs do one thing. This one covers the full strategic stack, from competitive analysis to market research to strategic planning, without losing focus because the scope is bounded by “research and analysis” rather than “marketing” broadly.
MARKETING. Ranked No. 4 in Research & Analysis. Covers 14+ disciplines, including paid search, programmatic, out-of-home, influencer, and retail media.
The build spans the full media mix rather than specializing in one channel. It’s useful at the planning stage, where most marketing GPTs fall short. It also shows how conversation starters can guide users to high-value use cases immediately, rather than leaving them staring at a blank input field.
Consistently top five globally across all GPT Store categories. This is strongest for blog posts, articles, and long-form content.
The build uses front-loaded conversation starters to narrow scope at the session level rather than baking rigid constraints into the instructions. That makes it flexible enough to serve thousands of different users without losing focus.
Upload a CSV and receive charts, summaries, and insights without writing a single line of code. This is the clearest live demonstration of Code Interpreter used well.
This build demonstrates what the capabilities toggle actually unlocks in practice. Open it first if you want to convince a skeptical stakeholder.
Describe a workflow problem in plain English and receive specific Zapier automation recommendations.
The business model pattern here is as instructive as the build pattern: a tool-native GPT that generates qualified leads by solving the exact problem its parent product addresses. This is worth studying if you’re thinking about GPTs as a distribution channel, not just a productivity tool.
Canva. Create and edit designs, presentations, and social graphics through conversation.
Beyond the practical utility, Canva’s GPT is worth studying as a forward-looking example of where the category is heading. It has evolved from a simple GPT integration to a full native ChatGPT app integration, showing what a mature tool-native deployment looks like when a brand commits to the channel properly.
Validate before you build
The biggest waste in GPT development is building something nobody needed badly enough to actually use. Before writing a single line of instructions, score your idea across four dimensions.
Score each criterion as low (1 point), medium (3 points), or high (5 points):
Frequency: monthly or less (1), a few times per week (3), multiple times daily (5).
Time cost: under 15 minutes (1), 15-45 minutes (3), 1+ hours each time (5).
Consistency: not critical (1), moderate (3), mission-critical (5).
Context required: generic info works (1), some internal data (3), deep internal knowledge (5).
Score interpretation:
16-20 points: Build it this week.
10-15 points: Worth a prototype.
Below 10: Skip it. The ROI math won’t justify adoption.
The math is simple. A 45-minute task done five times per week is 16 hours per month. Anthropic’s November 2025 productivity research found that the median AI-assisted task delivered an estimated 84% time savings, with most tasks falling somewhere in the 50-95% range.
Even at the conservative end of that range, a well-scoped GPT returns eight to 12 hours per person per month on that one task alone. The St. Louis Fed’s October 2025 survey research backs this up: One-third of workers who use AI tools daily report saving at least four hours every single week. Multiply either number across a team, and the ROI case writes itself.
Tip: Audit your team’s weekly standup notes or Slack threads from the last 30 days. Tasks mentioned repeatedly (especially ones people complain about) are your best GPT candidates. They’re already annoying enough to surface unprompted, which means adoption motivation already exists.
Build it right with the 6-layer framework
Every effective business GPT is built on six layers. Skip one, and the output feels half-baked. Add unnecessary complexity to one, and adoption drops.
Layer 1: Use case (one job. Full stop.)
This is the filter every other decision runs through.
❌ A general coding assistant.
✅ A code reviewer that checks React components against our team's style guide.
❌ A marketing helper.
✅ A campaign brief generator that outputs our standard five-section brief format from a single one-line input.
If you find yourself adding “and also it should…” more than twice during the build, you need two GPTs, not one bigger one.
This is why Marketing Research & Competitive Analysis works. It could easily have tried to write copy, plan campaigns, and do SEO analysis. Instead, it stays in its lane: research and competitive intelligence. That constraint is what makes the output reliable enough to use in real strategy meetings.
Layer 2: Instructions (your most important investment)
Most people underinvest here by an order of magnitude. Your system prompt isn’t a description of what the GPT does. It’s the operating system that controls how it thinks, behaves, and responds.
A weak system prompt produces generic, unreliable output. A strong one turns a blank ChatGPT into a domain expert.
Go straight to the Configure tab. ChatGPT’s conversational builder (the “Create” tab) is fine for quick setup but gives you almost no control over formatting, behavior rules, or conditional logic. The Configure tab is where you actually build the thing.
If you’re already using ChatGPT for SEO workflows, you know how much the quality of your prompts determines the quality of the output. The same principle applies tenfold with system instructions. For a deeper dive on prompt construction for SEO specifically, check out our guide to ChatGPT for SEO.
Structure your instructions in this order:
Role definition: Who is this GPT? What’s its point of view? What does it know deeply?
Behavioral guidelines: What should it always do? What should it never do?
Output format: How should responses be structured? What’s the ideal length? Tables, bullets, prose?
Brand voice: What language does your brand use? What language is off-limits?
Escalation paths: When should it recommend a resource, a tool, or a human instead of answering?
One formatting trick that actually works: For rules that are truly non-negotiable, write them in ALL CAPS. It sounds aggressive in isolation, but it works. The model reads formatting signals. “NEVER recommend a competitor product” lands harder than “try not to mention competitors.” Use it for your three to five most critical behavioral guardrails.
Examples:
❌ Write professional emails to clients.
✅ You are a B2B sales rep at a SaaS company. Tone: confident, concise, no buzzwords. NEVER use the word "synergy." Format: Subject line, three short paragraphs, clear single CTA. ALWAYS end with a specific next step, not a vague "let me know."
Budget 10-15 hours of system prompt iteration before you call a GPT production-ready. That’s not a typo. Test against normal cases, edge cases, and adversarial inputs — the kinds of things a skeptical user or an off-script question will throw at it.
Layer 3: Knowledge files (what makes it yours)
Without knowledge files, you’ve built a custom-named version of standard ChatGPT. The knowledge layer is what gives your GPT institutional memory: the brand voice, the internal frameworks, the context that doesn’t exist anywhere on the public internet.
What to upload:
Brand voice guides and style examples.
Internal process docs and frameworks.
Competitor positioning notes.
Product one-pagers and FAQs.
Past high-performing examples of the output you want.
File format matters. Plain text (.txt) and Markdown (.md) outperform PDFs for retrieval accuracy. Never dump a raw 500-page document. The model can’t efficiently parse messy formatting or irrelevant context.
The cheat sheet rule: If a source document is longer than 20 pages, use AI to distill it into a focused, five-to-10-page summary specifically for the GPT to reference. Shorter, curated context outperforms raw data dumps every time.
The transcript trick most teams miss: If your company has recorded webinars, training videos, or internal demos, those transcripts are ready-made knowledge files. Open the video on YouTube, click “Show transcript,” toggle off timestamps, copy the full text, paste into a Google Doc, and download as .txt. A 45-minute video becomes a high-quality knowledge source in about 10 minutes.
Layer 4: Capabilities (enable what you need. Nothing else.)
There are three built-in toggles: Web Browsing, Code Interpreter, and DALL-E. Don’t enable them all “just in case.” Each one adds surface area for the model to go off-script.
Web Browsing: enable when the GPT needs live data (prices, news, current URLs); skip when it should draw only from your uploaded knowledge files.
Code Interpreter: enable when users will upload CSVs, run analysis, or generate charts; skip when the GPT is purely text-based.
DALL-E: enable when the GPT creates visual assets as part of the workflow; skip when the GPT is analytical or copy-focused.
Code Interpreter is the most underrated of the three. A GPT with it enabled can accept CSV uploads, run analysis, generate charts, and return downloadable files, replacing hours of manual reporting. If any part of your workflow involves structured data, this is worth experimenting with.
A note on web browsing: Web-enabled GPTs will confidently pull and present outdated or wrong information. If accuracy is important, disable web browsing entirely and rely only on your curated knowledge files. You control what’s in them. You can’t control what the web returns.
Layer 5: Actions (one integration for V1)
API connections to external systems — CRMs, project management tools, databases, calendars — are where GPTs start to feel like real automation infrastructure rather than fancy chat interfaces.
For V1, connect exactly one integration. Not five. Scope creep at the actions layer is where GPT projects stall before launch. Pick the single integration that would deliver the most immediate value, typically where the GPT’s output currently has to be manually copied somewhere else.
Layer 6: Evaluation (test before anyone else sees it)
Write five to 10 test questions before you share the link with anyone. Include normal cases, edge cases, and at least two adversarial inputs, the kinds of questions a frustrated user or an off-topic request would generate.
❌ Hello, what can you do?
✅ Here is a furious customer email accusing us of fraud. Draft a response using our de-escalation framework without admitting liability.
Test cases should reflect the hardest version of the job, not the easiest. If the GPT can handle the edge cases, the normal cases will be fine.
The department playbook: Highest-ROI opportunities by team
Start with the department that complains most about repetitive work. Their pain is your adoption fuel. A GPT that eliminates a universally-hated task markets itself through word-of-mouth faster than anything you could announce in a Slack channel.
Marketing
Campaign copy assistant: Input one brief. Receive ad copy, email subjects, and social captions formatted by channel. Upload your brand guidelines as the knowledge file. This replaces 30-45 minutes of copy concepting per campaign.
Semrush integration opportunity: Feed in keyword data from Keyword Magic Tool to ensure copy is aligned with how your audience searches.
Competitor messaging analyzer: Paste competitor copy or a landing page URL. Get a structured summary of their positioning, the gaps they’re ignoring, and angles your brand can own.
Semrush integration opportunity: Pair with Traffic Analytics data to qualify which competitors are worth analyzing by actual share of voice.
If you want to skip the build and get competitive intelligence right now, Marketing Research & Competitive Analysis handles exactly this workflow out of the box. Drop in a competitor and get a structured SWOT, positioning gaps, and audience breakdown in a single conversation.
SEO
Content brief generator: This turns a keyword into a structured brief covering audience, search intent, recommended outline, and competitor content gaps. It replaces 30-45 minutes of manual brief writing per piece. At 20 briefs per month, that’s 10 to 15 hours returned to your team.
Semrush integration opportunity: Build the brief template around Semrush’s SEO Content Template output. The GPT populates the strategic rationale, Semrush provides the keyword and competitive data.
Technical SEO audit assistant: Paste a page’s content and meta information. Receive a prioritized fix list with title tag rewrites, internal link suggestions, and schema recommendations formatted exactly the way your team tracks them.
Semrush integration opportunity: Pull the audit inputs directly from Semrush’s Site Audit exports.
If you’re already using ChatGPT for SEO work, our collection of SEO prompts for ChatGPT is a good starting point for building the system instructions for either of these GPTs.
Sales
Prospect research brief: Input a company name. Receive a pre-call brief with recent company news, likely buying signals based on firmographic patterns, and tailored talk tracks for the likely objections.
A sales rep I worked with spent 20 minutes per prospect doing this manually before every cold call. The GPT produces the equivalent brief in 90 seconds. That means he spends his actual working hours on the only part that earns commission: the call itself.
Win/loss analyzer: Upload anonymized CRM deal notes. Surface patterns in why deals close or fall apart: which objection categories are fatal, which talk tracks correlate with wins, where in the funnel deals die.
Customer support
Ticket response drafter: Paste a customer ticket. Receive an on-brand draft response using your de-escalation framework. Rep reviews and sends in three minutes instead of 12. At 30 tickets per day, that’s 2.5 hours returned to a support rep’s day.
Policy Q&A bot: Upload your HR handbook or policy documentation. This will answer common employee questions instantly, reducing the repetitive Slack messages that eat 30-60 minutes from HR and ops leads per week.
Operations
OKR reviewer: Paste a team’s OKRs and get scores and rewrites. Are the objectives inspiring? Are key results actually measurable? Enforces rigor at scale without requiring a senior leader to manually review every team’s draft.
Meeting structurer: Input a topic and attendee list. Output a tight agenda with pre-reads, decision points, and follow-up templates. For organizations where meeting bloat is a recognized problem, this one tends to spread fast.
How to prevent your GPT from making things up
Hallucination (the model generating confident-sounding incorrect information) is the single most-cited concern from teams considering custom GPTs. It’s a manageable risk if you build correctly.
Add an explicit guardrail sentence in your instructions. Something like: “If you do not know the answer from the provided knowledge files, say so directly. Do not invent information. Direct the user to [specific resource] instead.” Simple. Effective. Dramatically reduces the instinct to fill gaps with plausible-sounding fabrication.
Disable Web Browsing when accuracy matters. A web-enabled GPT will pull and confidently present outdated, incorrect, or hallucinated source material. If your GPT’s value depends on accuracy, including policy Q&A, compliance guidance, and product specs, turn off Web Browsing entirely and rely only on the knowledge files you’ve curated and can verify.
Test for it systematically before launch. Ask your GPT questions you already know the answers to. Ask it something outside its defined scope. Ask an edge-case question that isn’t covered by your knowledge files. If it confidently fabricates rather than saying “I don’t know,” fix the instructions before anyone else encounters it.
The tighter the scope, the lower the hallucination risk. This is another reason the one-job rule isn’t just about UX. It’s about accuracy. A GPT that knows it’s only supposed to answer questions about your return policy has far less surface area to go off-script than one configured as a general business assistant.
How to launch so your team actually adopts it
Building the GPT is half the job. The failure mode most teams hit isn’t a bad build. It’s a bad launch. A GPT nobody can find is a GPT nobody uses.
Phase 1: Build
Define your one-sentence purpose. Write layered instructions with examples. Upload focused knowledge files. Configure one API action maximum for V1. Resist the urge to expand scope.
Phase 2: Test
Create five to 10 golden test questions. Run a pilot with three to five real users. Don’t send them a link and walk away. Watch them use it, note where they stall, and iterate two to three rounds before wider release. The feedback from watching someone use your GPT for the first time is worth more than any amount of solo testing.
Phase 3: Launch
Write your GPT store or sharing copy around the outcome, not the technology. “Save 45 minutes on every content brief” outperforms “an AI-powered SEO assistant.” Add four conversation starters that showcase different use cases immediately. Users who see specific options to click engage at a significantly higher rate than those staring at a blank input field with no idea where to start.
Phase 4: Promote
Record a two-minute Loom showing a before/after on the specific task the GPT replaces. Share through your team Slack with that before/after story, not a feature list. Create a one-page “prompt pack” with the 10 highest-value starting prompts for your GPT.
The discoverability principle: Pin your GPT in the team Slack channel. Add it to onboarding docs. Demo it at the next all-hands. If someone can’t find it and understand what it does in five seconds, they won’t come back after the first session.
Measuring what actually matters
Tracking total conversations is the floor, not the ceiling. Here’s what actually tells you whether your GPT is working:
Return rate: once is curiosity, twice is value, weekly is a habit. Target: 50%+ returning after first use.
Conversation depth: turns per session (longer means higher utility). Target: 4+ turns on average for complex tasks.
Time saved per use: survey users or compare task completion times. Target: 30-70% reduction vs. manual.
Team adoption rate: percentage of target users engaging weekly. Target: 60%+ within 30 days for internal GPTs.
Downstream action rate: are users taking the next step you wanted? Target: defined per use case.
The ROI one-pager: Hours saved per use × uses per week × ~4.3 weeks per month × team size × average hourly cost = monthly dollar value. Build this at the 30-day mark. It’s the most powerful artifact you have for justifying continued investment, or making the case for the next GPT.
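With made-up inputs, the calculation looks like this (a minimal sketch; the 4.33 factor just converts weeks to months):

```python
def monthly_roi_dollars(hours_saved_per_use: float, uses_per_week: float,
                        team_size: int, hourly_cost: float,
                        weeks_per_month: float = 4.33) -> float:
    # Hours saved per use x uses per week x weeks per month x team size x hourly cost.
    return hours_saved_per_use * uses_per_week * weeks_per_month * team_size * hourly_cost

# Hypothetical example: a brief generator saving 0.5 hours per use, used 5x per week,
# across a team of 4, at a blended cost of $60/hour.
print(f"${monthly_roi_dollars(0.5, 5, 4, 60):,.0f} per month")  # roughly $2,600
```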
Where most B2B teams are right now
Organizations fall into one of five stages:
Exploring: Team members use ChatGPT ad hoc. No shared GPTs exist.
Experimenting: One or two people have built a custom GPT. Usage is informal and person-dependent.
Standardizing: Three to five GPTs are deployed with proper instructions, knowledge files, and evaluation criteria. This is where shared value starts to compound.
Scaling: GPTs are integrated into defined workflows across departments. Usage is tracked. Iteration is systematic.
GPT-Native: GPTs are the default starting point for designing new workflows, not an afterthought.
Most B2B teams are at Level 1 or 2. The biggest ROI jump happens between Level 2 and Level 3. That’s the moment GPTs stop being personal productivity experiments and start becoming team infrastructure.
A custom GPT is a workflow infrastructure decision. It compounds over time when scoped correctly, and quietly disappears when it isn’t.
The teams getting real ROI from them aren’t building the most technically sophisticated versions. They’re building focused ones: scoped to one job, launched with enough intentionality that their team can actually find and use them, and iterated based on real usage data, not assumptions.
Start with the task your team complains about most. Score it against the framework. If it hits 12 or above, you have your answer.
Build it this week. Run it for 30 days. That’s when it gets interesting.
Ready to build your GPT? Start with a blueprint
The GPT Blueprint Generator on Thinklet walks you through the validation framework above, generates a custom system prompt for your specific use case, and outputs a ready-to-paste knowledge file, all in one session. It’s built specifically as the hands-on companion to this guide.
Or, if you want to see what a well-built GPT feels like before you commit to building one, start here:
There’s no such thing as “too much information” in AI search. The more detail you provide, the less likely your business is to be replaced by third-party sources — or left out entirely.
With the rise of AI search, we know users want answers, and they want them fast. Google Maps has Know before you go and Ask Maps about this place (not to be confused with Ask Maps, the new conversational “AI Mode” in Google Maps), both AI features that let users easily find information about a place without visiting their website or social media.
Merchant Center added a new feature, Business Agent, that allows shoppers to chat with brands. Business Agent pulls from the business’s product information and website to answer users’ questions.
The best way sites can prepare for the continued rollout of features like this is to ensure FAQ content based on customer research (not just standard SEO research) is top of mind.
Why FAQs power answers in Google’s AI features
Ask Maps about this place offers preloaded questions and lets users ask their own. If it can’t answer, it responds, “There’s not enough information about this place to answer your question, but you can try asking another question.”
It’s a basic Q&A feature right now, but we can reasonably expect this to become more conversational in the future. With the Q&A feature being deprecated on GBPs, this is the replacement. If there isn’t information available for the AI to pull from, you’re leaving users in the dark.
This doesn’t mean you should have Q&As on every page or grab every People Also Ask question from an SEO tool and use it as-is. It’s not very strategic, and those questions likely just reflect search volume.
So what about the questions that don’t have national search volume? Or the questions that are highly specific to a region or location and their considerations? Think Victorian homes or specific city insurance laws.
To craft an FAQ strategy that can provide helpful information to both AI features and people, you’ll need to do two things:
Think outside the box of regular FAQs you’ll see across all businesses and SEO tools.
Be consistent in how you answer these questions across platforms (website, social media, and third-party review sites like Yelp).
Most businesses write FAQs based on whatever a tool tells them customers want to know (which is usually based on national, not local, data). The best way to get started is by re-evaluating your FAQ content.
Where does it live? How many places are FAQs answered? Consider all the places your audience is and where they’re likely to ask questions or engage with your content.
You should also open up Google Maps and check whether there’s an Ask Maps about this place feature on your own or your competitors’ GBPs. Take note of the questions Ask Maps about this place recommends, and write down any that remain unanswered.
You can work with the client’s social media team to ask which questions they receive most frequently. Social media managers will have the most insight into the types of questions they’ve answered in comments or DMs. If you can work with them and get this information, do it.
You can also just visit the client’s social media accounts and review their content. You’ll want to look for direct questions people are asking in the comments, and also think about the types of questions people might ask based on the content being posted.
NakedMD is a medspa chain across the U.S. that regularly posts content on TikTok. They posted a before-and-after video for lip injections.
One of the comments is someone asking if they also offer dissolving services, and if you visit their site and search for “dissolver,” nothing pops up. They also didn’t respond to the comment, but based on watching other people’s TikToks about their experiences at NakedMD, they can dissolve filler.
Unfortunately, I only found out they dissolve filler from a negative TikTok review of their services. This is an opportunity to make sure they create content about this on the website and social media. It will allow NakedMD to control the narrative about dissolving filler vs. letting potential customers know they’ve only done it when clients were unhappy with the results.
Another example of FAQ content from social media is posts that could leave users confused or make them want to know more. This TikTok asked staff to choose Xeomin or Dysport — that’s it. All the staff members chose Xeomin, but there wasn’t any follow-up on why. Content like this provides another opportunity to ensure these follow-up questions are answered.
Start with the client’s social media accounts to find FAQ opportunities. Also, check out competitor social media accounts and general Reddit posts about your client’s products or services.
Call transcripts and reviews are your direct line into how customers feel about a client:
With transcripts, you’ll be able to read and hear the questions customers are asking.
With reviews, you get to read exactly what the people who feel strongly about your clients’ services or products think.
Both of these datasets offer insights into customers’ pain points and priorities. Use both the strengths and weaknesses identified from the transcripts and reviews to create FAQ content.
Let’s say you’ve noticed reviewers mention the words “emergency,” “middle of the night,” and “Sunday” often. Customers are happy that a home service provider is available for their emergencies, no matter the day or time. Make sure the site’s content aligns with what users are saying. Maybe it’s including “24/7 emergency service, 7 days a week” as an H2 on the homepage, and using it as a selling point on service pages. If there was ever any question about your client’s service hours, having it mentioned on pages is an implicit way of answering that.
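A few lines of Python are enough to surface which of those phrases keep showing up. The reviews and phrase list below are placeholders; in practice, you’d load an export from your review platform or call-transcription tool.

```python
from collections import Counter

# Hypothetical exported reviews; load these from your review platform or CRM export.
reviews = [
    "They came out at 2am on a Sunday for our emergency leak.",
    "Emergency call in the middle of the night, fixed within an hour.",
    "Great price, and they showed up on a Sunday morning no problem.",
]

# Phrases you suspect matter, based on skimming the reviews first.
phrases = ["emergency", "middle of the night", "sunday", "price", "warranty"]

counts = Counter()
for review in reviews:
    text = review.lower()
    for phrase in phrases:
        if phrase in text:
            counts[phrase] += 1

for phrase, count in counts.most_common():
    print(f"{phrase}: mentioned in {count} of {len(reviews)} reviews")
```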
While that’s a simple example, it’s still an easy way to think about how you can use this data to answer potential questions without having to write in literal FAQ format.
Google is pulling from your on-site content to feed AI-driven answers. While the FAQ format may be best for some questions, it isn’t the only format that will work.
Consistency across platforms
While reviewing existing FAQs, ensure consistency across platforms. If a client is answering a question one way on the website and another way on Yelp, how can someone tell what the real answer is? Inconsistent answers confuse people and LLMs.
As Jason Barnard recently wrote, AI platforms generate responses by sampling from a probability distribution that is influenced by the model’s knowledge, its confidence in that knowledge, and the information retrieved at the time of the query.
When an AI system encounters the same information across multiple trusted sources, it becomes more confident in it. On the flip side, if it finds conflicting information or only discovers the answer in one location, its confidence diminishes.
Make sure to include an FAQ review process in your workflow. Regularly audit and flag information related to hours, pricing ranges, availability, and service offerings for frequent review. These areas tend to change the most rapidly, and having outdated information can significantly harm customer trust.
While having an FAQ strategy in place isn’t anything new, the importance of it and the approach have shifted. The rise of AI features like Ask Maps about this place has placed a stronger emphasis on structured, consistent, and explicit service, product, and pricing information.
Review FAQs wherever they may exist and audit for consistency across all digital touchpoints. This will help you prepare for the changes coming to Google Maps and Google Business Profile overall.
AI search often fails to identify which Spanish-speaking market it’s serving. Instead, it blends regional terminology, legal frameworks, and commercial context into a single response, creating answers that don’t map to any real market.
The result is answers that mix multiple countries into something no user can actually use. This is the “Global Spanish” problem.
How AI turns ‘correct’ Spanish into useless answers
Ask a chatbot in Spanish how to file your taxes — cómo puedo declarar impuestos — and watch what happens.
The response is grammatically perfect, well structured, and seemingly helpful. Then, in a single bullet point, it casually lists “RFC, NIF, SSN, según país” — Mexico’s tax ID, Spain’s tax ID, and America’s Social Security Number — as if they were interchangeable items on a shopping list.
(Screenshot: chatbot response to “cómo puedo declarar impuestos” listing RFC, NIF, and SSN in a single answer.)
To be fair, it’s improving — early models would confidently give you Mexico’s SAT filing process when you were sitting in Madrid, no disclaimer attached. Now they hedge. But hedging by dumping three countries’ tax systems into a single bullet point isn’t localization. It’s surrender dressed up as thoroughness.
The model still can’t determine which Spanish-speaking market it’s talking to, so it defaults to a vague, one-size-fits-none answer that serves no user well. It’s the AI equivalent of a waiter asking a table of 20 people, “What will you all be having?” and writing down “Food.”
If your AI answers a Mexican user with Spain’s tax logic, you don’t have a translation problem. You have a geo- and jurisdiction-inference problem. And in AI-mediated search, that inference is now the foundation on which everything else sits.
Traditional search had these same issues. Google has spent years building systems to handle regional intent, geotargeting, and language variants — and still doesn’t get it right every time.
The difference is that generative AI removes the safety net. Instead of 10 blue links where users can self-correct, you get one synthesized answer. And that answer either lands in the right country or it doesn’t.
Spanish isn’t one market, it’s 20+ — and ‘neutral’ is not neutral
Most Americans hear “Spanish” and imagine a language toggle. Hispanic markets don’t work like that.
Spain and Latin America don’t just differ in slang. They’re distinct in what decides whether a page converts, whether a brand is trusted, and whether an answer is even legally usable.
For example, there are clear differences in the following:
Regulators (Hacienda vs. SAT).
Legal terms (NIF vs. RFC).
Currencies (EUR vs. MXN).
Formatting (period vs. comma decimals).
Tone and social distance (tú/vosotros vs. usted/ustedes — get it wrong and you’re instantly an outsider).
Search intent (the same query can map to different products or categories, depending on the country).
Every international SEO knows these differences matter — they affect everything from indexing to conversion. In generative search, they become decisive.
The model doesn’t show 10 blue links and let the user decide. It collapses the SERP into a single synthesized answer and chooses what counts as authoritative. If your context signals are ambiguous, the model improvises. That’s where “Global Spanish” is born.
Linguists have a name for this: “Digital Linguistic Bias” (Sesgo Lingüístico Digital), documented by Muñoz-Basols, Palomares Marín, and Moreno Fernández in Lengua y Sociedad.
Their research shows how the uneven distribution of Spanish varieties in training corpora produces chatbot responses that ignore specific dialectal varieties and sociocultural contexts. The bias is structural — baked into the training data itself.
Spain represents a minority of the world’s Spanish speakers, yet it’s often overrepresented in the digital corpora and institutional sources that shape what models “see” as default Spanish.
Meanwhile, many Latin American markets remain comparatively underrepresented in AI investment and data infrastructure. Latin America received only 1.12% of global AI investment despite contributing 6.6% of global GDP.
The result is predictable: The model’s most confident Spanish tends to sound geographically specific — even when the user didn’t ask for that geography. LLM models are trained on whatever web data is most available, and that data skews heavily toward certain geographies.
In practice, this means a well-written product page from a Mexican SaaS company competes for model attention against decades of accumulated Peninsular Spanish web content and often loses.
Marketers created “neutral Spanish” as an efficiency shortcut, and LLMs treat it as a standard — one that breaks down at scale.
How LLMs break Spanish: 3 failure modes that matter for SEO
The cultural blind spots cluster into three predictable failure modes, each with direct consequences for search performance, trust, and conversion.
1. Dialect defaulting: The most visible failure
When an LLM generates Spanish, it gravitates toward a default variant — usually Mexican for vocabulary, sometimes Peninsular for grammar. It doesn’t announce the choice. It just picks one and presents it as “Spanish.”
Will Saborio demonstrated this concretely in 2023. Testing GPT-3.5 and GPT-4 with regionally variable vocabulary — “straw” can be pajilla, popote, pitillo, or bombilla depending on the country — ChatGPT consistently defaulted to the most globally popular translation, typically Mexican Spanish.
Even after explicit context-setting prompts (asking for Colombian recipes first), the model couldn’t be reliably localized.
A study evaluating nine LLMs across seven Spanish varieties confirmed the pattern at scale: Peninsular Spanish was the variant best identified by all models, while other varieties were frequently misclassified or collapsed into a generic register. GPT-4o was the only model capable of recognizing Spanish variability with reasonable consistency.
But dialect defaulting goes far beyond pronoun mismatch. It’s vocabulary (coche/carro/auto), product categorization (zapatillas/tenis), idiomatic expressions, formality register, and the cultural assumptions embedded in every sentence.
A product page that sounds like it was written for Spain signals to a Mexican user that the content wasn’t made for their market. In AI discovery, those signals compound. The model learns to associate your content with “outsider” markers and may select other sources for the answer.
(A nuance worth noting: This isn’t always binary. A Mexican luxury brand might deliberately use tú in certain contexts. The point isn’t rigid rules — it’s that the model should make intentional choices, not default ones.)
(Diagram: the dialect defaulting problem — one word maps to five different terms across Spain, Mexico, Argentina, Colombia, and Chile, while LLMs default to a single variant.)
2. Format contamination: The silent conversion killer
This one is invisible and arguably more dangerous. It’s not about words, it’s about numbers.
A documented issue in the Unicode ICU4X ecosystem illustrates the problem: Mexican Spanish (es-MX) uses a period as decimal separator (1,234.56), but if a system lacks specific es-MX locale data and falls back to generic “es,” it applies European formatting (1.234,56).
The number 1.250 could mean one thousand two hundred fifty or one-point-two-five-zero, depending on which locale the system defaults to.
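You can see the gap concretely with the open-source Babel library (an assumption on tooling; any CLDR-backed formatter behaves similarly). The amounts are arbitrary.

```python
from babel.numbers import format_decimal, format_currency

amount = 1234.56

# Same number, two Spanish locales, two very different strings.
print(format_decimal(amount, locale="es_MX"))          # 1,234.56
print(format_decimal(amount, locale="es_ES"))          # 1.234,56

# Currency formatting diverges the same way.
print(format_currency(49.99, "MXN", locale="es_MX"))   # $49.99
print(format_currency(49.99, "EUR", locale="es_ES"))   # 49,99 €
```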
If you’ve ever shipped a pricing page with the wrong currency symbol, you know the damage. (I have. It was a Black Friday landing page showing €49,99 to Mexican users who expected $49.99. Support tickets spiked before anyone in the office noticed.)
Now multiply that by AI summaries and assistants. The wrong market default propagates into product answers, generative search snippets, customer support scripts, and “recommended pricing” explanations.
3. Legal and regulatory hallucination: Where it gets dangerous
This is where “Global Spanish” becomes genuinely harmful. If you’re producing content in regulated verticals (i.e., finance, health, legal, insurance), it’s the kind of error that erodes the E-E-A-T signals that Google relies on.
Spain operates under the EU’s GDPR and its national LOPDGDD. Argentina has its Habeas Data law. Colombia has its own framework. Chile is updating its personal data legislation.
Mexico has its own federal privacy law, and as of March 2025, functions previously handled by the INAI have been transferred to the Secretaría Anticorrupción y Buen Gobierno.
An LLM that treats “Spanish-speaking” as a single legal context might answer a privacy question from Madrid by citing Mexican regulators, or advise a Colombian business on using Spanish consumer protection law. The output reads confidently, but it’s legally fictional.
In YMYL verticals, this creates legal risk and may result in your content being excluded from AI-generated answers.
Geo-identification failures: When AI gets the country wrong, it gets the Spanish wrong
International SEO used to be a routing problem: Make sure Google shows the right URL. In AI-mediated discovery, the failure shifts upstream. If the system misidentifies geography, it retrieves the wrong market context. “Spanish” then becomes a coin toss between Spain’s defaults and Latin America’s realities.
Motoko Hunt describes it as “geo-drift” — when a global page replaces a region-specific page in AI-generated answers. AI systems treat language as a proxy for geography, so a Spanish query could represent Mexico, Colombia, or Spain, and without explicit signals, the model lumps them together.
Hunt introduced the concept of “geo-legibility” — making your content’s geographic boundaries interpretable during traditional indexing and AI synthesis.
Her critical finding, echoed by practitioners across the industry: hreflang — already one of the most complex and fragile signals in traditional SEO, where it was always advisory rather than deterministic — appears even less influential in AI synthesis.
LLMs don’t actively interpret hreflang during response generation. They ground responses based on semantic relevance and authority signals.
Language match without market match
One example from her analysis makes the Spanish problem concrete. International SEO consultant Blas Giffuni typed “proveedores de químicos industriales” (industrial chemical suppliers) into a generative search engine.
Rather than surfacing Mexican suppliers, it presented a translated list from the U.S. — companies that either didn’t operate in Mexico or didn’t meet local safety and business requirements. The AI performed the linguistic task (translating) while completely failing the informational task (finding relevant local suppliers). That’s geo-drift in action: language match without market match.
The scale of the problem
Even within a single country, 78% of U.S. markets receive the same AI-generated recommendation list, regardless of local economic context, per Daniel Martin’s analysis of 773 queries across 50 markets.
If this cookie-cutter pattern exists within English across U.S. cities, imagine the scale across 20+ Spanish-speaking countries with distinct legal systems, currencies, and cultural norms.
Semantic collapse: When localized versions disappear
Gianluca Fiorelli calls the endgame “semantic collapse” — the point where localized content versions become indistinguishable to AI retrieval systems, and the strongest version (usually English or U.S.-centric) absorbs the rest.
His framework maps three ways this plays out:
The AI retrieves from the wrong market.
It translates U.S. content into Spanish rather than using native sources.
It serves legal advice from one jurisdiction in another.
All three are happening in Hispanic markets right now.
The concept resonates beyond SEO. A NeurIPS presentation, “Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond),” documents a broader pattern of output homogeneity: open-ended LLM responses are collapsing into the same narrow set of answers across major models — different labs, different training pipelines, same outputs.
If output diversity is shrinking globally, the prospects for preserving regional diversity in Spanish-language answers are sobering.
Why this matters now
These problems existed before AI Overviews. But the expansion of AI-generated search to Spanish-speaking markets is amplifying them at scale.
Google’s AI Overviews have expanded to Spain, Mexico, and multiple Latin American countries. The same Spanish-language AI summary can be served across geographies. If it was generated from “generic Spanish” content, it can carry dialect assumptions, formatting conventions, and regulatory references that are incorrect for the user receiving it.
The crawl gap
Log file analysis by Pieter Serraris revealed a compounding factor: OpenAI’s indexing bots visit English-language pages significantly more frequently than non-English variants on multilingual sites.
Even when a site has properly localized Spanish content, the AI training pipeline may be systematically undersampling it, reinforcing the English-centric bias at the data ingestion level.
The tokenization tax
The Spanish word desarrollador requires four tokens while the English word “developer” needs just one, according to analysis by Sngular. A typical technical paragraph in Spanish consumes roughly 59% more tokens than the same content in English — higher API costs, reduced context windows, and degraded output quality.
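If you want to sanity-check this against your own content, token counts are easy to measure directly. Here’s a minimal sketch using OpenAI’s tiktoken tokenizer; the example sentences are my own, and exact counts vary by tokenizer and model, so treat the printed overhead as an estimate rather than Sngular’s figure.

```python
# Minimal sketch: compare token counts for parallel English/Spanish text.
# Requires tiktoken (pip install tiktoken); counts depend on the tokenizer chosen.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several recent OpenAI models

pairs = [
    ("developer", "desarrollador"),
    ("The developer deployed the new feature to production.",
     "El desarrollador desplegó la nueva funcionalidad a producción."),
]

for english, spanish in pairs:
    en_tokens = len(enc.encode(english))
    es_tokens = len(enc.encode(spanish))
    overhead = (es_tokens / en_tokens - 1) * 100
    print(f"EN {en_tokens:>2} tokens | ES {es_tokens:>2} tokens | +{overhead:.0f}% overhead")
```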
That systemic cost on non-English content compounds across every interaction, creating an economic bias.
The self-reinforcing loop
The combined effect is predictable and vicious — the most-resourced market version (typically U.S. English) accumulates the strongest authority signals, gets retrieved more often, and progressively absorbs the localized versions. Spanish pages receive fewer retrieval opportunities, weaker engagement signals, and eventually become invisible to the AI.
The SEO shift: From ranking pages to shaping entity perception
We’ve entered a visibility model where being retrievable isn’t the same as being selected.
In generative search, what matters is whether the system sees you as authoritative for that context. The margin for error has collapsed. You’re competing to be included in a single synthesized answer.
A single Spanish site often underperforms because it doesn’t clearly signal a specific market. Generic Spanish signals low confidence, and models avoid it.
The next step is making that context explicit — so it’s clear where your content belongs.
Heidi Sturrock, a paid search consultant with 24 years of industry experience, joined me on a recent episode of PPC Live The Podcast. The episode covers a broad match mistake with an unexpected silver lining, and Heidi’s experience testing AI Max across 50+ accounts.
The broad match mistake — and the unexpected silver lining
Early in her career, Heidi ran a competitor conquest campaign for a high-spending B2B SaaS client using broad match — without adding negative keywords — and launched it on a Friday with a large daily budget. Over the weekend, the client’s call centre was flooded with angry calls from the competitor’s customers looking for refunds and tech support.
When Heidi called the client to own up, he surprised her by seeing it as an opportunity — training his sales team to handle the calls as soft pitches, offering switchers a 50% discount on their first month. The campaign was then split into two — one targeting disgruntled competitor customers, one for general competitor prospecting — giving better control over spend and intent.
The lessons: Don’t launch on a Friday, and know your stakeholders
Two clear lessons emerged from the story. First, never launch significant campaigns or budget changes on a Friday — the algorithm needs monitoring during its learning period and mistakes can compound unnoticed over a weekend. Second, always include all key stakeholders in client meetings.
Having both the entrepreneur and head of sales in the room meant everyone knew who to contact when things went wrong — and the entrepreneur’s visionary thinking turned a crisis into an opportunity.
Advice for when you’ve made a mistake
When something goes wrong, the first step is to stop the bleeding immediately — pause whatever is causing the problem rather than waiting for the algorithm to self-correct. Then call the client directly, own the mistake fully without deflecting blame, explain clearly why it happened, and come prepared with a solution and next steps.
Handling a mistake with honesty and accountability can actually build client trust rather than destroy it.
Common account mistakes that drive Heidi mad
Two mistakes come up repeatedly in audits. The first is attribution windows that don’t reflect the actual sales cycle — particularly for high-ticket or long-consideration products, where a short window starves the algorithm of conversion data and creates a cycle of frustration between client and agency. The second is fixating on secondary KPIs like CPC or CTR at the expense of the agreed primary goal.
If a campaign is hitting its ROAS target, a rising CPC is not necessarily a problem — the algorithm may simply be entering higher-intent auctions, and ten converting high-CPC clicks are often worth more than hundreds of cheap ones that don’t.
AI Max for search — what’s actually working
Heidi has tested AI Max across more than 50 accounts, with around two thirds seeing strong results and one third underperforming — typically due to insufficient historical data, conversion volume, or poorly defined targets. Her advice is to run it as an experiment first rather than switching everything over at once, and to treat the setup carefully — giving the algorithm the right first-party data, sensible targets, and constraints like landing page exclusions where needed. A step-by-step guide is coming to her blog soon.
One big takeaway
Don’t fight the changes coming to the industry — embrace them. The AI-powered features in Google Ads are genuinely powerful when set up correctly, and the marketers who take the time to master the new rules will be the ones who come out ahead.
Where to find Heidi
Heidi is active on LinkedIn and offers free guides at HeidiSturrock.com, including a free prompt for writing high-performing ad copy with LLMs. She’ll also be speaking at SMX Advanced in Boston in June as part of the fully audience-driven Ask the Experts session: no scripts, no preset talking points, just the conversations that matter most, driven by you.
Advertisers can now generate short videos directly inside Google Ads using Veo, Google’s most advanced generative video model — no video production required.
How it works. Upload up to three static images into Asset Studio and Veo generates videos up to 10 seconds long with natural motion, designed specifically for YouTube formats and audiences. These can then be turned into ready-to-serve ads using customisable templates.
What else it can do. Combined with Nano Banana, advertisers can adapt creatives further — swapping backgrounds, adjusting messaging, and tailoring content to specific audience interests.
The bigger picture. This follows Google’s earlier rollout of video templates and automatic video creation in Demand Gen campaigns, and represents the next step in Google’s push to make video creative accessible to advertisers of all sizes without dedicated production resources.
Why we care. Video consistently outperforms static creative on YouTube — but producing it has always required time, budget, and expertise. Veo removes most of that barrier, letting advertisers turn existing product images into polished video ads in minutes. For teams running image-heavy campaigns who have been unable to compete in video placements, this changes the equation significantly.
Early testing. Hop Skip Media founder Ameet Khabra shared early results of her testing on LinkedIn, including a video she created. Her verdict:
“Consumer product brands with clean imagery and inherent motion logic will get the most out of this.”
The bottom line. As Google continues building AI creative tools directly into the ads platform, the gap between advertisers with production budgets and those without narrows. For anyone who struggles to get video production budget approved and has assets with inherent motion logic, now could be the best time to test AI-generated video in Google Ads.
Google is testing AI-generated summaries in YouTube feeds, replacing video titles with auto-written synopses.
Some YouTube users are seeing video titles replaced by AI-generated summaries in the Android app. Reports on Reddit showed title-less video cards with collapsible summary boxes instead.
The details. Video thumbnails remain, but titles are missing in some cases.
AI summaries appear in expandable text boxes beneath each video.
Users must tap to expand summaries to understand the content.
The test appears limited to YouTube on Android.
What it looks like. Reddit user GrimmOConnor shared a screenshot of the title-less layout.
Why we care. This further abstracts creator metadata and reduces control over how your YouTube content appears. Titles remain a critical ranking and click-through signal. Replacing them with AI summaries can impact keyword targeting, brand voice, and intent matching — and increase the risk of inaccuracies that hurt performance.
The bigger picture. Google has separately confirmed a “small” and “narrow” experiment replacing original page titles with AI-generated versions in Search results.
According to Google, the goal is to better match queries and improve engagement.
But examples showed Google shortening or rewording headlines, changing tone and meaning.
Reaction. Early feedback suggests a worse browsing experience. Expanding summaries slows discovery and adds friction to content selection, which runs counter to YouTube’s engagement goals.
What’s next. There’s no official confirmation from YouTube on a broader rollout. The missing titles may be a bug, but the AI summary feature aligns with Google’s broader push into generative AI.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
Job Description Salary: $24 – $28 per hour Now Hiring: SEO Expert McCarthy Auto Group (Full-Time, On-Site) We’re looking for a driven and knowledgeable SEO Specialist to join our marketing team and help drive growth across 9 automotive brands and 4 collision centers. This is a full-time, on-site position, perfect for someone who thrives on optimizing […]
Benefits: Competitive salary Health insurance Opportunity for advancement Paid time off Training & development Local SEO Specialist / Product Team Specialist Company: Direct Clicks Inc. Job Type: Full-Time or Hourly Based on Experience Location: Remote Candidates must be located within driving distance of Roseville, Minnesota for monthly in-person team meetups. About Direct Clicks Inc. Direct […]
Who We Are: Reputation Management Consultants (RMC) is a reputation management firm used by high-profile individuals, professionals, executives, politicians, leaders, celebrities, individuals, and companies of all sizes, including Fortune 1000 companies and governments. Our performance-driven team consists of former journalists, communicators, marketers, creative experts and more. The Role: A Reputation Management Team Leader & Lead […]
Job Description Work with a team that’s transforming the way local brands grow. As an SEO Specialist at Nexvel, you’ll play a critical role in driving real results for our clients by crafting and executing innovative SEO strategies. We don’t just optimize websites—we develop cutting-edge digital strategies that help brands get found, generate leads, and […]
About the Role Reporting to the Performance Marketing Lead, the Executive Director of SEO/AEO will oversee TrendyMinds’ SEO/AEO service line with support from the previous director, who is transitioning to a new role within the agency. This is a leadership position for a seasoned SEO professional who is ready to take ownership of the service […]
Job Description Salary: $21.27-$25.85 hourly The Marketing Specialist Acquisition will play a key role in overseeing and managing Syntrio’s digital marketing efforts to drive awareness, engagement, and conversions. This role is responsible for overseeing and executing integrated campaigns across several online channels. Key Responsibilities: Implement, monitor, and improve PPC campaigns (Google, Bing, etc.) Plan and […]
Why USA Clinics Group? Founded by Harvard-trained physicians with a vision of offering patient-first care beyond the hospital settings, we’ve grown into the nation’s largest network of outpatient vein, fibroid, vascular, and prostate centers, with 170+ clinics across the country. Our mission is simple: deliver life-changing, minimally invasive care, close to home. We’re building a […]
Description Lightburn is hiring a Search Optimization Manager to drive the research, strategy, and execution of SEO and emerging search optimization needs. This role combines deep analytical research with hands-on optimization to improve visibility, discoverability, and performance for a wide range of clients. You’ll own SEO and GEO strategy for clients by conducting competitive analysis, […]
Job Description The intern supports strategic projects within the Organic Growth and Digital Strategy teams. These projects include tracking brand visibility across Large Language Models (LLMs), assisting with AI-driven search analysis, documenting organic discovery trends, and supporting brand authority research across search engines and social platforms. This opportunity provides a unique, “behind-the-scenes” view of how […]
Benefits: Competitive salary Health insurance Opportunity for advancement Paid time off Training & development Digital Marketing Specialist (SEO Focus) Company: Direct Clicks Inc. Job Type: Full-Time or Hourly Based on Experience Location: Remote Candidates must be located within driving distance of Roseville, Minnesota for occasional in-person team meetups. About Direct Clicks Inc. Direct Clicks Inc. […]
Job Description Title: Paid Media Manager Company: Democratic Attorneys General Association, Inc. Location: Washington, DC Reports To: Deputy Director of Digital Fundraising Salary Range: $72,000-$80,000 Purpose The Democratic Attorneys General Association (DAGA) seeks to hire a Paid Media Manager to execute DAGA’s digital advertising program. Key Responsibilities Partner with the Deputy Director of Digital Fundraising […]
To drive measurable lead generation and campaign efficiency through data-driven strategy, execution, and optimization of paid search campaigns across Google Ads and other performance platforms. Overview Intermountain Home Services is seeking a Paid Search Analyst to manage and optimize paid search campaigns across our portfolio of residential service brands. In this hands-on role, you’ll execute […]
About Levin & Nalbandyan LLP Levin & Nalbandyan, LLP is a prominent Los Angeles law firm that is raising the bar on what it means to be trial lawyers. As trend setters in the legal space, we pride ourselves on delivering exceptional legal services while fostering a collaborative and inclusive work environment. As a modern-day […]
Director, Talent Solutions | Team Lead at 24 Seven Talent We are seeking an experienced Paid Search Manager to lead and elevate paid search marketing initiatives that drive business growth and customer acquisition. In this role, you will oversee the strategy, execution, and optimization of paid search campaigns across multiple platforms, ensuring maximum return on […]
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we’ve set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects […]
Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.
We’ve all seen the charts going viral on LinkedIn. They’re everywhere at this point. Multiple industry studies, even this research from Semrush, confirm that Wikipedia and Reddit are the top-cited domains across major LLM platforms — and CMOs are running with this data.
The response is predictable: Just search for any bottom-of-funnel (BOFU) software query, and you’ll find Reddit threads in the top-ranking positions. This is exactly why the market is currently flooded with “Reddit SEO” agencies.
Just stop.
Taking this macro context — or a few isolated, high-ranking SERPs — and pivoting your entire GEO strategy toward Reddit or Wikipedia is a massive strategic error for the majority of B2B brands.
Why CMOs are misguided by the Reddit hype
The algorithmic tide is running toward massive community forums and open-source encyclopedias. That shift is real — but how it’s being interpreted isn’t.
The charts driving this executive FOMO are mathematically accurate, but they’re strategically misguided. Applying them as a universal GEO playbook ignores why that aggregate data exists and why certain pages rank for high-intent queries.
Reddit is the primary target because it’s perceived as easier to influence. While the industry respects Wikipedia’s ironclad editorial guardrails, Reddit is often viewed as an open loophole.
This is a classic case of marketing whiplash, where teams abandon foundational principles to chase the shiny new object.
To understand why Reddit and Wikipedia are a high-effort, low-upside channel for the vast majority of brands, you have to look at the context executives ignore.
“Wikipedia, Reddit, and YouTube are heavily cited by LLMs because they are massive websites with a topical footprint that spans into a million different areas.”
By default, they’ll always get the most aggregate LLM citations.
High-ranking Reddit threads on BOFU queries can’t be reproduced
When you see a Reddit thread driving CTR for a specific BOFU software query, it’s tempting to view it as an SEO loophole that can be easily reverse-engineered. This is incorrect.
In reality, this is a scenario where the “voice of the customer” largely dictates who gets recommended.
This isn’t an SEO hack or a growth trick. It’s the culmination of years of actual human peer reviews and real discussion on a topic that has reached a definitive consensus. Your marketing team can’t microwave this historical, multi-year, authentic brand sentiment.
Claiming you need a Reddit or Wikipedia strategy because they are the most-cited domains overall is like claiming spaghetti carbonara is the most-eaten dish in Italy. Yes, it’s ubiquitous and popular, but just because it’s everywhere doesn’t mean you should put it on the menu at a high-end steakhouse.
The illusion of ‘hacking’ Reddit and Wikipedia for AI visibility
Even if you ignore the macro context and decide to aggressively pursue a Reddit or Wikipedia SEO strategy, you’ll quickly realize how LLMs actually process data.
Hacking them for AI citations is an illusion built on a fundamental misunderstanding of what LLMs are looking for. When you look at the mechanics of AI citations, two massive roadblocks emerge.
Historical consensus can’t be microwaved
Thirsty SEO agencies will frequently pitch Reddit marketing services, promising to generate hundreds of upvotes and comments to trigger LLM visibility. But the data shows LLMs don’t care about manufactured virality.
Up to 80% of Reddit threads cited by AI have fewer than 20 upvotes, according to Semrush. More importantly, the average age of a cited post is roughly 900 days. LLMs are surfacing historical, established consensus, not yesterday’s growth hack.
Wikipedia editors will just delete you
The exact same brutal reality applies to Wikipedia. A Princeton University study analyzing AI-generated Wikipedia content revealed exactly what happens when marketers try to “hack” the encyclopedia with generative tools.
Researchers found that when users relied on AI to create self-promotional pages for businesses, the articles were measurably lower in quality, lacking proper footnotes and internal links.
The result?
Human moderators quickly identified the low-effort content, deleted the pages for “unambiguous advertising,” and actively banned users.
Paraphrasing destroys narrative control
Even if you successfully infiltrate a subreddit or a Wikipedia page without getting banned, you lose control over your product positioning. Benji Hyam notes that Reddit mentions are typically too short and lack the depth necessary for an LLM to associate your product with a specific problem and solution.
The Semrush data also proves this: AI tools don’t quote Reddit word-for-word. They blend and paraphrase discussions (showing a semantic similarity score of just 0.53).
Your carefully crafted value proposition will be mashed up with random, anonymous user comments, or stripped down to dry, encyclopedic neutrality, diluting your brand narrative entirely.
Posting on Reddit isn’t an SEO strategy — it’s shouting through a bus window, hoping to join the conversation. At best, it’s a short-term tactic. At worst, it actively damages your brand.
The lack of ROI is only half the problem when it comes to building a Reddit or Wikipedia presence. The much larger issue is the active harm it can inflict on your brand’s image.
Brands that treat these platforms as loopholes for AI citations fundamentally misunderstand their architecture.
As Eli Schwartz points out, trying to replicate decades of genuine human conversation with templated brand messaging isn’t just ineffective — it’s a massive reputational hazard.
Reddit communities are aggressively moderated
Subreddits and wiki pages are policed by passionate human moderators and veteran Wikipedia editors. They’ve seen every variation of corporate infiltration.
A new account dropping a link, manufacturing enthusiasm, or violating Wikipedia’s strict conflict of interest (COI) guidelines is flagged, reverted, and banned almost immediately. Sometimes, this is accompanied by a public callout (featured on subreddits like r/hailcorporate), causing more brand damage than the campaign was ever worth.
LLMs ingest deleted spam and banned accounts
This is the most critical and misunderstood risk. Reddit licenses its data directly to companies like Google and OpenAI. Wikipedia’s entire edit history is publicly available.
LLMs aren’t just scraping the public-facing websites. They’re receiving the entire firehose of data (including deleted posts, reverted wiki edits, and banned accounts). When your agency’s fake comments or promotional product descriptions get removed by moderators, those AI models still see the manipulation.
Astroturfing creates a permanent negative trust signal
Because the AI models have full visibility into the moderation pipeline, links or mentions flagged as inauthentic carry negative weight. By attempting to game the system, you’re essentially training the AI to associate the brand with spam and coordinated manipulation.
Once you accept that hacking Reddit or Wikipedia is both ineffective and dangerous, you have to look at where LLMs are actually pulling their answers from when a buyer is ready to make a purchase. When you filter for high-intent, BOFU prompts, the “Reddit/Wikipedia is everywhere” narrative falls apart.
Using AI visibility platforms like Scrunch AI exposes Reddit’s and Wikipedia’s true influence on specific target categories. For one B2B client, tracking 300+ custom prompts generated thousands of LLM responses, but just two specific Reddit threads were responsible for the vast majority of citations.
The Wikipedia data was even more revealing.
For high-intent software queries, the encyclopedia barely registered. When AI tools cited Wikipedia, they were almost exclusively scraping broad, top-of-funnel category definitions, or pulling background facts from a specific company’s history page.
Data from Grow and Convert shows the same thing. For trucking software queries, LLMs consistently cited domains like PCS Software and TruckingOffice.
For project management queries, the AI cited specialized software review sites and niche blogs.
If you’re chasing platforms simply because they cover massive topical geography, you’re making a painful error. You don’t need to be visible everywhere. You only need to be visible in the specific digital neighborhood that influences your flagship category.
How to actually earn AI recommendations: Owned content and niche citations
Winning in AI search requires optimizing for targeted influence rather than aggregate metrics. The most effective GEO strategy abandons massive topical geography and focuses entirely on the pillars you can actually control.
Publish deep, human-written owned content
Your website remains your most powerful asset. To be recommended, you must provide the specific, granular depth the AI needs to understand your value. Your key product and solution pages need to explicitly cover:
Who the product is for.
How it’s used.
The specific pain points it solves.
Its core benefits.
This depth is exactly what gives you a chance at showing up for the highly specific, long-tail queries a customer types into an AI when evaluating products.
Execute targeted citation outreach
Use AI visibility tools to identify the specific, niche domains that currently influence your flagship categories. Once you know which industry blogs, review sites, and peer publications the LLMs are actually citing for your BOFU queries, execute targeted outreach to earn your place on those exact lists.
If you want a Reddit or Wikipedia strategy, respect their ecosystems
Reddit and Wikipedia carry real authority, and earning trust there is valuable independent of AI visibility. If you choose to invest in them, it must be a long-term play, not a marketing hack.
Engage authentically on Reddit: Answer questions, provide unique insights, and participate in discussions where your buyers actually hang out. Build street cred before recommending your own tools.
Build a branded subreddit for transparency: Create an official space for your team to share expertise, host AMAs, and answer product questions openly.
Monitor conversations for product insights: Use the platform to spot emerging pain points and shifts in sentiment before they hit traditional search engines.
Leave Wikipedia to the experts: If your brand genuinely deserves a Wikipedia page, it will be created by independent editors using reliable secondary sources. Don’t try to write your own product entry.
The path to AI visibility runs through your own domain and the highly specific digital neighborhoods your buyers trust. AI engines reflect the authority you already have. If you want the algorithm to recommend your brand, then you have to do the work to actually be recommendable.
Just six weeks after launching its ad pilot, OpenAI has hit a significant milestone — and the platform is still in its early stages of rollout.
The numbers.
Over $100 million in annualized ad revenue, generated from less than 20% of eligible US free and Go tier users seeing ads daily
Around 85% of Free and Go users are eligible to see ads — meaning the current revenue represents a fraction of the platform’s eventual ad capacity
More than 600 advertisers are now on the platform
What’s coming next.
Self-serve advertiser access is on track to launch in April
Geographic expansion into Canada, Australia, and New Zealand is being explored
OpenAI has hired former Meta ad executive Dave Dugan to lead ad sales
Why we care. ChatGPT’s ad business has scaled to $100 million in annualized revenue in just six weeks — and that’s from less than 20% of eligible users seeing ads today, meaning the inventory is about to get significantly larger.
Self-serve access launching in April is the moment this becomes accessible to the broader advertiser market, not just the 600+ brands currently in the managed pilot. Getting in early, before competition drives up costs, is the same playbook that rewarded early movers in search and social advertising.
The quality picture. OpenAI says fewer than 7% of ads are rated by users as “low relevance” — a metric the company says it is actively working to improve alongside user trust.
The bigger context. Ads are a key part of OpenAI’s path to profitability ahead of an anticipated IPO. Executives have told investors the company expects to generate more than $17 billion from ChatGPT consumers in 2026 — with advertising representing a meaningful slice of revenue from its free user base.
The bottom line. $100 million in annualized revenue from less than 20% of eligible users in six weeks is a strong early signal. When self-serve access opens in April and the eligible audience expands, the numbers could scale quickly — and advertisers who have been waiting on the sidelines may soon find the platform harder to ignore.
Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.
We’re kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.
The session introduces iPullRank’s Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You’ll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it’s retrieved, surfaced, and cited.
It also emphasizes that GEO success isn’t universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.